a model was established in a previous paper , ` gravitational radiation reaction and the three body problem ' ( wardell 2002 ) , in which the motion of a binary system was studied .this binary system was subject to gravitational radiation damping and the gravitational influence of a third mass .the equations of motion for the relative motion of the binary were derived with certain approximations that would highlight the effects under investigation and also make analysis easier . for example, the motion of the three masses is taken to be planar and the center - of - mass of the binary moves in a fixed circular orbit around the third mass .furthermore , the distance of the third mass from the binary s center - of - mass is taken to be substantially larger than the size of the relative orbit of the binary .the masses are considered to be point masses . after a scaling transformation that makes the variables dimensionless ,one arrives at the following form of the equation of motion in cartesian coordinates : the variable represents the relative orbit , is the tidal matrix that retains the information about the tidal interaction between the third mass and the binary system , and is the radiation reaction term that expresses the radiation reaction force to desired order after iterative reduction . to avoid ` runaway solutions ' that can arise as the result of the radiation reaction perturbation which involves a fifth time derivative , one can apply the method of iterative reductionthis gives rise to a set of equations that is second order and resembles an equation of motion in newtonian mechanics ( chicone et al .one recovers the differential equation for the appropriate kepler problem if . numerical analysis of the system has revealed that given appropriate initial conditions , one arrives at a result which shows resonance behavior in the relative orbit .when the graph of the delaunay variable , where such that is the semimajor axis of the osculating ellipse of the relative orbit , is plotted versus time one sees that the trend of semimajor axis decay can temporarily stop on average at a resonance .an oscillation occurs around a fixed average value for for the duration of this resonance .this resonance capture is indicative of an average balance of energy that leaves the binary system by way of gravitational waves and enters the system because of the tidal gravitational influence of the third mass .this paper concerns itself with the behavior of the system when a resonance occurs ; that is , when the resonance condition is satisfied , where and are relatively prime integers and and are the angular frequencies of the relative motion and tidal perturbation respectively . 
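As a minimal illustration of the resonance condition stated above, the following sketch identifies the nearest low-order commensurability between the orbital frequency of the relative motion and a given tidal perturbation frequency. It assumes the scaled (dimensionless) units of the model, in which the unperturbed relative orbit is the standard Kepler problem so that an ellipse of semimajor axis a has angular frequency a^{-3/2}; the numerical values and the convention for which integer multiplies which frequency are illustrative only.

```python
from fractions import Fraction

def orbital_frequency(a):
    """Keplerian angular frequency of the relative orbit with semimajor
    axis a, in the scaled units in which the unperturbed problem is the
    standard Kepler problem (normalization GM = 1 assumed)."""
    return a ** (-1.5)

def nearest_resonance(omega, Omega, max_den=8):
    """Closest low-order rational approximation m/n to the frequency ratio
    omega/Omega, with m and n relatively prime.  Which frequency carries
    which integer in the resonance condition is a convention that must be
    matched to the paper; the search depth max_den is illustrative."""
    frac = Fraction(omega / Omega).limit_denominator(max_den)
    return frac.numerator, frac.denominator

# Example: an orbit close to a 3:1 commensurability with the tidal frequency
a = 0.5
omega = orbital_frequency(a)
Omega = omega / 3.02          # hypothetical tidal perturbation frequency
print(nearest_resonance(omega, Omega))   # -> (3, 1)
```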
it turns out that the tidal perturbation frequency relates to the fixed third - body frequency as , where is the frequency of the fixed third - body motion .an averaging method that was developed for the purposes of studying the effect of external incident gravitational waves on a binary system , can be applied to the equations of motion to analyze the behavior near a resonance ( chicone , mashhoon & retzloff 1997 ) .the resultant averaged system of equations retains the ` slow ' variables which are left over after the system is averaged over a ` fast ' variable .the averaged set of nonlinear equations gives rise to a solvable system , whose solution approximates the actual solution for sufficiently small perturbation parameters over a certain time scale .one arrives at the following system of equations upon converting the second - order cartesian equations into a system of first - order equations in polar coordinates : + \delta \frac{p_{r}}{r^{3 } } ( \frac{32}{r } + 24 p_{r}^{2 } + 144 \frac{p_{\theta}^{2}}{r^{2}}),\ ] ] the parameters and denote the tidal and gravitational damping perturbation strengths , respectively .one can in principle generate a solution to this system of differential equations by specifying a set of initial conditions that represent the physical state of the system .the system of equations also includes parameters , , and that characterize the strength of the perturbations and the angular frequency of the tidal perturbation .these parameters can be picked to reflect a specific physical model .in fact , a simplification has been made which , in effect , makes an independent small parameter .the original derivation reveals that the tidal perturbation amplitude is directly proportional to , which may not be small enough in many cases .a generalization is made in this model to accommodate the requirements of the averaging theorem , particularly that the perturbation amplitude be sufficiently small .a series of numerical experiments can be performed in which the system evolves under the influence of the prescribed perturbations and initial state .this process aims to find occurrences of resonance capture in phase space and parameter space of the dynamical system . to accord with the subsequent mathematical analysis ,the delaunay variable is graphed versus time in the numerical work .the variable is related to the semimajor axis of the keplerian ellipse as .therefore it is inversely proportional to the energy of the elliptical orbit as .the standard signature of the resonance capture in terms of versus is an oscillating interval about an average value , with an envelope that generally increases with time before ` falling out ' of the resonance .the average value is related to the order of the resonance .a sustained average orbital energy follows directly from the average .numerical experiments are conducted with the aim of finding resonances of different orders . 
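The diagnostic described above, the Delaunay variable plotted versus time, can be extracted from a numerically integrated trajectory in the polar variables as follows. This is a sketch in the scaled units of the model, where the unperturbed Hamiltonian is p_r^2/2 + p_theta^2/(2 r^2) - 1/r and the orbital energy of the osculating ellipse is -1/(2a) = -1/(2L^2); the windowed-average "capture detector" at the end is a crude illustration, with tuning parameters that are not taken from the paper.

```python
import numpy as np

def delaunay_L(r, p_r, p_theta):
    """Delaunay action L = sqrt(a) of the osculating ellipse from the polar
    state (r, p_r, p_theta) of the relative orbit.  Scaled units are
    assumed, with unperturbed Hamiltonian p_r**2/2 + p_theta**2/(2 r**2) - 1/r
    and orbital energy E = -1/(2 a) = -1/(2 L**2)."""
    energy = 0.5 * (p_r ** 2 + (p_theta / r) ** 2) - 1.0 / r
    if np.any(energy >= 0.0):
        raise ValueError("osculating orbit is not elliptic")
    return np.sqrt(-0.5 / energy)

def capture_mask(t, L, window=200, tol=1e-3):
    """Crude resonance-capture detector: flags times at which the windowed
    average of L(t) has (nearly) stopped drifting.  The window length and
    tolerance are illustrative tuning parameters, not values from the paper."""
    kernel = np.ones(window) / window
    L_avg = np.convolve(L, kernel, mode="same")
    return np.abs(np.gradient(L_avg, t)) < tol

# Usage, given arrays t, r, p_r, p_theta from a numerical integration:
#     L = delaunay_L(r, p_r, p_theta)
#     in_resonance = capture_mask(t, L)
```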
in this investigation , numerical evidence of , , and is found .see figures 1 - 3 and their captions which show the parameters and initial conditions of the systems .it is not difficult to search for resonances in the parameter space of the system ; in particular , the ( 1:1 ) resonance described here is different from the one represented in a previous paper ( wardell 2002 ) .a cursory look at the graphs will show that there is an increasing degree of structure , especially with regards to the graph .one can see that the resonance contains a ` dense ' orbit with chaotic characteristics .that is , the features of the graph such as the oscillations are apparently irregular as is the amplitude .in contrast , the and orbits are much more regular in their oscillations . the graphs exhibit similar qualitative behavior to which a measurement can be applied .the oscillation during the resonance capture gives rise to an average frequency .let this be .moreover , the amplitude of the _ envelope _ of the oscillation changes with time and yields another measurable quantity .namely , the ratio between the amplitudes of the envelope at two different times is a simple measure of the change of the envelope that surrounds the oscillatory orbit .let this be .the purpose of this paper is to provide analytic expressions for and and an approximate theoretical description of the binary orbit while in resonance .if one neglects the damping terms in the equations of motion ( 2)-(5 ) , one arrives at a hamiltonian system that can be transformed canonically into other coordinate systems .the coordinate transformation from polar to delaunay variables is beneficial because delaunay variables are action - angle variables adapted to the osculating ellipse .the hamiltonian of the unperturbed system depends only on the actions or the canonical momenta .this results in the momenta being constants of the motion and the generalized coordinates being angle variables .the inclusion of perturbations causes deviations from these constants the otherwise constant variables evolve with time . the close relationship that the delaunay variables have with the elliptical elementsconveniently connect the trajectory on the manifold with respect to the delaunay variables with the osculating ellipse description of motion that is associated with elliptical elements .in other words , one has at each point of time a set of coordinates that will follow the keplerian trajectory if the external perturbations were removed .the orbit of the system is aptly described by an evolving ellipse , whose descriptive coordinates change in time . for the planar system under consideration here , the osculating ellipse can be characterized by its semimajor axis , eccentricity , eccentric anomaly , and true anomaly ( danby 1988 ) .the delaunay elements are then defined by , , , and . here and are action variables and the mean anomaly and are the corresponding angle variables . the equations of motion in delaunay variables take the following general form where . 
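A sketch of the corresponding coordinate change, from the planar polar state to the osculating Delaunay elements, is given below. It uses the standard definitions L = sqrt(a), G = L sqrt(1 - e^2) (the orbital angular momentum), the mean anomaly obtained from Kepler's equation, and the argument of periastron obtained from the true anomaly (Danby 1988); scaled units in which the unperturbed problem is the standard Kepler problem are assumed.

```python
import numpy as np

def polar_to_delaunay(r, theta, p_r, p_theta):
    """Osculating Delaunay elements (L, G, ell, g) from the planar polar
    state, with the standard definitions L = sqrt(a), G = L*sqrt(1 - e**2)
    (the orbital angular momentum), ell the mean anomaly and g the argument
    of periastron (Danby 1988).  Scaled units with GM = 1 are assumed."""
    energy = 0.5 * (p_r ** 2 + (p_theta / r) ** 2) - 1.0 / r
    a = -0.5 / energy                       # semimajor axis of the osculating ellipse
    L = np.sqrt(a)
    G = p_theta                             # orbital angular momentum
    e = np.sqrt(max(1.0 - (G / L) ** 2, 0.0))

    # eccentric anomaly u from r = a (1 - e cos u); the branch is fixed by p_r
    cos_u = np.clip((1.0 - r / a) / e, -1.0, 1.0) if e > 0.0 else 1.0
    u = np.arccos(cos_u)
    if p_r < 0.0:                           # inbound leg of the orbit
        u = 2.0 * np.pi - u

    ell = u - e * np.sin(u)                 # Kepler's equation gives the mean anomaly

    # true anomaly f, then argument of periastron g = theta - f
    f = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(0.5 * u),
                         np.sqrt(1.0 - e) * np.cos(0.5 * u))
    g = np.mod(theta - f, 2.0 * np.pi)
    return L, G, ell, g
```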
this definition is made so that one can fix and have only one small parameter that is needed for the remaining analysis .the averaging theorem relies on the system having one perturbation parameter .the hamiltonian associated with the binary system plus the external perturbation due to the third mass can be expressed as where the unperturbed part is and the tidal perturbation is the functions , , and are given by where ,\ ] ] .\ ] ] the symbol represents a bessel function of the first kind of order and is the eccentricity .the transformation for the general case which includes damping uses the following relationship which accounts for the non - conservative nature of the damping force : where is the reduced mass .one can use this relation to find how the damping terms transform with the new coordinates .this result was derived in a previous study where the damping terms are the same as in this case ( chicone et al .the terms are : ,\ ] ] ,\ ] ] ,\ ] ] .\ ] ] so one has here all of the ingredients to produce the equations of motion in delaunay variables .the system is formally suited for the averaging process .when the perturbation is included , the action variables and as well as the angle variable change in time along with the angle .their time derivatives in the equations of motion ( 6)-(9 ) are equal to an expression of the order of the small parameter ; therefore , they evolve slowly over time .the angle variable maintains its leading term which is of order unity so this describes fast motion in relation to the other variables .in essence , one is interested in the evolution of the slow variables , , and averaged over the fast motion of .the motion of for the unperturbed system is just the ` mean angle ' for the relative orbit .the variable is related to the energy of the orbit by , is the angular momentum , and is the argument of the periastron of the osculating ellipse .averaging over the fast angle brings to light an averaged solution of the slowly evolving variables , particularly observable variables such as . to advance an analytical description of the modelled system, one can take advantage of the qualitative features of the ode namely , the fact that there is one ` fast ' angle which evolves quickly with respect to the ` slow ' variables . the ` slow ' angle variable can be considered to be effectively an action variable .this is key because to meet the conditions of the averaging theorem outlined below , one needs a system with only one fast angle variable . if one makes use of the fact that there is only one ` fast moving ' angle in the system of interest , the averaging theorem can be applied . employing the averaging theorem ,one arrives at an averaged set of differential equations that are solvable . over the correct timescale, one has an approximate analytical solution to the motion of the binary system of interest . 
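Before the formal statement of the averaging theorem, its content can be illustrated numerically on a toy system with a single fast angle. The sketch below is not the binary problem itself; it merely shows the O(epsilon) closeness of the exact and averaged solutions over the 1/epsilon timescale.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy system with one fast angle:
#     dI/dt = eps * (1 + cos(phi)),   dphi/dt = 1 .
# The averaged equation keeps only the phi-average of the right-hand side,
#     dJ/dt = eps ,
# and the averaging theorem guarantees |I(t) - J(t)| = O(eps) up to t ~ 1/eps.
eps = 1e-2
t_end = 1.0 / eps
t_eval = np.linspace(0.0, t_end, 2000)

def full(t, y):
    I, phi = y
    return [eps * (1.0 + np.cos(phi)), 1.0]

sol = solve_ivp(full, (0.0, t_end), [1.0, 0.0], t_eval=t_eval,
                rtol=1e-9, atol=1e-12)
I_exact = sol.y[0]
J_avg = 1.0 + eps * t_eval          # solution of the averaged equation, J(0) = I(0)

print("max |I - J| :", np.max(np.abs(I_exact - J_avg)),
      " (should be O(eps) =", eps, ")")
```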
to illustrate how averaging is applied using the averaging theoremconsider the following representation of a system of equations cast into action - angle form : where the symbol represents the set of real numbers and represents an n - dimensional real space .the terms with the coefficient are perturbations .furthermore , the evolution of the action variables is slow in time with respect to the angle variable that is fast moving .so one has a case where averaging can be applied .one can average over the angle and create a new differential equation so that to first - order one has a new system in terms of : the new averaged variable coincides with the actual variable at : the averaging theorem states that one can create an averaged system whose solution will closely follow the actual system s solution . that isthe trajectory of the new averaged variable will stay close to the true trajectory within a distance of order .this will be valid within a timescale of .so the following inequalities describe the averaged system where and are constants : inspection of the differential equation in the delaunay variables reveals that the ` fast ' variable is while the others are ` slow ' .it is of interest to study the behavior of the system s motion near a resonance . at a resonance ,as can be seen in the numerical case , the value for stays fixed on average . for a fixed associated with a particular resonance ,one arrives at the following resonance condition in terms of : where .the unperturbed equation for yields the solution where is an integration constant . to highlight the motion near resonance, one can consider the deviations that occur near the resonance .the averaging that one performs at resonance , is called partial averaging . to investigate the behavior of the binary near the resonance manifold makes the transformation this introduces the variable which represents the ` deviation ' from the resonance value of and the variable which is related to a deviation from the unperturbed mean angle .the transformation involves before the variable and unity before the variable so that the transformed ode will contain the differential equations in and with leading terms to the same order .the leading order becomes the small parameter for the averaged system , that is .this gives the proper form for averaging .the derivation involves an expansion around , therefore it is possible to mathematically generate any number of terms in the expansion .since the model equations that reflect the physics under investigation are given to highest order , one does not benefit from a physics point of view to expand beyond in the transformation .after the transformation is made one obtains the following expression for the transformed system at resonance ( chicone et al ., 1997 ) : here are defined by : the next step is to derive the actual second order partially averaged equations for the system . to do this ,one can make use of a special coordinate change known as an averaging transformation .the purpose of such a transformation is to render the system by way of a coordinate change into a form that is averaged to first order . since averaging to second orderis desired here , one needs to average the entire transformed system and drop terms of order higher than second . 
to apply the averaging transformation one must make the following definitions : such that that is , the average of vanishes .the averaging transformation becomes : this transformation has the property that its average becomes the identity transformation .the averaging transformation changes equations ( 23)-(26 ) into the following : noting as well that the averages of and vanish by construction of the transformation , one can average the new system , drop terms of order and higher , and arrive at the second order partially averaged system : computation of the averages of according to equation ( 43 ) results in the second order partially averaged system : \ ] ] |_{l_{*}}\sin{(2g+m \varphi ) } + \frac{\delta}{3 l_{*}^{3}g^{5}}(146 + 37 e^{2})\},\ ] ] ,\ ] ] |_{l _ { * } } \cos{(2g+m\varphi)}\},\ ] ] ,\ ] ] where . recall that and are functions of the eccentricity . at resonancethe value of is and the type of resonance is governed by the resonance relation . through the integration to determine the averages of in equation ( 43 ) , it is found _ that the only resonances that contribute in the averaging process are the resonances_. one can make a simplification by noticing that the arguments of the trigonometric functions all combine as .one can then define a variable such that .if one multiplies the differential equations for and by the appropriate factor and adds them , one has a reduced system of differential equations that consists of three equations .one of the variables is now an angle .the system reduces to \ ] ] |_{l_{*}}\sin{\phi } + \delta \gamma_{\beta}\},\ ] ] ,\ ] ] |_{l_{*}}\cos{\phi } - \frac{3}{4 } l_{*}^{4 } \frac{\partial}{\partial g}(a_{m}+b_{m})\cos{\phi}\},\ ] ] where the gravitational damping terms are : the result of the coordinate transformations and averaging is a system of three differential equations which describe the averaged motion to second order .the resultant set of odes represents the evolution of these ` slow ' variables .the averaged system derived in the previous section is in the form of an expansion in the powers of the small parameter to second order .one can investigate the two different orders of the partial averaging that are calculated here , and arrive at results that can be compared to those found numerically .the different orders bring to light different features of the solution . to extract the first - order partially averaged system one needs to only keep terms of order , the small parameter of the perturbation expansion around the resonance manifold .one obtains the following system : ,\ ] ] where and follow from equations ( 16 ) and ( 17 ) and is defined in equation ( 63 ) . one immediate consequence of this system , as can be seen in ( 67 ) , is that the orbital angular momentum is constant at resonance ;hence , the eccentricity of the orbit is constant as well . 
as a result, one can rewrite the system in an equivalent way in terms of two canonically conjugate variables and .the corresponding hamiltonian is : where and are constants the canonical equations of the equivalent system are : two pendulum - like equations of motion result , one in and one in .the one in traces the orbit of the deviation from the resonant value as defined in ( 33 ) and is expressed as : where the differential equation in describes a pendulum with constant torque .of course , to have oscillatory behavior in both cases , the condition must be met .the first - order approximation of the resonance behavior in consists of an oscillation with slowly varying frequency and constant amplitude . a comparison between the analytical formula , particularly that of the frequency ( 75 ) , and the corresponding numerical result can be made to test the quantitative agreement between the two methods .when one considers the second - order averaged equations , one can derive a formula for the damped or antidamped oscillator .it goes as follows : there is also a corresponding equation for .the frequency is the same as in the previous case , but the coefficient to the damping term can be found from {l_{*}}\sin{\phi } + \delta \frac{1}{3g^{5}l_{*}^{3}}(146 + 37e^{2}).\ ] ] this describes an oscillation whose amplitude changes with time due primarily to damping or antidamping .also , is no longer constant in the second order as shown in equation ( 61 ) . while the orbit is in resonance , its orbital angular momentum and eccentricity change slowly over time . for the purpose of finding an approximate solution to the differential equation ( 76 ) one can assume that and are constant .that solution is : the attention will be focused here on because the numerical data to be compared with the averaged results involve the deviations from the resonance value , which is what measures . from this equation, one can see that there is a sinusoidal oscillation multiplied by a time - dependent exponential amplitude .the character of this function depends on the magnitude and sign of given in equation ( 77 ) .the amplitude increases if and decreases if .the magnitude of affects the rate at which the amplitude changes .more generally , itself is a function of time ; this may result in local variations of the sign of .the measured frequency is simply an average frequency determined over several periods of the numerical versus graph .one must keep in mind that the averaging is valid over a time interval of . to measure the ratio between the value of the envelope of at two points , that is , one simply measures the respective values at different times and divides them .how does one compare the numerical results to the analytical ?first of all , it is necessary to recall the derivation of the averaged equations .they are derived under the assumption that the orbit is in resonance . since determines the resonance manifold , the points such that are on the resonance manifold . in the versus the orbit passes through the resonance value as it oscillates .it is these points of the numerical solution that are on the resonance manifold that apply to the analytical formulae for and .recall that the first order averaged equations give a formula for the frequency of the solution .it is .this formula depends on and at the resonance manifold . 
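The structure just derived, a pendulum with constant torque together with weak damping or antidamping, can be integrated directly to show how the two measured quantities of this paper, the average oscillation frequency and the envelope ratio, are extracted from a trajectory. The coefficients below are illustrative placeholders, not those of the averaged equations (76)-(77), and the exponential-envelope comparison uses the weak-damping estimate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pendulum with constant torque and weak (anti)damping, mirroring the
# structure of the averaged equations:  phi'' + lam*phi' + w0**2*sin(phi) = b.
# All numerical values are illustrative placeholders.
w0, b, lam = 1.0, 0.3, 0.01                 # lam > 0 damps, lam < 0 antidamps
phi_star = np.arcsin(b / w0 ** 2)           # elliptic fixed point (needs |b| < w0**2)

def rhs(t, y):
    phi, v = y
    return [v, b - w0 ** 2 * np.sin(phi) - lam * v]

t = np.linspace(0.0, 400.0, 40001)
sol = solve_ivp(rhs, (t[0], t[-1]), [phi_star + 0.2, 0.0], t_eval=t,
                rtol=1e-10, atol=1e-12)
x = sol.y[0] - phi_star                     # deviation from the resonant value

# average frequency from the zero crossings of the oscillation
zc = np.where(np.diff(np.sign(x)) != 0)[0]
omega_meas = np.pi / np.mean(np.diff(t[zc]))

# envelope ratio between the first and last extremum of |x|
pk = [i for i in range(1, len(x) - 1)
      if abs(x[i]) > abs(x[i - 1]) and abs(x[i]) > abs(x[i + 1])]
R = abs(x[pk[-1]]) / abs(x[pk[0]])

print("measured frequency :", omega_meas,
      " small-amplitude value:", np.sqrt(w0 ** 2 * np.cos(phi_star)))
print("envelope ratio R   :", R,
      " weak-damping estimate:", np.exp(-0.5 * lam * (t[pk[-1]] - t[pk[0]])))
```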
to produce a value of one needs the values of and on the resonance manifold .in essence , what is being investigated is the agreement between the measured frequency of a resonance recorded in a graph and a prediction made by a derived formula .the derived formula is based on the orbit in resonance . for the purposes of testingthis agreement one can take the values of that occur at resonance in the numerical case and apply them to the derived formula . the solution to the differential equation derived from the second - order case gives rise to a formula that describes the behavior of the envelope of the resonance orbit .to find the analytical formula for this ratio let be a function that describes the envelope of .that is , . one can compute the ratio between and , where , by dividing the two expressions for and .one comes up with . the determination of calls for the numerical resonance manifold values of , , and .the examples that were computed for this study in figures 1 - 3 yielded the following results where is the angular frequency and is the ratio of the amplitudes of the envelope at two different times for the resonance .also , the following definitions are used : the results are : a condition for the averaging theorem to be valid is that be sufficiently small .therefore , when computing the numerical results in search of resonances , one can pose the question of whether the numerical choice of is small enough .how small is sufficiently small ? by lowering the value of , which in turn leads to a more cumbersome and time - consuming numerical calculation , one can most likely improve the numerical results . as can be seen in the summary of results ,the ( 1:1 ) and ( 2:1 ) examples show good agreement between the numerical and analytical results .the ( 3:1 ) case , however , deviates quite far from agreement .it is known from numerical evidence that the higher - order resonances yield more elaborate structure and are more likely to exhibit chaos than low - lying resonances ( chicone et al.,1997 ) . in this case, the numerical results could be precarious in their agreement with analytical predictions .perhaps the choice for was not sufficiently small to meet the conditions of the averaging theorem .librational motion on the phase plane of the first - order model is indicative of an orbit captured into resonance ( chicone et al . , 1997 ) .the pendulum - with - torque system shown in ( 72 ) and ( 73 ) must have an elliptical fixed point to have these librations .furthermore , a hyperbolic fixed point is expected with a homoclinic orbit to give rise to an area in the phase space where orbits pass through resonance . a fixed point on the phase planeis given by , where satisfies : in general , three different scenarios can arise as the result of the pendulum system . for there to be a solution for the inequality must hold .for the cases where or , there are fixed points . consider the case where the inequality holds .the phase cylinder for this shows that there exist orbits that are oscillatory and stay in the resonance capture region , and also orbits that pass through the resonance ( see figure 4 ) .this shows that whether the orbit will be captured into resonance depends on the initial conditions .the case such that can arise where there is no solution to the equation and therefore no fixed point .the first - order system of equations that gives rise to these pendulum - like equations displays in an approximate way the physics of the original astrophysical model . 
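The existence and type of the fixed points discussed above can be checked with the short sketch below, which classifies the fixed points of the generic pendulum-with-torque system through the eigenvalues of its linearization; a capture region exists only if the ratio of the torque to the squared frequency does not exceed unity in magnitude. The numerical values are illustrative.

```python
import numpy as np

def pendulum_fixed_points(w0, b):
    """Fixed points (phi*, 0) of   phi' = v,  v' = b - w0**2 * sin(phi),
    classified through the eigenvalues of the linearization.  A pair of
    fixed points (one elliptic, one hyperbolic) exists only if |b| <= w0**2;
    otherwise there is no capture region and every orbit passes through
    the resonance."""
    ratio = b / w0 ** 2
    if abs(ratio) > 1.0:
        return []
    base = np.arcsin(ratio)
    out = []
    for phi in (base % (2 * np.pi), (np.pi - base) % (2 * np.pi)):
        jac = np.array([[0.0, 1.0], [-w0 ** 2 * np.cos(phi), 0.0]])
        eig = np.linalg.eigvals(jac)
        kind = ("elliptic (center)" if np.max(np.abs(eig.real)) < 1e-12
                else "hyperbolic (saddle)")
        out.append((float(phi), kind))
    return out

print(pendulum_fixed_points(w0=1.0, b=0.3))   # one center, one saddle
print(pendulum_fixed_points(w0=1.0, b=1.5))   # torque too strong: no fixed points
```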
from the first - order system one can see from equation ( 71 ) that the torque is linearly related to the gravitational radiation damping by the coefficient . if there is sufficiently strong damping , and hence a large , there will be no resonance capture because can in all cases exceed unity .all solutions will pass through the resonances and the orbit will follow a path such that the semimajor axis will shrink to zero. this would be the expected result if gravitational radiation damping were the sole perturbation .recall also that the binary system will be captured only into the resonances according to the averaged equations .the second - order averaged system includes antidamping effects as well as a slowly drifting and ( chicone et al .it offers a description of how the actual system can fall out of resonance by leaving the capture region by its anti - damped motion .the full non - linear system exhibits effects like resonance capture and exit from resonance , dynamics whose full description elude current analytical techniques .however , the second - order system explains with some success the system s behavior in resonance , particularly the frequency and the rate of antidamping .there are interesting features in the resonance , particularly its alternating sets of grouped oscillations in an ` excited ' phase and ` relaxed ' phase .the slowly changing variables of , , and , and the fast variable as seen in the delaunay form of the system ( 6)-(9 ) make it a candidate for the phenomenon of bursting .one may speculate from the characteristic features in figure 3 that bursting is present in this resonance ( izhikevich 2001 ) .the phenomenon of resonance capture describes the system s state where the dissipative influence of gravitational radiation damping is offset on the average by the tidal perturbation of the orbiting third mass ; that is , the theory of capture into resonance describes how the orbit s loss of energy via gravitational radiation reaction is countered by a deposit of energy from the orbit of the third mass .the averaging method produces a set of ordinary differential equations that describe the behavior of the system while in resonance .the quantity tracks the oscillatory motion of about .the level of approximation given by the first - order averaged equations highlights the oscillatory characteristics of the actual solution whereas the second - order averaged equations reveals the slow changes in amplitude .the numerical solution shows more structure and detail that is inherent in the actual solution .there are prominent features of the actual solution that one can compare qualitatively and quantitatively to that of the averaged system .though the first - order solution only predicts oscillations of constant amplitude and slowly changing frequency and the second - order solution predicts an exponential trend of damping or antidamping , they correspond on the appropriate timescale to the most prominent features of the numerical solution .this makes sense in the light of what the averaging method intends to do capture the ` slow ' motion .resonance capture and chaos are two signature results that arise from a system in resonance .perhaps the most noted example of this phenomenon is the damped driven oscillator .the astrophysical system that is modelled in this study by a damped and driven kepler system displays similar characteristic effects namely resonance capture . 
under the non - hamiltonian perturbation ,the nonlinear hamiltonian system can exhibit both resonance capture and chaos ( chicone et al . 2000 ) .this can be manifest in the attempt to compare the numerical case with the analytical .an orbit near a higher - order resonance such as will more likely be chaotic and therefore prone to bear ` erroneus ' results when compared with the partially averaged equations .the dense structure shown in figure 3 that corresponds to suggests such chaotic motion . however , good results followed from the analysis of the and resonances . the averaged systemto first order shows that at each resonance one sees motion like a pendulum with constant torque . to the second order , antidamping and nonlinearity may cause disruption to the resonance .the full nonlinear system includes more effects that are not seen in the averaged systems which may contribute to the disruption of the resonance .the analytical formulas for the first and second - order averaged systems outline structure that is part of the actual solution .it is also of interest to determine how well the results from the analytical part agree with the numerical .the complexity of the actual nonlinear equations and their solution makes analytical formulae helpful and illuminating .both analytical and numerical techniques given here present a consistent description of the nonlinear effects associated with resonance capture . * * + chicone c.,mashhoon b.,retzloff d.g ., 1997 , class .quantum grav . , + 14 , 1831 + chicone c.,mashhoon b.,retzloff d.g . , 2000 , j. phys .a : math . gen . , + 33 , 513 + chicone c.,kopeikin s.,mashhoon b.,retzloff d.g . , 2001 ,a , + 285,17 + danby j.m.a .1988 , fundamentals of celestial mechanics , 2nd ed . , + willmann - bell , richmond + izhikevich e.m . , 2001 , siam rev . , 43 , 315 + wardell z. , 2002 , mnras , in press
In a previous investigation, a model of three-body motion was developed which included the effects of gravitational radiation reaction. The aim was to describe the motion of a relativistic binary pulsar that is perturbed by a third mass and to look for resonances between the binary and third-mass orbits. Numerical integration of an equation of relative motion that approximates the binary gives evidence of such resonances. These resonances are defined for the present purposes by a resonance condition relating the angular frequencies of the binary orbit and of the third-mass orbit (around the center of mass of the binary) through a pair of relatively prime integers. The resonance condition consequently fixes a value for the semimajor axis of the binary orbit for the duration of the resonance because of the Kepler relationship. This paper outlines a method of averaging developed by Chicone, Mashhoon, and Retzloff which renders a nonlinear system that undergoes resonance capture into a mathematically amenable form. This method is applied to the present system, and one arrives at an analytical solution that describes the average motion during resonance. Furthermore, prominent features of the full nonlinear system, such as the frequency of oscillation and the antidamping, accord with their analytically derived formulae.

*Key words:* celestial mechanics, relativity, gravitational waves
electromagnetic forces produced by intense focused lasers acting on small particles have recently found application in trapping and manipulating small particles in optical tweezers . also , optically - powered rotors have been produced either by scattering of elliptically - polarized light or by using particles of helical shape .these are the counterpart of similar effects observed at the atomic and molecular levels , including molecular quantum rotors and motors . in this work ,we calculate electromagnetic torques acting on elongated particles illuminated by a plane wave .we show that the magnitude of these quantities , as well as the sign of the torque , can be controlled by using the right polarization and wavelength for the external light .we use a multipole formalism to calculate both torques and forces , which can be applied to complex geometries involving more than one particle , as illustrated below for forces acting on metallic spheres in the presence of neighboring particles of different shapes . for practical applications in the context of optical tweezers and optical stretchers , an extension of the present study to include focused beams will be necessary , similar to the one carried out in ref . , where gradient forces are essential to achieve trapping . even in this case, one would expect that control over the orientation of small particles is attainable by choosing the right combination of polarization and wavelength .however , the present work can find some relevance in different situations : ( 1 ) to control the orientation of particles in free space by irradiating them with successive plane - wave pulses of appropriate strength , duration , wavelength , orientation , and polarization , specially if the absolute spatial position is not so relevant ; ( 2 ) to explore the dynamics ( both translational and rotational ) of complex particles in inter - stellar environments ; or ( 3 ) to control the orientation and position of particles trapped against a solid - fluid interface ( the analysis becomes straightforward if the dielectric constant is approximately the same on either side of the interface ) , although friction and other interfacial forces can play a substantial role in this case .the electromagnetic response of each of the particles considered in this work has been expressed in terms of their corresponding multipole -matrix , which relates the coefficients of the multipole expansion of the induced electromagnetic field to those of the external field .these -matrices are in turn obtained by solving maxwell s equations using the boundary element method .we begin by expressing the electromagnetic field acting on a given particle in terms of magnetic and electric multipoles in frequency space as \ii^l j_l(k r ) ] \label{e1 } \\ + & & [ \psi^{m,{\rm sc}}_{lm}\lb -\psi^{e,{\rm sc}}_{lm}\frac{\ii}{k}\nabla\times\lb ] \ii^l h_l^{(+)}(k r)]\ } y_{lm}(\hat{\rb } ) , \nonumber \end{aligned}\ ] ] where is the orbital angular - momentum operator , is the momentum of the light , and the field has been separated in incident ( inc ) and scattered ( sc ) components .assuming linear response , the coefficients of proportionality between them are given by the scattering matrix according to where runs over and components .the matrix is analytical for spherical particles ( mie coefficients ) and we have calculated it numerically using the boundary element method for arbitrarily - shaped objects ( whiskers and torii in the examples offered below ) .the torque acting on the particle in the 
presence of this field is a quadratic , analytic function of these coefficients that can be obtained from the integral of the maxwell stress tensor over a spherical surface of radius surrounding the object .the time - averaged torque reads .\label{aaa}\end{aligned}\ ] ] then , calculating the magnetic field using faraday s law and inserting the resulting expression together with eq .( [ e1 ] ) into eq .( [ aaa ] ) , one finds \nonumber \\ & \times & [ \psi^{e,{\rm sc}}_{lm } ( \psi^{e,{\rm sc}}_{lm'})^ * + \psi^{m,{\rm sc}}_{lm } ( \psi^{m,{\rm sc}}_{lm'})^ * + \ii \psi^{e,{\rm sc}}_{lm } ( \psi^{e,{\rm inc}}_{lm'})^ * + \ii \psi^{m,{\rm sc}}_{lm } ( \psi^{m,{\rm inc}}_{lm'})^*]\}. \nonumber\end{aligned}\ ] ] eq .( [ bbb ] ) has been used here to obtain the torque acting on metallic and dielectric whiskers illuminated by a light plane wave , as shown in fig .[ fig1 ] .in the case of silver particles , the scattering cross section , which is directly obtained from the scattering amplitude , shows a pronounced resonance when the polarization of the external light is directed along the whisker [ fig .[ fig1](a ) ] .the dielectric function of silver has been taken from optical data .part of the scattered light is absorbed by the metal ( silver in this case ) , so that the total cross section ( solid curve , obtained by using the optical theorem ) is actually larger than the elastic cross section ( broken curve ) .the cross section for polarization perpendicular to the particle is negligible ( by a factor of 40000 at the resonance ) as compared with the case considered in fig .[ fig1](a ) .the torque acting on this particle when it is illuminated by circularly - polarized light follows the same profile as the scattering cross section [ fig .[ fig1](b ) ] .this torque is induced by the angular momentum carried by the external light , part of which is transferred to the particle .a more interesting situation is presented when linearly - polarized light is used [ fig .[ fig1](c ) ] , in which case the particle tends to align itself parallel ( perpendicular ) to the polarization vector when light of wavelength below ( above ) the resonance is employed . here, the torque scales with the sine of the angle between the polarization vector and the particle axis of symmetry .it should be noted that the torque takes non - negligible values well outside the absorption resonance , so that a sizable orientational effect can still be obtained while minimizing heat transfer to the particle ( this arises from absorption of external light ). moreover , control over the particle orientation is possible by using the right combination of polarization and wavelength of the external light .it is interesting to point out that the wavelength of the resonance depends on the length of the whisker , and this offers the possibility of manipulating separately whiskers of different lengths by tuning their respective resonances . 
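For a sphere, where the scattering matrix reduces to the analytical Mie coefficients mentioned earlier, the kind of cross-section calculation discussed above can be sketched in a few lines. The example below uses the standard textbook form of the Mie series and restricts itself to a lossless sphere with a real refractive index, so that the extinction and scattering efficiencies coincide (as noted below for dielectric particles); it illustrates the type of calculation only, not the boundary-element computation used for the whiskers.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_efficiencies(x, m):
    """Scattering and extinction efficiencies of a homogeneous sphere with
    size parameter x = 2*pi*a/lambda and real relative refractive index m
    (lossless dielectric, so all Bessel arguments stay real).  Standard
    textbook Mie series; illustrative, not the boundary-element T-matrix."""
    n = np.arange(1, int(np.ceil(x + 4.0 * x ** (1.0 / 3.0) + 2.0)) + 1)
    mx = m * x

    # Riccati-Bessel functions psi_n(z) = z j_n(z), xi_n(z) = z (j_n(z) + i y_n(z))
    psi  = lambda nv, z: z * spherical_jn(nv, z)
    dpsi = lambda nv, z: spherical_jn(nv, z) + z * spherical_jn(nv, z, derivative=True)
    xi   = lambda nv, z: z * (spherical_jn(nv, z) + 1j * spherical_yn(nv, z))
    dxi  = lambda nv, z: (spherical_jn(nv, z) + 1j * spherical_yn(nv, z)
                          + z * (spherical_jn(nv, z, derivative=True)
                                 + 1j * spherical_yn(nv, z, derivative=True)))

    a_n = ((m * psi(n, mx) * dpsi(n, x) - psi(n, x) * dpsi(n, mx)) /
           (m * psi(n, mx) * dxi(n, x) - xi(n, x) * dpsi(n, mx)))
    b_n = ((psi(n, mx) * dpsi(n, x) - m * psi(n, x) * dpsi(n, mx)) /
           (psi(n, mx) * dxi(n, x) - m * xi(n, x) * dpsi(n, mx)))

    q_sca = (2.0 / x ** 2) * np.sum((2 * n + 1) * (np.abs(a_n) ** 2 + np.abs(b_n) ** 2))
    q_ext = (2.0 / x ** 2) * np.sum((2 * n + 1) * (a_n + b_n).real)
    return q_sca, q_ext

# For a non-absorbing sphere the optical theorem forces Q_ext ~ Q_sca:
print(mie_efficiencies(x=1.5, m=1.5))
```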
for dielectric particles of with same shape [ fig.[fig1](d)-(f ) ] , the value of the torque is one order of magnitude smaller and qualitatively very different as compared to the metallic particles discussed above .for instance , dielectric particles do not exhibit plasmon resonances , unlike metallic ones .moreover , their total and elastic cross sections are identical , since a dielectric particle ( real dielectric function ) can not dissipate energy , so that heating of the particle is avoided .for aggregates formed by several scattering objects , one can still use the multipole expansion of eq .( [ e1 ] ) around each of the objects . the scattered part of the self - consistent field around a given object labeled is the sum of contributions coming from the other objects ( ) plus the scattering of the incident field . both scattering at each object and propagation of the field between objectsare linear operations ( this would be different in non - linear materials ) , so that the self - consistent multipole coefficients satisfy the equation where matrix notation has been used , that is , is actually a vector that contains all , , and components , and is the matrix of coefficients , as defined in eq .( [ ggg ] ) . here, the matrix describes the propagation of the field from object to object , and it can be derived analytically in terms of the coordinates of the multipole origins for the different objects .( [ eq9 ] ) separates the geometrical configuration of the cluster , fully contained in , from the actual shape and composition of the objects , which is entirely buried into .a similar approach can be also followed to treat two - dimensional geometries as well as photonic crystals consisting of periodic configurations of the objects .this multiple scattering formalism has been used to obtain fig.[fig2 ] , which shows the force acting on aluminum spheres of 55 nm in diameter when a nearby particle contributes as well to the scattering of the external field .the electromagnetic force has been calculated from the integral of maxwell s stress tensor , which results in an analytical but complicated expression in terms of the multipole coefficients .a drude dielectric function has been used for aluminum , with a plasma energy of 15 ev and a damping of 1.06 ev .the self - consistent field has been obtained by using a method based upon multiple scattering of multipoles .a marked influence of the neighboring particle is observed on the force acting on the aluminum sphere , and the magnitude and even the sign of this force changes dramatically over the photon energies under consideration when choosing different particle shapes .this suggests the possibility of using neighboring effects to control the relative position of particles under the influence of external light .the polarization of the latter has been chosen to maximize these effects : a strong dipole - dipole interaction is triggered by the component of the electric field directed along the line that separates the centers of the objects , whereas the complementary polarization results in minor neighboring effects that originate in higher multiple contributions .torques and forces acting on small particles under external illumination have been calculated in this work under illumination by a single plane wave .electromagnetic torques acting on whiskers , both metallic and dielectric , have been shown to provide a possible tool for nanoparticle alignment .the magnitude of the torque for attainable light beam intensities is sufficiently large 
as to overcome other forces such as gravity and brownian motion .finally , the effect of neighboring particles on the electromagnetic force acting on small aluminum spheres has proven to be very large , suggesting a possible way to manipulate the relative orientation of neighboring objects under external illumination .the present study can be easily generalized to account for illumination under focused beams , which will be needed to discuss situations of practical interest in optical tweezers .the author acknowledges help and support from the basque departamento de educacin , universidades e investigacin , the university of the basque country upv / ehu ( contract no .00206.215 - 13639/2001 ) , and the spanish ministerio de ciencia y tecnologa ( contract no . mat2001 - 0946 ) .
Optical tweezers and optical lattices are making it possible to control small particles by means of electromagnetic forces and torques. In this context, a method is presented in this work to calculate electromagnetic forces and torques for arbitrarily-shaped objects, in the presence of other objects, illuminated by a plane wave. The method is based upon an expansion of the electromagnetic field in terms of multipoles around each object, which are in turn used to derive forces and torques analytically. The multipole coefficients are obtained numerically by means of the boundary element method. Results are presented for both spherical and non-spherical objects.
in it was shown that communication between two nodes within a communication network is possible up to a rate that is equal to the minimum rate flowing through any possible cut between these two nodes the _ mincut _ between them .this rate can be achieved by allowing intermediate nodes to _ code _ , i.e. , to calculate functions of their incoming messages before forwarding them . in was proved that it suffices to apply _ linear network coding _ ( lnc ) , i.e. , intermediate nodes just need to form _ linear _ combinations of their received messages from a finite field .if all operations are performed over a finite field of large enough size , the factors at the intermediate nodes may even be drawn independently at random , which leads to a robust , decentralized , and capacity achieving approach : _ random linear network coding _ ( rlnc ) .this paper studies network coding ( nc ) in _ layered networks _ , where intermediate nodes are arranged in layers and there exist only edges between nodes which are located in adjacent layers .we introduce a _ layering _ procedure for establishing a layered structure in seemingly disparate and unstructured network topologies . applying nc to a layered networkprovides a number of benefits in theory for analysis as well as in practice .moreover , we address the problem of _ bidirectional nc _ and derive a _forward - backward duality_. the paper is organized as follows : sec .[ sec : nc ] gives a brief recapitulation and a classification of nc . in sec .[ sec : layering ] we examine layered networks and introduce the _ layering procedure_. _ bidirectional nc _ is discussed in sec .[ sec : bidirectionalnc ] and some conclusions are drawn in sec . [ sec : conclusion ] .we define a communication network as a directed , acyclic graph with a set of nodes and a set of edges . the considered _ multicast scenario _ consists of a unique source node with outgoing edges , and destination nodes , , with incoming edges .the source transmits symbols to each of the destination nodes by injecting these symbols in parallel ( one on each of its outgoing edges ) into the network and each destination node tries to reconstruct all these symbols from its receive symbols .nodes within the network are connected by edges .each edge represents a noiselesscommunication link on which one symbol from can be transmitted per usage .we further assume that each edge induces the same delay .the in - degree and the out - degree of a node is defined as the number of its incoming and outgoing edges , respectively .coding at intermediate nodes is accomplished as follows : each node collects the symbols from each of its incoming edges .then , it computes possibly different functions of these symbols and transmits them on its outgoing edges .essentially , there exist two distinct approaches to generate outgoing messages at intermediate nodes . in the first one , which we denote as nc variant , each intermediate node calculates only a single function of its input symbols and transmits the resulting output symbol on all outgoing edges .this variant is applicable , e.g. , in wireless networks , where intermediate nodes possess omnidirectional antennas , and thus , transmit a single signal . in nc variant intermediate nodes compute _ individual _ output symbols for their outgoing edges .this variant can be applied , e.g. 
, in wired networks .[ c][c][1][0] [ c][c][1][0] [ c][c][1][0] [ c][c][1][0] [ c][c][1][90] [ cc][bl][.8][0] [ cc][bl][.8][0] [ cc][bl][.8][0] [ c][c][1][0](a ) [ c][c][1][0](b ) : conversion of a node which applies nc variant ( a ) into single output nodes ( b).,title="fig : " ] in fig .[ fig : var1var2conv](a ) an intermediate node with incoming and outgoing edges is depicted .the incoming and the outgoing symbols of node are denoted as , , and , , respectively .the two nc variants are closely related to each other .this is specified in the following theorem and is illustrated in fig .[ fig : var1var2conv ] .[ theo : var12 ] a communication network employing nc variant can be transformed into an equivalent network which applies nc variant , by splitting up each intermediate node with outgoing edges into single output auxiliary nodes .these auxiliary nodes possess the same input edges as the original node .a variant- node is split up into auxiliary single output nodes , , cf .[ fig : var1var2conv](b ) . by repeating this procedure for all variant- nodes results in an equivalent nc variant network .hybrid forms of these two variants are also possible , if a node transmits distinct messages .such a variant is possible , e.g. , in wireless networks , where intermediate nodes possess several directional antennas and transmit distinct messages in distinct directions .these hybrid variants can also be transformed into nc variant by splitting up nodes which transmit different messages into auxiliary nodes .obviously , the mincut of a network can only be achieved by applying nc variant .however , for analysis the equivalent nc variant representation is more convenient , as will be shown in the remainder of this paper . in lnc the outgoing messages at a node are -linear combinations of their incoming messages where are the _ linear coding coefficients _ at node .if nc variant is applied , all outgoing symbols are equal , i.e. , , and thus , , , whereas in nc variant these quantities are different .let and be the vectors of incoming and outgoing symbols at node , respectively. we can write ( [ eq : linnc ] ) in vector - matrix notation as where is the _ coefficient matrix _ of node in nc variant the columns of are restricted to one element ( , ) , whereas in nc variant the columns consist of individual entries . since each intermediate node performs linear coding ,the resulting receive vector is still a linear transformation of the source vector , i.e. , the network between source and destination acts as a linear map which is represented by the _ individual network channel matrix _ .the elements of this matrix represent the corresponding _ route gains _, i.e. , is the gain of the route from the outgoing edge of the source node to the incoming edge of destination node .these route gains are sums of products of the coding coefficients . the end - to - end model for a linkis given by is able to reconstruct if has full column rank .we speak of a _valid nc _ in this case .in a layered network all intermediate nodes are arranged in layers .nodes in layer only receive packets from nodes in layer , i.e. , there are no connections between non - adjacent layers and no connections between nodes within the same layer . 
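A minimal sketch of the linear operations described above is given below. It uses a prime field GF(p) for simplicity (the paper allows a general finite field), an explicit Gaussian elimination to obtain ranks over GF(p), and illustrative coefficient and channel matrices that are not taken from any particular figure.

```python
import numpy as np

P = 13   # prime field GF(p); the paper allows a general finite field F_q

def rank_mod_p(a, p=P):
    """Rank of an integer matrix over GF(p), by Gaussian elimination mod p."""
    a = np.array(a, dtype=np.int64) % p
    rows, cols = a.shape
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if a[r, col]), None)
        if pivot is None:
            col += 1
            continue
        a[[rank, pivot]] = a[[pivot, rank]]
        a[rank] = (a[rank] * pow(int(a[rank, col]), p - 2, p)) % p   # Fermat inverse
        for r in range(rows):
            if r != rank and a[r, col]:
                a[r] = (a[r] - a[r, col] * a[rank]) % p
        rank += 1
        col += 1
    return rank

# Coding at one intermediate node: outgoing symbols are linear combinations
# of the incoming ones (coefficient matrix and symbols are illustrative).
C = np.array([[1, 2], [3, 5], [0, 7]])      # 3 outgoing edges, 2 incoming edges
x = np.array([4, 9])                        # incoming symbols
y = (C @ x) % P                             # outgoing symbols

# End-to-end model: receive vector = channel matrix * source vector; the
# destination can reconstruct the source iff the matrix has full column rank.
M = np.array([[2, 1], [1, 1], [3, 4]])      # illustrative network channel matrix
print(y, rank_mod_p(M) == M.shape[1])       # True -> valid network code for this sink
```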
in fig .[ fig : layerednw ] a layered network with one source node and destination nodes , , is depicted .[ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ bc][bl][1][0] [ cl][tl][1][0] [ cr][tl][1][0] [ bl][bl][1][0] [ cc][cc][1][0] [ c][c][1][0] [ cc][cc][1][0] [ cc][cc][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] [ cl][bl][1][0] exemplary layered network with layers , one source node , and destination nodes ( multicast scenario).,title="fig : " ] the number of nodes in layer is denoted as , with and . for the unicast scenario ,i.e. , if there is only one destination node , holds .such networks exhibit a number of beneficial properties of which two are particularly noteworthy . 1 .a layered network is inherently time synchronized .all symbols arrive simultaneously at a specific intermediate node .consequently , each intermediate node can immediately code its incoming symbols and does not have to wait until all required symbols arrive .it enables a factorization of the individual network channel matrices ( cf .[ sec : lncinlaynets ] ) .this is the basis for the derivation of the _ forward - backward duality _ for lnc ( cf .[ sec : bidirectionalnc ] ) . when _ linear _ nc variant is applied , ) , and the factorization of the channel matrix has to be accomplished for the equivalent network .] the _ overall network channel matrix _ , i.e. , the linear transformation from layer 1 to layer , can be obtained as the product of all _ interlayer matrices _ these interlayer matrices consist of the linear factors associated with the edges that connect the corresponding layers .the element in the row and the column of represents the linear factor corresponding to the edge which connects the node in layer with the node in layer .the connection between the interlayer matrices and the coefficient matrices is as follows . contains the coding coefficients of the coefficient matrices which correspond to the intermediate nodes in layer .in addition to that , the interlayer matrices imply the wiring between the two affected layers , whereas the coefficient matrices merely describe the operations at one specific node .to sum up , is an _ edge - oriented _ description of the lnc , which takes also the topology into account , and is a local , _ node - oriented _ description .the _ individual _ network channel matrix corresponding to destination node , , consists of a subset of rows , represents a matrix composed of a subset of the rows and a subset of the columns of .] of where is the subset of rows , which correspond to the nodes in the last layer , to which the destination node is connected . 
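Continuing the previous sketch (and reusing its rank_mod_p helper), the factorization of the overall network channel matrix into interlayer matrices can be illustrated as follows. The layer sizes and coefficients are random placeholders, and the convention assumed here is that each interlayer matrix maps the symbols of one layer to those of the next, so the overall matrix is the composition taken in order from the source layer outward.

```python
import numpy as np

P = 13
rng = np.random.default_rng(0)

# Interlayer matrices of a small layered network: entry (i, j) of the k-th
# matrix is the coding coefficient on the edge from node j of layer k to
# node i of layer k+1 (zero meaning "no edge").  Sizes and values are
# random placeholders for illustration.
layers = [2, 3, 3, 2]                                   # nodes per layer
B = [rng.integers(0, P, size=(layers[k + 1], layers[k]))
     for k in range(len(layers) - 1)]

# Overall network channel matrix: composition of all interlayer maps
A = np.eye(layers[0], dtype=np.int64)
for Bk in B:
    A = (Bk @ A) % P

# The individual channel matrix of a destination is a row subset of A,
# e.g. a sink attached only to the first node of the last layer:
A_dest = A[[0], :]

# Upper bound on the mincut (the theorem stated below): no destination can
# receive at a rate above the smallest interlayer rank.  rank_mod_p is the
# helper defined in the previous sketch.
print("mincut upper bound :", min(rank_mod_p(Bk) for Bk in B))
print("rank of A          :", rank_mod_p(A), "  A_dest rows:", A_dest.shape[0])
```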
in case of the unicast scenario ,the individual network channel matrix is equal to the overall network channel matrix .the factorization ( [ eq : aprod ] ) enables a simple method to determine an upper bound on the mincut between the source and a destination : [ theo : mincutlinnc ] the mincut between the source and a destination node in a layered network is the mincut between and is the number of symbols which can be reliably transmitted from to , and thus , is equal to the rank of the individual network channel matrix .since the individual network channel matrix is the product of the corresponding inter - layer matrices , the minimal rank of the inter - layer matrices is an upper bound on the mincut between and .the finite field size has to be greater than the number of destinations . in a non - layered network paths from the source node to the destination nodesconsist of different numbers of edges i.e. , have different `` lengths '' .an exemplary non - layered network is depicted in fig .[ fig : nonlayerednw](a ) .[ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0](a ) [ cc][bl][1][0](b ) non - layered network with one source , two destinations , and five intermediate nodes . without ( a ) , and with depicted delay elements ( b).,title="fig : " ] obviously , there are paths from to of different lengths , e.g. , and consisting of three and five edges , respectively .the aim of our proposed procedure , which we denote as _ layering _ , is to force all paths from the source to all of the destinations to have the same length , namely .for that , consider the _ coding points _, i.e. , the nodes which receive more than one symbol . the first coding point in our exemplary network in fig .[ fig : nonlayerednw](a ) is , which receives a packet from after one time unit , and a packet from after two time units . to be able to code , i.e. , to create a function of these two packets, has to buffer the packet received from for one time unit .this buffer , which actually is part of , can formally be redrawn outside of .we continue this step for all coding points in and obtain the network depicted in fig .[ fig : nonlayerednw](b ) .a delay of time units is denoted as .finally , we interpret these delay elements as single - input / single - output ( siso ) nodes , which just pass the packet received on their incoming edge to their outgoing edge .delays of time units are interpreted as consecutive siso nodes . basically , layering consists of two steps : 1 .enumerate all intermediate network nodes according to an ancestral ordering , i.e. , if then .2 . 
visit all coding points sequentially and introduce siso nodes , such that all paths which meet in one point have the same length .[ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][bl][1][0] [ cc][cc][1][0] [ cc][cc][1][0] [ cc][cc][1][0] communication network from fig .[ fig : nonlayerednw ] in layered representation.,title="fig : " ] after redrawing the network , we obtain the layered structure depicted in fig .[ fig : layering ] , where the introduced siso nodes are depicted in gray .this layered network with layers is equivalent to the network depicted in fig .[ fig : nonlayerednw](a ) .since each coding point has to be visited exactly once , the complexity of this algorithm is of order , where is the number of coding points and is the average number of incoming edges of the coding points .we summarize this insight in the following theorem .[ theo : layering ] despite the actual structure of an acyclic network , an equivalent layered network can be obtained by introducing additional redundant siso nodes , such that all paths from the source to any destination consist of the same number of edges .the factorization ( [ eq : aprod ] ) of can be accomplished together with the layering procedure : during the layering procedure the nodes are assigned to layers and the wiring between the layers can be obtained from the set of edges .we speak of a _ layered variant representation _ of an arbitrary network if it was layered according to theorem [ theo : layering ] and transformed to variant according to theorem [ theo : var12 ] . in already exploited the layered variant representation of communication networks in the context of rlnc . with the aid of the factorized version of the network channel matrix ( [ eq : aprod ] ) we derived in the probability distribution of the entries of and an upper bound on the outage probability of random linear network codes with known incidence matrices .a further consequence of the layered variant representation is a new possibility of the determination of an upper bound on the mincut of acyclic networks in two steps : 1 .layering of the network and a variant to variant conversion if necessary .2 . determination of the mincut according to theorem [ theo : mincutlinnc ] .up to now , we have considered a unidirectional communication from the source node to one or several destination nodes . in this section ,we address the problem of a bidirectional communication between a source - destination pair , i.e. , the case where a destination node replies to the source node , which is of interest , e.g. , in optical ( fiber - optical ) networks . for the moment, we assume that , i.e. , that the individual network channel matrix is square . furthermore ,for notational convenience , we drop the index and denote the considered individual network channel matrix as .when we reverse the direction of communication , it is reasonable to reverse the operations at the intermediate nodes , as depicted in fig .[ fig : revert ] for the case of a node with two incoming and two outgoing edges . 
in the backward direction, not only the direction of communication is reversed, also the summing and the distribution points are interchanged.

(figure [fig:revert]: a node with two incoming and two outgoing edges, forward direction (a) and backward direction (b).)

the input-output relation of this exemplary node by means of the coefficient matrices ([eq:codingmatrix]) for the _forward direction_ is , whereas for the _backward direction_ we obtain , i.e., if we retain the coding coefficients and reverse the operations at an intermediate node, the coefficient matrix for the backward direction is the transpose of the coefficient matrix for the forward direction. the consequence for the individual network channel matrix is stated in the following theorem. [theo:fbduality] the individual network channel matrix for the backward direction in networks which apply lnc is equal to the transpose of the network channel matrix for the forward direction, given that the coding coefficients are retained and the operations at the intermediate nodes are reversed. consider a layered variant representation of an arbitrary network which applies lnc. we first investigate the effects of the reversion of the communication direction on the inter-layer matrices. for that, consider the two adjacent layers depicted in fig. [fig:2layfb](a).

(figure [fig:2layfb]: two adjacent layers in the forward (a) and backward (b) direction.)

the inter-layer matrix for the forward direction results in . if we reverse the processing at the nodes as described above and retain the coding coefficients, the coefficient which corresponded to edge now corresponds to the reversed edge, cf. fig. [fig:2layfb](b). due to the fact that the roles of the layers are interchanged (i.e., the "transmitting" layer is now the "receiving" layer and vice versa), the inter-layer matrix for the backward direction is the transposed version of the one for the forward direction. inserting this into ([eq:aprod]) yields the claim of theorem [theo:fbduality]. theorem [theo:fbduality] can be seen as an analogon to the famous _uplink-downlink duality_ from mimo communications, e.g., , which states that the channel matrix for the uplink is equal to the hermitian transpose of the channel matrix for the downlink, i.e., (in the complex baseband). if , the "reverse source" node has more outgoing edges than the "reverse destination" node incoming ones. as a consequence, the "reverse source" cannot simply transmit individual transmit symbols. rather, we have to force to be square by selecting linearly independent rows and deleting the remaining ones. in the graph this corresponds to deleting the corresponding incoming edges of . another possibility to resolve the problem of having too many outgoing edges at the "reverse source" is the application of _precoding_, which is denoted as _coding at the source_ in the context of nc. then, the "reverse source" transmits individual transmit symbols and linear combinations of them. the consequence of theorem [theo:fbduality] on the validity of the linear network code is as follows.
if a linear network code for the forward direction is valid , then it is also valid for the backward direction .a linear network code is valid , if the network channel matrix has a rank equal to .if this is given , then the rank of the network channel matrix for the backward direction is also equal to thus , in a bidirectional nc scenario it is sufficient to design a linear network code for one direction , e.g. , with the aid of the _ linear information flow ( lif ) algorithm _ .this code can then be used also for the backward direction if the operations at the intermediate nodes are reversed according to fig .[ fig : revert ] .in this work , we have classified nc variants , and have shown that all variants can be traced back to the most basic one nc variant .we have studied layered networks , and the application of lnc to such networks . moreover ,a technique called layering has been proposed , which allows us to introduce a layered structure into arbitrary , non - layered networks . with the aid of the layered variant representation of communication networkswe were able to state an algebraic expression of the mincut , and to derive the forward - backward duality for lnc , which can be seen as an analogon to the famous uplink - downlink duality for mimo channels .furthermore , we already exploited the advantages of the layered variant representation of a communication network in the context of random linear network coding in .1 r. ahlswede , n. cai , s .- y .li , r.w .`` network information flow , '' _ ieee tr .inf . theory _ , pp . 12041216 , july 2000 .li , r. w. yeung , n. cai . `` linear network coding , '' _ ieee tr .inf . theory _ , pp . 371381 ,t. ho , m. mdard , r. koetter , d.r .karger , m. effros , j. shi , b. leong .`` a random linear network coding approach to multicast , '' _ ieee tr .inf . theory _ , pp . 44134430 ,d. silva , f.r .kschischang , r. ktter .`` communication over finite - field matrix channels , '' _ ieee tr .inf . theory _ , pp . 12961305 ,r. ktter , m. mdard .an algebraic approach to network coding ._ ieee/ acm tr . networking _ , octb. schotsch , m. cyran , j.b .huber , r.f.h .fischer , p. vary .an upper bound on the outage probability of random linear network codes with known incidence matrices . in proc . _ 10 .itg conf . on systems , communications , andcoding ( scc ) _ , feb .m. schubert , h. boche . a unifying theory for uplink and downlink multi - user beamforming . in proc ._ ieee int .zrich seminar ( izs 2002 ) _ , pp . 27/127/6 , feb .2002 . c. fragouli , e. soljanin .network coding fundamentals . _ foundations and trends in networking _ , pp . 1133 , 2007 ._ precoding and signal shaping for digital transmission_. new york : wiley , 2002 .s. jaggy , p. sanders , p.a .chou , m. effros , s. egner , k. jain , l.m.g.m .polynomial time algorithms for multicast network code construction ._ ieee tr .inf . theory _ , pp .19731982 , june 2005 .
|
in layered communication networks there are only connections between intermediate nodes in adjacent layers . applying network coding to such networks provides a number of benefits in theory as well as in practice . we propose a _ layering procedure _ to transform an arbitrary network into a layered structure . furthermore , we derive a _ forward - backward duality _ for linear network codes , which can be seen as an analogon to the _ uplink - downlink duality _ in mimo communication systems .
|
displaying 3d content is not only an important issue in the entertainment industry , it is also of increasing importance in science where new numeric and experimental methods have created a wealth of three - dimensional datasets .many stereoscopic display system are based on polarization filtering : the visual information for each eye is oppositely polarized , projected to and scattered or transmitted by the screen , and finally filtered by the viewer s glasses which consist of two polarizers admitting only the correctly polarized light to each eye .there are two options for polarization filtering : linear and circular polarized light .while linear polarizers are simpler to manufacture , circular polarization has the advantage that head tilting will not impair the quality of the image an ideal screen would completely preserve the polarization of the incoming light .however , in practice there is always some amount of ghosting resulting from the change of polarization at the screen .a measure for ghosting is the system crosstalk .it is defined as the ratio between the intensity of light that leaks from the unintended channel to the intended one and the intensity of the intended channel . according to measurements of huang _et al_. , the maximal acceptable system crosstalk for a typical viewer to still experience a stereo sensation is 0.1 .( lower values down to can still be detected by careful visual inspection ) .while it is also known on a theoretical basis that the viewing angle will influence the amount of system crosstalk , to our knowledge no measurements of the angle - dependent system crosstalk of different screen types have been published up to now . neither has the question been studied how the inclination angle ( between the incoming light from the projector and the surface normal of the screen ) influences the system crosstalk .a second measure for the quality of a screen is the brightness of the image , which depends on the amount and angular distribution of the reflectance ( for silver screens ) or transparency ( for rear - projection screens ) of the screen . for silver screens thisis typically quoted as the screen gain , the intensity measured at normal incidence normalized by the intensity of a lambertian source . herewe measure the angle dependent scattering rate for both silver screens and rear - projection screens . is defined as the ratio of the intensity received by a viewer in a certain angle to the intensity of the incoming light , normalized by the solid angle . in this paperwe present measurements of the angular dependence of system crosstalk and scattering rate for three samples of silver screens ( labeled ss1 to ss3 ) and three rear - projection screens ( rp1 to rp3 ) .additionally , we determine the surface texture of the samples using white - light interferometry ; this information provides some qualitative insight into our optical results .[ cols="^,^,^ " , ] figure [ fig : sketch ] shows the experimental setup used for measuring the angular dependence of the system crosstalk and the scattering rate .diode pumped solid state lasers ( dpgl-2050 from photop and verdi v5 sf from coherent ) with a wavelength of 532 nm were used as light sources for the experiments . 
passing a beam expander, the diameter of the laser beam was increased to 3.4 mm (fwhm), whereas the typical size of structural inhomogeneities on the screen surface is at most a few hundred micrometers as shown below. this ensured that the measured data for different spots on the screen are reproducible within %. the laser light was linearly polarized by passing a polarizer or circularly polarized by passing an additional babinet-soleil compensator (from b. halle). the screen sample is irradiated by the laser at normal incidence, and the scattered laser light of the silver and rear projection screens is detected by a detection unit in reflection ([fig:sketch]b) and in transmission ([fig:sketch]c), respectively. the detection unit consists of a power meter (pm100d with sensor s130c from thorlabs), an analyzer and, in case of the circular polarization, an additional quarter-wave plate (both from b. halle). a long and narrow tube was placed in front of the power meter, ensuring that only those photons are detected that scatter from the irradiated spot on the screen along the viewing axis of the sensor in a solid angle of 2 . the detection unit was placed on a rotatable rail with the rotational axis being fixed in such a way that the normal viewing axis of the sensor always intercepts the illuminated area on the screen during rotation. the viewing angle can be varied from -20 to 80 . for silver screens the range of is inaccessible in order not to block the incoming beam. in both cases of linear and circular polarization the incoming laser intensity was measured just in front of the sample. furthermore, the intensity of the scattered light was measured for the intended channel with the polarization being in the same direction as the incoming one ( , analyzer and polarizer parallel) and for the unintended channel with the polarization being in the opposite direction ( , analyzer and polarizer perpendicular) for different viewing angles. from this data one can compute the crosstalk : and the scattering rate : the precision of the measurement of the crosstalk depends strongly on the purity of the initial laser polarization, whereas the scattering rate is not affected within our measurement precision. analyzing the crosstalk without any screen sample (i.e. putting the laser directly in front of the analyzer system), we found the lower resolution limit in the linear case to be less than . in the circular case the degree of polarization results in a lower resolution limit of . the screen samples were obtained from the company screenlab (elmshorn, germany); their specifications and brand names are listed in table [tab:roughness].

sample | brand name | gain | transmission | rms roughness | surface-to-projected-area ratio
rp1    | bs xrp3    |      | 41.8         | 3.8           | 1.03
rp2    | ws xrp3    |      | 88.8         | 3             | 1.13
rp3    | bs rp2     |      | 41.2         | 4.2           | 1.2
ss1    | sh120      | 2.4  |              | 5             | 2.3
ss2    | sf120      | 2.4  |              | 8             | 2.6
ss3    | wa160      | 1.3  |              | 22            | 2.2

the surface topography of the screens was measured using a zemapper white-light vertical scanning interferometer (zemetrics, tucson, usa): the focal plane of an interference pattern is vertically scanned through the sample topography, then a height map is calculated from the collected amplitude maps of the interference patterns. the vertical resolution of the instrument is better than 1 nm; the maximum field of view applied in this study is 1.4 mm.
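returning briefly to the optical measurements, the two figures of merit defined above can be evaluated directly from the recorded powers. the following minimal sketch is not part of the original analysis; the numerical values and the solid angle are hypothetical placeholders, and the variable names are chosen here only for readability:

# hypothetical example values for one viewing angle (placeholders, not measured data)
p_in = 10.0e-3            # incoming laser power measured in front of the sample, in watts
p_parallel = 2.0e-4       # power in the intended channel (analyzer parallel to polarizer)
p_perpendicular = 4.0e-6  # power leaking into the unintended channel (analyzer crossed)
d_omega = 2.0e-5          # detection solid angle in steradians (placeholder value)

# system crosstalk: leaked intensity relative to the intensity of the intended channel
crosstalk = p_perpendicular / p_parallel

# scattering rate: intensity received along the viewing axis relative to the incoming
# light, normalized by the detection solid angle (the intended channel is used here;
# for small crosstalk, adding the leaked power changes the result only marginally)
scattering_rate = p_parallel / (p_in * d_omega)

print(f"crosstalk = {crosstalk:.2e}")
print(f"scattering rate = {scattering_rate:.2f} per steradian")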
for more information on the instrument see . prior to the measurement the rear projection screens were sputter coated with a 40 nm gold layer to increase surface reflectivity. figure [fig:crstk] displays the system crosstalk of the six screen samples, both with circular and linear polarized light. based on the criterion found by huang _et al._ , all screens allow stereo vision for viewing angles smaller than 40 (circular) or 48 (linear). in practice this range will be smaller due to the additional crosstalk originating from the glasses and the inclination angle of the incoming light; the latter effect will be described below. in general, the screens seem to fall into two categories; they are either optimized for a large range of acceptable crosstalk or a minimized crosstalk at small . in both categories the silver screens are outperformed by the rear projection screens: rp3 has a smaller at small than ss1, while rp1 has a broader range of acceptable viewing angles than ss3. regarding the polarization mode, linear polarization has for each screen a clear advantage over circular. at = 10 , varies between 1.1 (rp1) and 4 (rp3), as shown in figure [fig:c_ratio]. please observe that our measurements of the crosstalk of rp3 at small angles might be limited by our experimental resolution.

(figure: measured on screen ss1 using linear polarized light.)

under real-world conditions it is quite likely that the incoming light itself will have an inclination angle to the surface normal of the screen. to quantify the additional crosstalk created this way, we modified the experimental setup by adding a periscope in front of the polarizer. figure [fig:crstk_incl] shows the crosstalk for sample ss1 in the case of linear polarization for inclination angles between 0 and 15 . while there is a clear increase of crosstalk with , the range of acceptable viewing angles is reduced by only 5 . the scattering rates of the screen samples with circular polarization are shown in figure [fig:sr]. deviations of measured with linear or circular polarization are within our error bars. for high luminosities at small viewing angles ss1 and rp2 are the best choice; in terms of best homogeneity ss3 comes closest to a lambertian source. from a theoretical side the performance of a screen will depend both on its material and its surface texture. while we do not have information on the electromagnetic properties of the screen material, the surface texture can be measured with white-light interferometry. perspective images of the surfaces of ss1, ss3, rp1, rp3 are shown in figure [fig:roughness]. the rms (root mean square) roughness and the ratio between the surface area and the projected area of all six screen samples are listed in table [tab:roughness]. a comparison of the angular dependence of with these values hints at as a predictor for the deviation from a lambertian source. this is particularly shown by ss3, which has by far the highest value of and the smallest dependence of . regarding the system crosstalk a similar correlation between and the slope of at large angles might exist. on the other hand, we do not find a clear correlation between the optical properties and . all screens allow effective stereo projection for viewing angles up to 40 .
at larger angles the crosstalk of rear projection screens is considerably smaller than that of silver screens. also, for each screen the crosstalk was larger with circular polarization than with linear. however, when planning a display system, additional factors have to be taken into account, like the available space behind the screen or the sensitivity of the system against the viewers tilting their heads. consequently, no optimal solution for all possible scenarios exists. while the roughness of the screens influences their large viewing angle behavior, clearly more research is needed for a quantitative understanding.
|
the screen is a key part of stereoscopic display systems using polarization to separate the different channels for each eye . the system crosstalk , characterizing the imperfection of the screen in terms of preserving the polarization of the incoming signal , and the scattering rate , characterizing the ability of the screen to deliver the incoming light to the viewers , determine the image quality of the system . both values will depend on the viewing angle . in this work we measure the performance of three silver screens and three rear - projection screens . additionally , we measure the surface texture of the screens using white - light interferometry . while part of our optical results can be explained by the surface roughness , more work is needed to understand the optical properties of the screens from a microscopic model .
|
in this paper an alternative approach to the staff selection competition in the case of two departments considered by baston and garnaev is proposed .the formulation of the problem in baston and garnaev is as follows .two departments in an organisation are each seeking to make an appointment within the same area of expertise . the heads of the two departments together interview the applicants in turn and make their decisions on one applicant before interviewing any others . if a candidate is rejected by both departmental heads , the candidate can not be considered for either post at a later date .when both heads decide to make an offer , they consider the following possibilities . 1. [ casea ] the departments are equally attractive , so that an applicant has no preference between them ; 2 .[ caseb ] one department can offer better prospects to applicants , who will always choose that department .the departmental heads know that there are precisely applicants and that each applicant has a level of expertise which is random .it is assumed that the interview process enables the directors to observe these levels of expertise , which form a sequence of _i.i.d _ random variables from a continuous distribution .if no appointment is made to a department from these applicants , then the department will suffer from a shortfall of expertise .game [ caseb ] has one nash equilibrium , which can be used as the solution to the problem .game [ casea ] has many nash equilibria .this raises the question of equilibrium selection .baston and garnaev interpreted such a variety of nash equilibria solutions as a way of modelling different dynamics within the organisation , which can result in various outcomes during the conscription process .if one departmental head is aggressive and one passive , we might expect a different outcome to the one in which both are of a similar temperament . when both have a similar temperament oneexpects a symmetric strategy and value , but when they have different temperaments one should expect an asymmetric equilibrium and value .the different character of heads is modelled by the notion of a stackleberg leader .also , the difference in the level of complication of equilibria might also be an argument justifying this approach to equilibrium selection .it is shown that these non - symmetric equilibria have the advantage that the players use pure strategies , whereas at the symmetric equilibrium , the players are called upon to employ specific actions with complicated probabilities . the staff selection problem presented aboveis closely related to the best choice problem ( bcp ) .there are some potential real applications of decision theory which strengthen the motivation of the bcp ( the one decision maker problem ) .one group of such problems are models of many important business decisions , such as choosing a venture partner , adopting technological innovation , or hiring an employee using a sequential decision framework ( see stein , seale and rapoport , chun ) .others are an experimental investigations of the , , secretary problem , which compare the optimal policy from the mathematical model with behaviour of human beings ( see seal and rapoport ) .we have not found any such investigation for bcp games .it could be that the theoretical results are not complete enough to start applied and experimental research . 
in spite of the long history of bcp and its generalisations presented in review papers by freeman , ferguson , rose , samuels , there are also competitive versions , on which researchers attention has been focused ( see sakaguchi for review papers ) .let us briefly recall the main game theoretic models of bcp .enns and ferenstein , enns , ferenstein and sheahan solved a non - zero sum game related to bcp .some important mathematical results related to the problem , posed in this paper , were proven many years later by bruss and louchard .the full information version of the game was solved by chen , rosenberg and shepp .the relation between players is as follows .the players have numbers : and .when an item appears then player always has the first opportunity to decide whether to hire the applicant or not ( unless she has hired one already ) .one can say that player has priority .if player does not hire the current applicant , then player can decide whether to hire the applicant or not ( unless she has hired one already ) .if neither player hires the current applicant , they interview the next applicant .the interview process continues until both players have hired an applicant .a hired applicant does not hesitate and accepts an offer without any delay or additional conditions .the games in this group of papers have the same strategic scheme as in game [ caseb ] .the concept of equal priority of the players in the selection process in a model of a non - zero - sum game related to bcp was introduced by fushimi .szajowski extended this model to permit random priority .ramsey and szajowski considered a mathematical model of competitive selection with random priority and random acceptance of the offer ( uncertain employment ) by candidates .uncertain employment is a source of additional problems , which are solved as follows . at each moment the candidate is presented to both players .if neither player has yet obtained an object then : ( i ) : : if only one of them would like to accept the state , then he tries to take it . in this case the random mechanism assigns the availability of the state ( which can depend on the player and the moment of decision ) ; ( ii ) : : if both of them are interested in this state , then the random device chooses the player who will first solicit the state .the availability of the state is the same as in the situation when only one player wants to take it .if the chosen player obtains the state , he stops searching ; ( iii ) : : if this state is not available to the player chosen by the random device , then the observed state at moment is lost to both players .both players continue searching by inspecting the next state . when one player has obtained a candidate the other player continues searching alone .if this player wishes to accept a candidate , the probability that it is available to him is the same as in point ( i ) above .when a non - zero - sum game does not have a unique nash equilibrium , then communication between the players would be useful in deciding which equilibrium should be played .using the idea of correlated strategies introduced by , the set of possible strategies is extended to the set of correlated stopping times and the actions undertaken by the players are correlated .little research has been carried out on the role of communication between players in stopping games . 
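to make the random priority mechanism with uncertain employment concrete, a toy simulation of a single interview round is sketched below. it is not taken from the cited papers; the priority and availability probabilities are invented for illustration, and only the decision rules (i)-(iii) described above are encoded:

import random

def interview_round(wants_1, wants_2, p_priority_1=0.5, p_available=0.7):
    # returns 0 if the candidate is lost to both players, otherwise the number
    # of the player who hires the candidate
    if not wants_1 and not wants_2:
        return 0                                        # neither player makes an offer
    if wants_1 and wants_2:
        # rule (ii): a random device chooses who may solicit the candidate first
        chooser = 1 if random.random() < p_priority_1 else 2
    else:
        # rule (i): only one player is interested and tries to take the candidate
        chooser = 1 if wants_1 else 2
    if random.random() < p_available:                   # uncertain employment
        return chooser
    return 0                                            # rule (iii): candidate lost to both

random.seed(1)
outcomes = [interview_round(True, True) for _ in range(10000)]
print("player 1 hires:", outcomes.count(1),
      "player 2 hires:", outcomes.count(2),
      "candidate lost:", outcomes.count(0))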
and consider correlated equilibria in general dynamic games .the form of correlation is not unique .the approach applied here is based on a generalisation of randomised stopping times .various additional criteria used by the players to correlate their actions restrict the set of possible solutions .these criteria are based on those used in , which resemble ideas of solutions of cooperative games presented in .strategies of staff selection based on the construction of correlated strategies according to various selection criteria are presented in the setting adopted by baston and garnaev .correlated strategy selection was proposed by the authors in .the construction of correlated equilibria in stopping games is based on the concept of correlated equilibria in two - by - two bimatrix games .the geometry of correlated equilibria in bimatrix games is described by calv - armengol . introduced a correlation scheme in randomised strategies for non - zero - sum games extending the concept of nash equilibrium . using this approach some process of preplay communicationis needed to realise such a strategy .aumann s approach has been extended in various manners ( eg see ) . the process of adapting correlated equilibria to stopping games starts from the idea of correlated stopping times .[ corrstrat1 ] a random sequence such that , for each , ( i ) : : is adapted to for ; ( ii ) : : a.s .is called a correlated stopping strategy .the set of all such sequences will be denoted by .let be a sequence of i.i.d .r.v . with uniform distribution on ] , independent of and independent of the markov process . denote and .the expected payoffs are defined as and , respectively .[ defce ] a correlated stopping strategy is called a correlated equilibrium point of , if for every , and .this is a definition of a correlated equilibrium in the normal form of the game .it should be noted that a stronger notion of correlated equilibrium can be introduced by requiring that the correlation must define an equilibrium in each restricted game where steps remain .since the set of nash equilibria is a subset of the set of correlated equilibria , it is clear that whenever the problem of the selection of a nash equilibria exists , the problem of the selection of a correlated equilibrium also exists .however , the notion of correlated equilibrium assumes that communication takes place .such communication can be used to define the criteria used by players to select a correlated equilibrium .we now formulate various criteria for selecting a correlated equilibria .these criteria select subsets of .the concepts which are used here do not come from the concepts of solution to nash s problem of cooperative bargaining .these concepts were used by greenwald and hall for computer learning of equilibria in markov games .[ corrselect1 ] let us formulate four different selection criteria for correlated equilibria in a stopping game . 1. [ util ] a utilitarian correlated equilibrium is an equilibrium constructed recursively in such a way that at each stage the sum of the values of the game to the players is maximised given the equilibrium calculated for stages is played.. 2 . 
[ egal ]an egalitarian correlated equilibrium is an equilibrium constructed recursively in such a way that at each stage the minimum value is maximised given the equilibrium calculated for stages is played .[ repub ] a republican correlated equilibrium is an equilibrium constructed recursively in such a way that at each stage the maximum value is maximised given the equilibrium calculated for stages is played .[ libert ] a libertarian correlated equilibrium is an equilibrium constructed recursively in such a way that at each stage the value of the game to player is maximised given the equilibrium calculated for stages is played .[ thcorreq1a ] the set of correlated equilibrium points satisfying any one of the given criteria above is not empty .let us assume that the cost of not selecting an applicant is .this is the cost of a shortfall of expertise in a department . if a director selects an applicant with expertise , the department gains .let us assume that the candidates have _ i.i.d ._ expertise with uniform distribution on ] there are two asymmetric pure nash equilibria and one symmetric nash equilibrium in mixed strategies . without extra assumptionsit is not clear which equilibrium should be played .baston and garnaev have proposed that if the players have a similar character , then the symmetric solution should be played . in the non - symmetric case the idea of stackleberg equilibrium can be adopted .it is assumed that the first player will be the stackleberg leader and the -stackleberg equilibrium is the solution of the problem selected .we will use an extensive communication device to construct correlated equilibria . in general , correlated equilibria are not unique .usually the set of correlated equilibria contain the convex hull of nash equilibria .however , natural selection criteria can be proposed and the possibility of preplay communication and use of an arbitrator solve the problem of solution selection .the players just specify the criterion .such criteria are formulated in section [ corrselect ] .the set of solutions which fulfil one of the points [ util]-[libert ] in definition [ corrselect1 ] are not empty . for , when ] is of the form . the value of the two - stage game to the players at vertex is the values at these three vertices are such that .( f ) : : this correlated equilibrium is of the form : and the expected gain of the players at correlated equilibrium given the expertise of candidate ] .the value of the two - stage game to the players at vertex is ( g ) : : this correlated equilibrium ( the nash equilibrium in mixed strategies ) is of the form : the expected gain of the players at correlated equilibrium given the expertise of the candidate ] .the value of the two - stage game to the players at vertex is } { ( 2d - x-\frac{1}{2})^2 } d\!x\\ \nonumber & < & v_1^{(e)}.\end{aligned}\ ] ]let us apply the selection criteria on the set of correlated equilibria of the two stage game .we thus define a linear programming problem , in which the objective function is defined by the criterion and the feasible set is the set of vectors defining a correlated equilibrium . hence to find a solution, we compare the appropriate values at each vertex of the correlated equilibria polytope described in the previous section .it should be noted that when either the republican or egalitarian criterion is used , the solution is given by the appropriate solution from one of two linear programming problems . 
in these casesthe two linear programming problems are : 1 .maximise given the equilibrium constraints and the constraint when the egalitarian condition is used or when the republican condition is used .2 . maximise given the equilibrium constraints and the constraint when the egalitarian condition is used or when the republican condition is used . from the symmetry of the game the hyperplane splits the set of correlated equilibria into the two feasible sets for these problems and becomes a vertex of the feasible set in each of the problems .we call this vertex .this vertex replaces vertex or vertex depending on the additional constraint .we have from ( [ v1c])([v1 g ] ) it follows that the maximal game value for the first player is guaranteed at vertex and for the second player at .it means that is the libertarian and is the libertarian correlated equilibrium . in relation to the solutions presented by baston and garnaev , the libertarian equilibrium corresponds to the stackleberg solution at which player takes the role of the stackleberg leader .let us denote .we are looking for such that .for we have , and . for minimal values are and .moreover , .therefore and define egalitarian equilibria and .it follows that any linear combination of these equilibria , where ] , the egalitarian criterion is satisfied at vertices and .it follows that and any linear combination of and defines an egalitarian equilibrium .since it follows by induction that an egalitarian equilibrium is of the required form .in particular , the equilibrium obtained by deciding who plays the role of stackleberg leader based on the result of a coin toss defines an egalitarian equilibrium .suppose libertarian 1 is taken to be the republican equilibrium for the last 2 stages .for the calculations are similar to the calculations made for the libertarian 1 equilibrium .it can be shown that the libertarian 1 equilibrium again maximises the maximum value . using an iterative argument, it can be shown that the libertarian 1 equilibrium is a republican equilibrium . by the symmetry of the gameit follows that the libertarian 2 equilibrium is also a republican equilibrium .unfortunately , the value function of a utilitarian equilibrium for is not uniquely defined . in order to find a `` globally optimal '' utilitarian equilibrium , we can not use simple recursion . from the form of the payoff matrixit can be seen that when the maximum sum of payoffs is .this is obtained when at least one of the players accepts the candidate .such a payoff is attainable at a correlated equilibrium , since and are correlated equilibrium .it follows from the definition of a utilitarian equilibrium that when .[ thmutil ] the libertarian equilibria are the only globally optimal utilitarian equilibria for ( ignoring strategies whose actions differ from those defined by one of these strategies on a set with probability measure zero ) .* proof * first we show that among the set of utilitarian equilibria the minimum value is minimised at the libertarian equilibria for .considering the values of the game at the vertices of the set of utilitarian correlated equilibria when ( obtained by adding the additional condition that for ) , the minimum value is minimised at the two libertarian equilibria . from the form of the two linear programming problems that define this minimisation problem , it follows that these solutions are the only such solutions . 
by symmetry .set .assume that for all .we have where is the expected reward of such a player given that ] and since among utilitarian equilibria the minimum value is minimised at the libertarian equilibria when , it follows by induction that among utilitarian equilibria the minimum value is minimised at the libertarian equilibria for . by symmetry .we now show that the libertarian strategies are the only globally optimal utilitarian strategies for . from the analysis of the two stage game for any utilitarian equilibrium .suppose .from the conditions for a utilitarian equilibrium , it follows that dx + \int_{w^{l1}_{n}}^{k^{\pi}_{n } } [ x+u_{n } - ( v^{\pi}_{n}+w^{\pi}_{n } ) ] dx \\ & > & \int_{w^{l1}_{n}}^{k^{\pi}_{n } } [ x+u_{n } - ( v^{\pi}_{n}+w^{\pi}_{n } ) ] dx \\ & = & \int_{w^{l1}_{n}}^{k^{\pi}_{n } } [ x+u_{n } - ( v^{l1}_{n}+w^{l1}_{n } ) + v^{l1}_{n}+w^{l1}_{n } - ( v^{\pi}_{n}+w^{\pi}_{n } ) ] dx > 0.\\\end{aligned}\ ] ] this inequality follows from the induction assumption , together with .it can be shown that using a similar argument for ( the first inequality in the argument becomes an equality ) .it follows by induction that for the libertarian equilibria are the only utilitarian equilibria which are globally optimal in the sense of the utilitarian criterion .in his recent paper , garnaev has extended the game model introduced in baston and garnaev to consider the situation where three skills of the candidate are taken into account .the proposed solutions to garnaev s problem are nash equilibria and stackelberg strategies , as in , and these solutions are derived in his paper .one can also construct correlated equilibria for this model , which will be the subject of further investigation .garnaev . a game - theoretical model of competition for staff between two departments .in _ game theory and mathematical economics . international conference in memory of jerzy o ( 1920 - 1998 ) _ ,page 10 pages , warsaw , 2004 .http://www.gtcenter.org/archive/conf04/downloads/conf/garnaev.pdf .a. greenwald and k. hall .correlated q - learning . in : tom fawcett & nina mishra , eds , _ proc .twentieth international conf . on machine learning ( icml-2003 ) ,august 2124 , 2003 _ , pages 242249 .the aaai press , washington dc , 2003 .d. ramsey and k. szajowski. bilateral approach to the secretary problem . in : k. szajowski & a.s .nowak , eds , _ advances in dynamic games : applications to economics , finance , optimization , and stochastic control _ , volume 7 of _ annals of the international society of dynamic games _ , pages 271284 .birkhser , boston , 2005 .ramsey and k. szajowski . correlated equilibria in markov stopping games . the main characterizations . in _ game theory and mathematical economics .international conference in memory of jerzy o ( 1920 - 1998 ) _ , warsaw , september 2004 .ramsey and k. szajowski .correlated equilibria in markov stopping games .the numerical methods and examples . in _ game theory and mathematical economics . international conference in memory of jerzy o ( 1920 - 1998 ) _ ,warsaw , september 2004 .
|
this paper deals with an extension of the concept of correlated strategies to markov stopping games . the nash equilibrium approach to solving nonzero - sum stopping games may give multiple solutions . an arbitrator can suggest to each player the decision to be applied at each stage based on a joint distribution over the players decisions . this is a form of equilibrium selection . examples of correlated equilibria in nonzero - sum games related to the staff selection competition in the case of two departments are given . utilitarian , egalitarian , republican and libertarian concepts of correlated equilibria selection are used .
|
cell s internal state is now measurable with expression data on a few thousand genes using transcriptome analysis .high - dimensional data on the gene expressions are gathered , depending on cells and environmental conditions . in spite of the increase in the available data , however , it is sometimes difficult to extract biologically relevant characteristics from them , due to the complexity in gene expression network and dynamics .indeed , the common trend in the transcriptome analysis is to uncover a set of genes that specifically respond to specific environmental changes , while discarding other high - dimensional data that are gathered .search for a simple law that governs a global change in expressions across genes has not seriously been attempted , on the other hand , biologists are traditionally interested in macroscopic quantity such as activity , plasticity , and robustness , even though these have involved qualitative , rather than quantitative , characteristics so far . at this stage , then , it will be crucial to extract such macroscopic quantities from a vast amount of the expression data available using transcriptome analysis . here , the simplest candidate for such macroscopic quantity will be the growth rate in cell population .then , can we extract some universal relationship on global gene expression changes and connect it with a macroscopic ( population ) growth rate of cell ? in searching for such universal relationship , it will be relevant to restrict cell states of our concern , just as thermodynamics , the celebrated macroscopic phenomenological theory is established by restricting our concern to thermal equilibrium . of course , a cell is not in a state of static equilibrium , but involves complex dynamics , and grows ( and divides ) in time .thus we can not apply the formulation in thermodynamics directly .however , we can instead follow the spirit in thermodynamics ; we restrict our concern to a system with steady growth state and intend to extract a common law that should hold globally to such state . considering that the cell keeps its internal state across cell divisions, it is expected that all the components grow with a common rate . as a consequence of such restriction ,then , we may hope to uncover a universal relationship across changes in gene expressions .indeed , in transcriptome analysis data , ( e.g., ) , existence of the correlation in the expression changes across a vast number of genes is suggested , which are brought about through adaptation and evolution . here, we first analyze the transcriptome data in bacteria undergoing stress , to confirm a general relationship between global changes across expression of all genes .to explain such a general relationship in a cellular state , we study a general consequence imposed by a constraint of the cellular states achieved by restricting our concern only to cells that maintain steady growth , i.e. , those cells that can grow and divide , retaining their state . within this constraint ,we derive a theoretical relationship of the changes in all components ( i.e. , expression levels of all genes ) in response to stress . following this theoretical framework, we then re - analyze transcriptome data to demonstrate the validity of our theoretical argument .* changes in gene expression under environmental stress conditions : experimental observations * and for genes in _escherichia coli_. 
represents the difference in the logarithmic expression level of a gene between the non - stressed and stressed conditions , where and represent two different stress strengths , i.e. , low and medium .( a ) , ( b ) , and ( c ) show the plot for osmotic pressure , heat , and starvation stress , respectively .the fitted line is obtained by the major axis method , which is a least - square fit method that treats horizontal and vertical axes equally , and is usually used to fit bivariate scatter data .the slopes are 0.57 , 0.54 , and 0.62 for ( a ) , ( b ) , and ( c ) , respectively .the expression data are obtained from . throughout the paper, we used the expression data of genes of which the expression levels under the three stress conditions as well as the original condition exceed a threshold ( ) , in order to exclude inaccurate data ( about 10% of the total genes were discarded from the analysis ) ., width=302 ] in , transcriptome analysis of _ escherichia coli _ under three environmental stress conditions , namely , osmotic stress , starvation , and heat stress , was carried out using microarrays .for each of these three conditions , three levels of stresses ( high , medium , low ) were used , so that the absolute expression levels , represented by for -th gene , are measured over a total of conditions in addition to the original ( stress - free ) condition . to study behavior of cells under steady - growth conditions ,cells were cultured for a sufficient period beyond the transient response to these stresses , after which gene expression levels were measured .note that , throughout the paper , the point of interest is cellular behavior after recovery of the steady - growth state ( which could be termed _adaptation _ , even though this does not necessarily imply the optimization of the growth rate or the genetic change ) .from these measurements , we calculated the change in gene expressions levels between the original state and that of a system experiencing environmental stress .we investigated the difference in gene expression using a log - scale ( ) , that is ( i.e. , ) for genes , where represents a given environmental condition , and represents the log - transformed gene expression level under the original condition .we adopted a logarithmic scale as changes in gene expression typically occur on this scale , and also as it facilitates comparison with the theory described below . to characterize global changes in expression induced by these environmental stresses, we plotted the relationship between the differences in expression in fig .1a - c for low and medium , where is either osmotic , heat , or starvation stress .the relationship between all possible combinations of stresses and stress strengths are presented in supplemental fig .s1 . for the same type of stress , correlates strongly over all genes , which suggests that the global trend in changes in expression levels can be represented by a small number of macroscopic variables ., i.e. , an iso- line . for different environmental conditions ,the locus in the state space follows a different iso- line .( b ) changes in expression for each gene is governed by the change in the growth rate . 
for different stress types ,the change is shifted while governed by ., width=264 ] * theory for the steady - growth state * to discuss changes in cellular state in response to environmental changes , we introduce a simple theory assuming a steady - growth state in a cell .when a cell grows at this steady state and reproduces itself , all the components it contains , e.g. , the proteins that are expressed , have to be approximately doubled .the abundance of each component increases at an almost equal rate over the time - scale of cell division ; if the growth rates of some components were lower than that of others , the component would become diluted over time , and after some divisions , the component would be extinct " , so that the cell state would not accommodate steady growth .for a cell to maintain the same internal state , all the components have to be synthesized at the same rate across cell divisions .this steady - growth condition has to be satisfied amidst the nonlinearity , complexity , and stochasticity of biochemical reactions .consider a cell consisting of chemical components , of which the synthesis allows it to grow and divide . in a cellular state under steady - growth conditions ,the cell number increases exponentially over time , and thus each component within the cell also increases exponentially , as is expected from the autocatalytic nature of chemicals as a set of intracellular components .hence , it is natural to assume that the abundance of components within the cell ( as well as the cell volume ) would generally grow exponentially over a cell division cycle .then , the abundance of -th component increases with , over a cell division cycle , where is the growth rate of the component .however , the steady - growth constraint under which the concentration of each component is maintained implies that for all components . 
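in symbols (the notation below is introduced here for readability and is not quoted from the text): writing x_i for the abundance of the i-th of the n components and mu_i for its growth rate, exponential growth over a division cycle means

\[ x_i(t) \propto e^{\mu_i t}, \qquad i = 1, \dots, n , \]

and the steady-growth requirement that no component is diluted out or over-enriched relative to the others forces a common rate,

\[ \mu_1 = \mu_2 = \cdots = \mu_n \equiv \mu . \]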
as is determined through the biochemical reactions in a cell , given from -dimensional dynamics , the constraint yields constraints on the -dimensional state space ( see fig .after changes in the environment , there may be a transient period during which the cells have not yet attained this steady - growth state , but the steady state is likely to be attained over time , as long as the cell maintains all of its internal components .the growth rate itself is changed in response to a new condition , but the constraints are preserved .hence , over the long term , in response to environmental changes , the cell progresses along a one - dimensional curve in an -dimensional state space of all components .this creates a general constraint on all gene expression levels .considering that represents a vast number ( say protein species in typical cells ) , this reduction from to 1 is quite marked .naturally , cells are not always in this steady - growth state .when a cell experiences different conditions , the growth rate of each component changes so that the concentration of each component is altered .later , however , cells return to a steady growth - state with altered compositions of these components , somewhat analogous to the restrictions of thermal equilibrium state : when conditions within a system are changed , the temperature can become non - uniform .the temperature at a box can vary ( sometimes on a microscopic scale , invalidating the existence of temperature itself ) , but after approaching equilibrium , all s are equal , so that a description using a few variables again becomes possible .likewise , in our case , in the transient state could differ by component , but after recovery of steady growth , all s are equal , allowing for a macroscopic description .next , we investigate the consequence of this constraint on steady growth .consider the concentration of each component .since each component is synthesized ( or decomposed ) in relationship to other components , the temporal change in the concentration of each component is represented as a function of the concentrations of the component itself and that of others , for instance by the rate - equation in chemical kinetics .furthermore , each component , as well as the cell volume , grows at the rate .thus , the concentrations are diluted by this rate .hence , the time - change of a concentration is given by now , the stationary state is given by a fixed point condition for all .for the sake of convenience , let us denote , and . then , eq .( 1 ) can be written as with the corresponding fixed point solution in response to environmental changes , the growth rate itself changes , as does each concentration ; however , the condition requiring that is independent of for all has to be satisfied .thus , a cell has to stay at a 1-dimensional curve in the -dimensional space , under a given change in the environmental conditions ( e.g. , against changes in stress strength ; see fig .2a ) . with an environmental change , all concentrations , , and change , while the condition that is independent of is maintained as long as the cells continue steady - state growth .we assume that all the components are retained after the change in environmental conditions , and that no new component ( gene ) emerges . 
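a minimal numerical sketch of this steady-growth picture may be helpful; it is not from the paper, and the linear "catalytic" rate constants below are invented purely for illustration. in this toy model each component is synthesized at a rate given by a positive linear combination of the others, and the per-component growth rates converge to a single common value, as required above:

import numpy as np

rng = np.random.default_rng(0)
n = 5
a = rng.uniform(0.1, 1.0, size=(n, n))   # invented positive synthesis coefficients

x = rng.uniform(0.5, 1.5, size=n)        # abundances of the n components
dt = 1.0e-3
for _ in range(200000):
    x = x + dt * (a @ x)                 # each component is produced from the others
    x = x / x.sum()                      # keep only the composition (avoids overflow)

rates = (a @ x) / x                      # per-component growth rates in the final state
print(rates)                             # nearly identical entries: the common rate mu
print(np.max(np.linalg.eigvals(a).real)) # coincides with the leading eigenvalue of a

the final composition is the fixed point of the concentration dynamics with dilution at the common rate, i.e., the steady-growth state discussed above.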
taken together , the cellular state is represented on an -dimensional space .now , consider intracellular changes in response to environmental changes as being represented by a set of continuous parameters , which denote environmental changes under the stress condition . for the moment , we omit the stress type . with this parameterization , the steady - growth condition leads to we consider the parameter change from to , where each changes from at , to , which is accompanied by a change from to . assuming a gradual change in the dynamics , we introduce a partial derivative of by at , which gives the jacobi matrix . now considering the condition under which the change is sufficiently small , and taking only the linear term in , we get with under the linear conditions we are concerned with , , so that holds for a constant .accordingly , we obtain where . since the latter term on the right - hand side is independent of the magnitude of , we simply have over all ( see fig. 2b ) .hence , the change in the expression in response to external change is proportional over all components in this form .this provides a possible explanation for the observed transcriptome analysis shown fig .1 . according to our theory , the proportion coefficient in the expression level should agree with the growth rate . here, for each condition , the change in the growth rate was also measured .( is either osmotic , heat , or starvation stress ) . in fig .3 , we compared the slope of the changes in gene expression , i.e. , the common ratio , with .the plot shows rather good agreement between these two . in this respect ,the theory based on steady - state growth and linearization of changes in stress applies well to the transcriptome change ., while the ordinate is the slope in .the red , green , and blue dots represent osmotic , heat , and starvation stress data , respectively , while the pair runs over different strengths of the same stress type . , width=170,height=170 ] * changes in gene expression across different types of stresses * so far , we have compared the expression levels across different strengths for the same type of stress .however , expression changes can also be compared across different stress conditions .interestingly , the genome - wide correlation of expression levels is not restricted to a change in the same stress condition . in fig . 4 , , which plots expression changes across different stress conditions ( either starvation , heat , or osmotic stress ) , correlationis still observed , even though there are more genes that deviate from the common proportionality , leading to lower correlation coefficients , as compared with the correlations observed under the same stress conditions .the correlation is also discernible for other choices of , as shown in supplemental fig .s1 , where all the correlation diagrams of across all possible stress conditions are plotted .note that such proportionality across genes has also been suggested for several experiments , over different environmental conditions .the finding of correlation , even with reduced proportionality , implies a common trend in changes in expression across many genes , which is not necessarily the result of a given stress condition , but is a concept that holds across different environmental conditions .since gene expression dynamics are very high - dimensional , this correlation suggests the existence of a strong constraint to adaptive changes in expression dynamics .below , we discuss the theoretical origin of this correlation . 
in eqs .( 5)-(6 ) , the environmental change is no longer represented by a scalar variable , but the environmental change involves a different direction , so that and depend on the type of environmental ( stress ) condition .hence , instead of eq .( 7 ) , we get here , the right - hand side ( rhs ) , in general , depends on each gene .this could blur the proportionality in over all genes . in the following case , however , the dependence of the rhs on is relaxed , to support approximate proportionality as indicated in fig .when and are independent of , which we denote as and , respectively , the rhs is reduced to so that the common proportionality of the change in expression holds , while the proportion coefficient is shifted from a simple ratio between the growth - rate changes .sometimes , environmental changes affect all processes , globally .for example , if temperature or nutrient resources are increased , the synthesis ( or decomposition ) rates of all reaction processes are amplified across the board .of course , there are some genes for which deviates from the above common value .if the number of such genes with a specific response is small ( and , its influence on other genes is small , i.e. , the jacobi matrix is sparse ) , then the contributions from genes with a common value makes up the major portion of the summation in the rhs of eq .if we neglect the minor contributions from a few specific genes , common proportionality could generally be maintained .indeed , only a limited number of specific genes are expected to respond directly to environmental changes . according to this approximation, the proportion coefficient deviates from by the factor .note that this correction in the proportion coefficient depends only on the type , but not on the strength of each stress .we examined this point from the transcriptome data analyzed here , by plotting the proportion coefficient in versus in fig .the correlation between and the growth rate in this figure also exists across different stress conditions . additionally , the coefficient is roughly proportional to with a proportion coefficient that is mainly determined by the pair of stress types , over different strengths . undeniably , the proportionality over different stress types is not optimal .indeed , existence of gene - specific dependence leads to scattering in ( for ) around the common proportionality by genes , and there are more genes that deviate from the common proportionality for than those for ( compare fig.4 with fig.1 ) , so that the estimation of the proportion coefficient in fig.5 is not so reliable especially for those with lower correlation coefficient . and for different stress types .the combination of stresses is ( osmotic , heat ) for ( a ) , ( osmotic , starvation ) for ( b ) , and ( heat , starvation ) for ( c ) , respectively .the strengths of the stress and are fixed as high in these figures .the slopes are 0.65 , 0.24 , and 0.36 for ( a ) , ( b ) , and ( c ) , respectively , while the correlation coefficient for each data is 0.40 ( a ) , 0.43 ( b ) and 0.54(c ) . the relationships between the changes in gene expression for all possible combinations are presented in supplemental fig .the fitted line is obtained by the major axis method as described in fig . 1 . 
( caption of fig. 5, continued: ) the ordinate is the slope of the expression-change relation. the red dots represent data for the same stress types, while the green, blue, and purple dots show combinations of different stress types: (osmotic, heat), (osmotic, starvation), and (heat, starvation), respectively. the pair runs over different strengths of all stress combinations. the size of each dot represents the correlation coefficient between the two sets of expression changes. a lower correlation indicates that the fit of the slope may be less accurate.

we have shown here that steady-growth conditions lead to a global constraint over all gene expression patterns. with a few additional assumptions, the proportionality in the change in expression across genes can be derived, in which the proportion coefficient is mainly governed by the change in the growth rate. these theoretical predictions were compared with several bacterial gene expression experiments, with approximate agreement. the correlation with the growth rate is also interpreted by neglecting the direct environment dependence for most genes. in other words, external environmental changes trigger changes in the levels of some components, which introduces a change in the growth rate; for the stationary state, only the steady-growth condition is considered. with this approximation, the term for direct environmental changes is neglected, and eq. (7) follows directly, so that growth-rate changes determine gene expression changes globally. indeed, the experimental data may suggest that the growth rate makes the major contribution to changes in gene expression. this dominance of the growth rate is, however, imperfect, so that the environment-specific term has to be taken into account to compare expression of genes across different stress conditions. in our simple approximation that neglects the gene dependence of the direct environmental term, this factor represents the degree of the direct influence of the environment on gene expression dynamics, as compared with the influence on the growth rate. as another possible estimate of this factor, we directly measured the variance of changes in expression across genes, i.e., the mean-square deviation of the expression changes from their average over all genes (see supplemental fig. s2). according to eqs. (8) and (9), this factor grows in proportion to the stress-dependent term in those equations (in addition to the variances determined by the jacobi matrices, which are independent of environmental stress). as shown in supplemental fig. s2, the factor decreases in the order of osmotic stress, starvation, and heat stress. indeed, the deviation from the simple growth-rate ratio in fig. 5 is consistent with the above ordering. furthermore, the environment-specific response of gene expression generally depends on each gene. here, genes that show specific responses to a given environment may be few, while most others may not be influenced directly by the environment; their expression levels may be mostly determined by the homeostatic growth condition. distinguishing such homeostatic genes from those that show specific responses to individual environmental stresses will be an important next step in the statistical analysis of adaptation. in eq. (1), we have not assumed any specific functional form with particular dependence upon some protein species. in a recent study, the specific dependence of the fraction of ribosomal proteins upon the growth rate was discussed by adopting a description with a few degrees of freedom for protein groups.
it will be interesting to introduce some specific genes in our formulation while keeping high - dimensional expression dynamics .it is also interesting to note that gene expression changes across genes correlate between environmental and genetic perturbations .in fact , ying et al. measured the changes in gene expression induced by the environmental perturbation , and the genetic perturbation induced by external reduction of several genes .again , they observed a strong correlation between and across genes ( see fig . 5 of ) .indeed , our theory can also be applied to adaptive evolution , in which growth rate is first reduced by encountering a novel environment , and then recovers by genetic changes through evolution , so that . according to our relationship, changes in gene expression levels introduced by a new environment is reduced through adaptive evolution , i.e. , , as discussed . in other words , there is a common homeostatic trend for the expression of most genes to return to the original level , as extensively observed experimentally .a few issues should be considered prior to the application of the theory presented here .first , it must be assumed that the components continue to exist , and that novel components do not appear . under this condition ,the postulate for common generally holds , even though the linear approximation does not .however , even if some components do become extinct or novel components emerge , the constraint may still exist for other components , and the proportionality relationship eq .( 7 ) holds approximately , as long as the influence of the extinct or emerging components is limited .second , it is assumed that the fixed - point of eq .( 1 ) is not split by bifurcation .when bifurcation occurs , we can apply our theory along each branch ( under the condition that the inverse jacobi matrix exists ) , but direct comparisons can not be made across different branches .moreover , in some cases , the attractor of the expression level is not a fixed - point , but is an oscillatory state .however , as long as the oscillation period is shorter than the cell division time , one can use the average of the period , instead of , leaving the present argument valid .third , we adopted a linearization approximation to obtain eq .( 7 ) . for larger changes in external conditions, there will be a gene - specific correction to the linear relationship eq .however , linearization is adopted after taking the logarithm of gene expression levels , so that the size of may not be so restrictive when seen in the original scale of gene expression .indeed , the agreement with the theory shown in figs .1 and 3 for the same stress indicates that the linearization approximation is valid , even though the growth rate is reduced to less than half of the original .the present theory facilitates description of a cellular system with only few macroscopic variables , for characterization of adaptation and evolution .furthermore , our theory with regards to common can be applied to any system of stationary growth .as presented , each element represents a replicating molecule within a cell , but , similarly , we can apply our theory by using such an element to describe cells of different types within an organism . 
alternatively, macroscopically, one can assign an element as the population of each species in a stationary ecosystem. the multi-level constraint of the steady-growth condition across a hierarchy is an important concept for elucidating global relationships in complex-systems biology.

ying bw, seno s, kaneko f, matsuda h, yomo t (2013). multilevel comparative analysis of the contributions of genome reduction and heat shock to the escherichia coli transcriptome. _bmc genomics_ 14(1): 25.
cells adapt to different conditions by altering a vast number of components , which is measurable using transcriptome analysis . given that a cell undergoing steady growth is constrained to sustain each of its internal components , the abundance of all the components in the cell has to be roughly doubled during each cell division event . from this steady - growth constraint , expression of all genes is shown to change along a one - parameter curve in the state space in response to the environmental stress . this leads to a global relationship that governs the cellular state : by considering a relatively moderate change around a steady state , logarithmic changes in expression are shown to be proportional across all genes , upon alteration of stress strength , with the proportionality coefficient given by the change in the growth rate of the cell . this theory is confirmed by transcriptome analysis of _ escherichia coli _ in response to several stresses . * popular summary * _ cells consist of a vast number of components whose concentrations are now measurable by means of transcriptome analyisis for gene expressions . then , is it possible to extract biologically relevant features such as cellular growth , adaptation , and differentiation from such high - dimensional data ? can we uncover a universal law that governs across these high - dimensional data of gene expression levels ? here , recall that thermodynamics achieved a description by just few macroscopic variables from the motion of an immense number of molecules , by restricting our concern to thermal equilibrium . of course , cells are not in equilibrium . instead , they grow and divide , while keeping their concentrations of components at an approximately same level , in a steady - growth state . if we restrict our concern to such cells under a steady - growth condition , it implies that all the intracellular components are approximately doubled before cell division . from this constraint , a general law governing changes in gene expression during adaptation to environmental changes is derived theoretically ; according to this law , changes in the expression of each gene are shown to be highly correlated , with a proportion coefficient determined by the growth rate of the number of cells ; this is confirmed from transcriptome data of bacteria , _ escherichia coli _ under different levels and types of environmental stresses . these correlated changes represent cellular homeostasis in response to environmental changes , set a constraint on high - dimensional changes in expression , represented by a single quantity , i.e. , the cell growth rate , and facilitate a macroscopic description of cells during adaptation and evolution . _
a series of space very long baseline interferometry ( vlbi ) experiments in which an orbiting radio telescope satellite was used for vlbi observations has provided successful development of very high spatial resolution in astronomy . in particular , the first dedicated space vlbi project , vsop , with the halca satellite achieved remarkable scientific results based on spatial resolution up to 1.2 and 0.4 mas at 1.6 and 5 ghz , respectively , with an apogee altitude of 21375 km . to come after the successful vsop , the next space vlbi mission , vsop-2 ,is being planned by institute of space and astronautical science ( isas ) .this project will launch a satellite radio telescope ( srt ) , which will be equipped with a 9.1-m off - axis paraboloid antenna and dual - polarization receivers to observe at 8.4 , 22 , and 43 ghz , together with terrestrial radio telescopes ( trts ) with the sensitivity being a factor - of - ten higher than vsop .launch is planned for 2012 . with two intermediate frequency ( if ) bands , each with a two - bit sampled 128-mhz a total 256-mhz bandwidth will be available .the achievable maximum baselines will exceed 37800 km with the planned apogee altitude of 25000 km , and the highest spatial resolution will be 38 at 43 ghz .millimeter - wave observations in space vlbi will be a frontier in astrophysics because there are various compact objects for which very high spatial resolution is essential . there is , however , a difficulty in millimeter - wave vlbi in terms of the fringe phase stability .data - averaging of the fringe is usually performed within a certain time scale , the so - called coherence time , for which a root - mean - square ( rms ) of the fringe phase is less than one radian . in the conventional calibration scheme in vlbi with a fringe - fitting technique ,it is necessary that the fringe be detected in less than the coherence time .the coherence time in vlbi at 43 ghz is limited to a few minutes by stochastic variations in the fringe phase , mainly due to the turbulent media of the earth s atmosphere ; it is thus difficult to conduct a long - time averaging in vlbi in order to improve the signal - to - noise ratio ( snr ) .although celestial radio waves received on a satellite are not affected by the atmosphere , the fringe of a space baseline ( a combination of orbiting and terrestrial telescopes ) also suffers from the atmospheric phase fluctuations because one of the elements is inevitably a terrestrial radio telescope .phase referencing is a successful phase - calibration scheme for vlbi . herea scientifically interesting target source is observed with an adjacent reference calibrator with fast antenna pointing changes ( antenna switching ) in order to compensate for any rapid phase fluctuations due to the atmosphere ( ; ) .phase referencing can also remove long - term phase drifts due to geometrical errors and smoothly variable atmospheric delay errors , as well as any instability of the independent frequency standards .the phase referencing technique has been proved for imaging faint radio sources that can not be detected with the conventional vlbi data reduction ( ; ; ) .various astrometric observations in the vlbi field have also been made with the phase referencing technique to obtain the relative positions with the accuracies on the order of 10 ( ; ; ; ; reid et al . 
1999 ; ) .phase referencing will be useful for vsop-2 .to investigate which component of errors has a more significant influence than others on the quality of the synthesized images obtained with vsop-2 phase referencing , and to give feedback for designing the satellite system , a software simulation tool for space vlbi has been developed .here we report on the effectiveness of vsop-2 phase referencing by simulation .the basic ideas of phase compensation with phase referencing are described in section 2 .the residual phase errors after phase compensation are discussed in section 3 based on quantitative estimations . in section 4 , the newly developed space vlbi simulator for this study is described .simulation work for vsop-2 phase referencing is presented to clarify the constraints on specific observing parameters : the separation angle between a target and a calibrator ; the time interval to switch a pair of sources ; the orbit determination accuracy of the satellite .further discussions are given in section 5 to describe the feasibility of phase referencing with vsop-2 .the conclusions are summarized in section 6 .in this section we describe basic ideas of phase referencing . in the following discussion a single on - source duration is referred to as a scan . in phase referencing ,alternate scans are made on the target and phase referencing calibrator .one observation period from the beginning of the calibrator scan , then the target scan and return to the beginning of the calibrator scan , is referred to as the switching cycle time , .figure [ fig:02 - 01 ] shows a schematic drawing of vsop-2 phase referencing .the correlated vlbi data reveal a time series of a complex quantity , called a fringe , which is composed of amplitude and phase including information on the visibility of a celestial object as well as various errors from instruments and propagation media . let us assume that the difference in the arrival time of a celestial radio wave between telescopes and its derivative are largely removed by subtracting their a priori values , calculated in the correlator .phase referencing is used to observe the target and a closely located calibrator in the sky .we refer to the fringe phases of the target and phase referencing calibrator as and , respectively , expressed as follows : where : the a priori phase calculated in the correlator ; , : phase errors due to the dynamic components of the troposphere and ionosphere , respectively ; , : long - term phase variations depending on the observing elevations of terrestrial telescopes due to the uncertainties of the tropospheric and ionospheric zenith excess path delays , respectively ; : phase error due to the baseline vector error coming from uncertainties of the telescope positions and erroneous estimations of the earth orientation parameters ( eop ) ; : the instrumental phase error due to the independent frequency standards , transmitting electric cables and so on ; : phase error due to the uncertainty in the a priori source position in the sky ; : visibility phase component representing the source structure ; : a contribution of the thermal noise . 
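the compensation that these terms feed into, spelled out in the next paragraphs, amounts to interpolating the calibrator phases to the target epochs and subtracting; a minimal sketch of that step is given below. this is an illustrative implementation of linear interpolation between the calibrator scans bracketing each target scan, not the actual correlator or post-processing software, and the function, variable names, and toy numbers are ours.

```python
import numpy as np

def phase_reference(t_target, phi_target, t_cal, phi_cal):
    """Subtract calibrator fringe phases, linearly interpolated in time to the
    target scan epochs from the bracketing calibrator scans (each scan reduced
    to a single time/phase sample here, purely for illustration)."""
    phi_cal = np.unwrap(np.asarray(phi_cal, dtype=float))   # connect calibrator phases first
    phi_interp = np.interp(np.asarray(t_target, dtype=float), t_cal, phi_cal)
    return np.asarray(phi_target, dtype=float) - phi_interp

# toy usage: a slow common phase drift plus a 0.3-rad target visibility phase
t_cal = np.arange(0.0, 601.0, 60.0)          # calibrator scans every 60 s
t_tgt = t_cal[:-1] + 30.0                    # target scans halfway in between
drift = lambda t: 0.02 * t + 0.5 * np.sin(2.0 * np.pi * t / 600.0)
print(np.round(phase_reference(t_tgt, drift(t_tgt) + 0.3, t_cal, drift(t_cal)), 2))
# approximately 0.3 rad at every target scan, with only small interpolation residuals
```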
here, is the time that the target is observed , and , temporally apart from , is the time that the calibrator is observed .if the structure of the calibrator is well - known , three terms can be identified in equation ( [ equ:02 - 02 ] ) , as follows : where is an error term consisting of , , , , , , , and .the calibration data , , for the target at time is obtained from of the temporally closest two calibrator scans , as follows : where and are interpolated calibrator phases at .the final step of the phase compensation is carried out by subtracting from equation ( [ equ:02 - 01 ] ) , as follows : \nonumber \\ & & + \phi_{\mathrm{dtrp}}(t^{\mathrm{t } } ) + \phi_{\mathrm{dion}}(t^{\mathrm{t } } ) + \phi_{\mathrm{strp}}(t^{\mathrm{t } } ) + \phi_{\mathrm{sion}}(t^{\mathrm{t } } ) \nonumber \\ & & + \phi_{\mathrm{bl}}(t^{\mathrm{t } } ) + \phi_{\mathrm{inst}}(t^{\mathrm{t } } ) + \delta\epsilon_{\mathrm{therm}}(t^{\mathrm{t}}),\end{aligned}\ ] ] where is the phase difference between and , and is the phase difference between and .if is short enough ( typically , shorter than a few minutes at centimeter to millimeter waves ) and the calibrator is located closely enough ( typically , within a few degrees ) , the phase errors , except for the thermal noise , can almost be canceled .an uncanceled term , , gives the relative position of the target to the calibrator with a typical accuracy of much less than one mas .another aspect of the advantages of phase referencing is to eliminate the rapid time variation caused by the turbulent atmosphere .this means that the coherence time , which is limited by the atmosphere , becomes longer , so that faint radio sources can be detected by means of the long time averaging .although phase referencing is capable of removing a large amount of the fringe phase errors , residual phase errors remain after phase compensation because the target and calibrator are observed with a certain time separation , and not on the same line of sight .being different from a terrestrial baseline consisting of both terrestrial telescopes , a single space baseline includes a single line - of - sight atmospheric phase error for a terrestrial telescope , and uncertainty of a satellite trajectory in the orbit . in this sectionwe analytically estimate the residual phase errors of the space baseline after phase compensation .we attempt to characterize the distribution of phase offsets from a very large number of samples obtained with given parameters , such as uncertainties in the a priori values , switching cycle time , separation angle , zenith angle , and so on .the earth s atmosphere causes an excess path of radio waves passing through it ( thompson et al .let us distinguish two types of excess path errors in the following discussions : one is a dynamic component ( fluctuation error ) , and the other one is a nearly static component ( systematic error ) . here , we first address the atmospheric phase fluctuations .phase differences due to a turbulent medium are often characterized by a spatial structure function ( ssf ) , which is defined as the mean - square difference in the phase fluctuations at two sites separated by a displacement vector , , as follows : ^ 2 \right > , \end{aligned}\ ] ] where the angle brackets mean an ensemble average , and is a position vector . according to , the ssf of the atmospheric phase shift with kolmogorov turbulence can be approximated by : where and are referred as inner and outer scales , respectively , and is a structure coefficient . 
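a commonly used piecewise form with these same ingredients (a single structure coefficient, an inner scale, and an outer scale), consistent with the 5/6-power scaling of the rms phase used below, is the following sketch; the exact expression intended above may differ in its treatment of the transitions.

\[
D_{\phi}(d) \;\approx\;
\begin{cases}
C^{2}\, d^{5/3}, & d \le L_{\mathrm{in}},\\[2pt]
C^{2}\, L_{\mathrm{in}}\, d^{2/3}, & L_{\mathrm{in}} < d \le L_{\mathrm{out}},\\[2pt]
C^{2}\, L_{\mathrm{in}}\, L_{\mathrm{out}}^{2/3}, & d > L_{\mathrm{out}},
\end{cases}
\qquad d = |\mathbf{d}| ,
\]
so that the rms phase difference grows as \(d^{5/6}\) on short separations, more slowly between the inner and outer scales, and saturates beyond the outer scale.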
the spatial structure of the atmospheric phase fluctuations can also be modeled as a phase screen , assuming frozen flow , in which a laminar sheet with a fixed spatial pattern representing the distribution of the turbulent medium , flows at a constant speed .we use the phase screen model in the following discussions about the atmospheric phase fluctuations .the water vapor in the troposphere is neither well mixed nor regularly distributed in the lower troposphere , and therefore is not well correlated with the ground - based meteorological parameters .hence , the water vapor in the troposphere is highly unpredictable . observationally showed that the inner and outer scales of the ssf are 1.2 and 6 km , respectively , at their observing site at the very large array .the tropospheric phase fluctuations play a major role in restricting the coherence time to be quite short , especially at millimeter waves , because the refractive index is non - dispersive in centimeter - to - millimeter waves . at higher observing frequency bands ,vlbi synthesized images can be improved by removing telescope data whose weather is not good .thus , to use a large terrestrial telescope network in vlbi is important not only for ( , ) coverage , but also to have a number of telescopes where the tropospheric conditions are good . to make a successful phase connection at 22 or 43 ghz by interpolating the calibrator fringe phases, the switching cycle time should be shorter than a few minutes , and the separation angle should be smaller than a few degrees .assuming such a fast antenna switching observations for such a pair of sources , a residual phase error , , due to the dynamic troposphere for a space baseline , is expressed as follows : & \approx & \frac{2\pi\nu c_{\mathrm{n } } \sqrt{1.4h_{\mathrm{s } } } \sqrt{\sec{z_{\mathrm{g}}}}}{c } \nonumber \\ & \times & \left(\frac{v_{\mathrm{w } } t_{\mathrm{swt}}}{2 } + { h_{\mathrm{w } } \delta\theta}\sec{z_{\mathrm{g } } } \right)^\frac{5}{6},\end{aligned}\ ] ] where is the separation angle between the target and the calibrator , is the observing frequency , is the speed of light , is the structure coefficient of the ssf of the troposphere to the zenith , defined by , is the scale height of the tropospheric water vapor , is the wind velocity aloft or flow speed of the phase screen , is the height of the phase screen , and is the zenith angle at the ground surface for a terrestrial telescope .the first in equation ( [ equ:03 - 05 ] ) comes from an indication based on numerical calculations for the tropospheric phase fluctuations made by , while the second comes from a factor to project the separation between a pair of sources onto the phase screen .although is very different at different telescope sites , seasons , and weather conditions , it can be assumed that the values of 1 , 2 , and 4 m are equivalent to good , typical , and poor tropospheric conditions , respectively , with the assumption of kolmogorov turbulence ( j. ulvestad , vlba scientific memo no.20).http://www.vlba.nrao.edu / memos / sci/. 
assuming typical values of km and m s , we can obtain the following approximation from equation ( [ equ:03 - 05 ] ) : } \approx 27 c_{\mathrm{w } } \cdot \left ( \frac{\nu~\mathrm{[ghz]}}{43~\mathrm{ghz } } \right ) \left ( \frac{\sec{z_{\mathrm{g}}}}{\sec{45^\circ } } \right)^\frac{1}{2 } \nonumber \\ & \times & \left [ \left ( \frac{t_{\mathrm{swt}}~\mathrm{[s]}}{60~\mathrm{s } } \right ) + 0.16 \cdot \left ( \frac{\sec{z_{\mathrm{g}}}}{\sec{45^\circ } } \right ) \left ( \frac{\delta\theta~\mathrm{[deg]}}{2^\circ } \right ) \right]^\frac{5}{6},\end{aligned}\ ] ] where is a modified structure coefficient of the ssf , whose values are 1 , 2 , and 4 for good , typical , and poor tropospheric conditions , respectively .the residual phase errors with the typical parameters used in equation ( [ equ:03 - 07 ] ) and are given in table [ tbl:03 - 01 ] .ionospheric phase fluctuations are caused by irregularities of the plasma density in the ionosphere .since the extra phase due to the total electron content ( tec ) has an inverse proportionality to radio frequency , the amplitude of the ionospheric phase fluctuations becomes smaller as the observing frequency increases .in this report we focus on the temporal tec variations known as medium - scale traveling ionospheric disturbances ( ms - tids ) , which have a severe influence on the vlbi observables , especially at frequencies less than 10 ghz .ms - tids , firstly classified by are often seen at night in high- and mid - latitude areas , and thought to be caused by the thermospheric gravity sound waves at the bottom of the f - region .studies of spectra of the gravity waves in the high - latitude thermosphere by showed that monochromatic waves with a period of a few tens of minutes are present .in addition , the power spectra , ranging from 0.3 to a few milli hertz , show characteristics of kolmogorov power - law , which indicates that kinematic energy causing the power - law perturbations are injected from the monochromatic gravity sound waves .typical ms - tids in mid - latitudes have a spatial wavelength of a few hundred kilometers and a propagation velocity of around 100 ms from high- to low - latitudes .we here attempt to make a model for ionospheric phase fluctuations with the assumption of a phase screen with kolmogorov turbulence driven by the ms - tids .the residual phase error , , due to the dynamic ionosphere for a space baseline , is expressed as follows : & \approx & \frac{2\pi \kappa c_{\mathrm{i } } \sqrt{\sec{z_{\mathrm{i}}}}}{c\nu } \nonumber \\ & \times & \left(\frac{v_{\mathrm{i } } t_{\mathrm{swt}}}{2 } + h_{\mathrm{i}}\delta\theta\sec{z_{\mathrm{i } } } \right)^\frac{5}{6 } , \end{aligned}\ ] ] where m , is the structure coefficient of the ssf of the ionospheric tec to the zenith , is the flow speed of the screen , and is the phase screen height . 
is the zenith angle of the ray at , as follows : where is the earth radius .as described in subsubsection [ tec_data_analyses ] , the value of for a 50-percentile condition is roughly tecu at mid - latitudes , where tecu is the unit used for tec ( 1 tecu electrons m ) .we can obtain the following approximation from equation ( [ equ:03 - 08 ] ) for the 50-percentile condition with assumptions of ms and km ( bottom of the f - region ) : } \approx 0.46 \cdot \left ( \frac{\sec{z_{\mathrm{i}}}}{\sec{43^\circ } } \right)^{\frac{1}{2 } } \left ( \frac{\nu~\mathrm{[ghz]}}{43~\mathrm{ghz } } \right)^{-1 } \nonumber \\ & \times & \left [ 0.21 \cdot \left ( \frac{t_{\mathrm{swt}}~\mathrm{[s]}}{60~\mathrm{s } } \right ) + \left ( \frac{\sec{z_{\mathrm{i}}}}{\sec{43^\circ } } \right ) \left ( \frac{\delta\theta~\mathrm{[deg]}}{2^\circ } \right ) \right]^\frac{5}{6}.\end{aligned}\ ] ] note that of is the zenith angle at km when is .the residual phase errors with the typical parameters used in equation ( [ equ:03 - 10 ] ) are shown in table [ tbl:03 - 01 ] .let us focus on the atmospheric excess path errors after subtracting the dynamic components , which we assume to be temporally stable during an observation for up to several hours .the systematic errors are mainly caused by the uncertainty of the tropospheric water vapor constituent , and an inaccurate estimate of vertical tec ( vtec ) .hereafter , the elevation dependence of the atmospheric line - of - sight excess path ( mapping function ) is approximated by at the height of a homogeneously distributed medium .the residual phase error , , for a space baseline , due to an inaccurate estimate of the tropospheric zenith excess path , is expressed as } & \approx & \frac{2\pi\nu \delta l_{\mathrm{z } } \delta z_{\mathrm{g}}\sec{z_{\mathrm{g}}}\tan{z_{\mathrm{g}}}}{c},\end{aligned}\ ] ] where is the tropospheric systematic error of the excess path length to the zenith , and is a difference of the zenith angles between the target and the calibrator . assuming a of 3 cm ( reid et al .1999 ) and , the following approximation can be obtained from equation ( [ equ:03 - 11 ] ) : \approx 76 \cdot \left ( \frac{\nu~\mathrm{[ghz]}}{43~\mathrm{ghz } } \right ) \left ( \frac{\delta l_{\mathrm{z}}~\mathrm{[cm]}}{3~\mathrm{cm } } \right ) \nonumber \\ & & \times \left ( \frac{\delta\theta~\mathrm{[deg]}}{2^\circ } \right ) \left ( \frac{\cos{z_{\mathrm{g}}}}{\cos{45^\circ } } \right)^{-1 } \left ( \frac{\tan{z_{\mathrm{g}}}}{\tan{45^\circ } } \right ) .\end{aligned}\ ] ] the residual phase errors with the typical parameters used in equation ( [ equ:03 - 12 ] ) are shown in table [ tbl:03 - 01 ] .a residual phase error for a space baseline due to an inaccurate tec measurement is expressed as follows : } & \approx & \frac{2\pi\kappa \delta i_{\mathrm{v } } \delta z_{\mathrm{f } } \sec{z_{\mathrm{f}}}\tan{z_{\mathrm{f}}}}{c\nu},\end{aligned}\ ] ] where is the vtec systematic error , is the zenith angle at the altitude of the electron density peak ( typically , 450 km ) , and is the difference of the zenith angles between the target and the calibrator .a tec measurement technique with the global positioning system ( gps ) has been used to correct for any line - of - sight excess path delays due to the ionosphere ( e.g. , c. walker & s. chatterjee 1999 , vlba scientific memo no.23;http://www.vlba.nrao.edu / memos / sci/ . 
]for example , global ionospheric modeling with gps generally has an accuracy of 310 tecu , or at 1020% level .let us assume and of to obtain the following approximation from equation ( [ equ:03 - 13 ] ) : } \approx 2.7 \cdot \left ( \frac{\nu~\mathrm{[ghz]}}{43~\mathrm{ghz } } \right)^{-1 } \nonumber \\ & \times & \left ( \frac{\delta i_{\mathrm{v}}~\mathrm{[tecu]}}{6~\mathrm{tecu } } \right ) \nonumber\\ & \times & \left ( \frac{\delta\theta~\mathrm{[deg]}}{2^\circ } \right ) \left ( \frac{\cos{z_{\mathrm{f}}}}{\cos{41^\circ } } \right)^{-1 } \left ( \frac{\tan{z_{\mathrm{f}}}}{\tan{41^\circ } } \right ) .\end{aligned}\ ] ] the residual phase errors with the typical parameters used in equation ( [ equ:03 - 14 ] ) are given in table [ tbl:03 - 01 ] .one of the special issues related to the space vlbi is the satellite orbit determination ( od ) error . in vsop ,the precisely reconstructed od of halca has an accuracy of 25 m with the -band range and range - rate , and the -band doppler measurements , which is the best accuracy achieved by the doppler tracking .however , the typical accuracy of terrestrial telescope positions is around 1 cm . for vsop-2 ,as discussed in subsection [ od_method ] , cm - order od accuracy will be achieved by using an on - board gps receiver .the accuracies of the eop solutions given by international earth rotation and reference systems service ( iers ) are typically 0.1 mas in the terrestrial pole offset , 0.3 mas in the celestial pole offset , and 0.02 ms in the ut1 offset .these uncertainties may cause the additional displacement of a few cm for the space baseline .a residual phase error , , for a space baseline due to the baseline error is approximated as follows : } \approx \nonumber \\ & & \frac{\sqrt{2}\pi\nu\delta\theta}{c } \sqrt{\delta p_{\mathrm{trt}}^2 + \delta p_{\mathrm{srt}}^2 + b^2\delta\theta^2},\end{aligned}\ ] ] where is the uncertainty of a terrestrial telescope position adopted in the correlator , is a displacement of the od error of a satellite radio telescope , is the projected baseline length to the celestial sphere at the source , and is the eop error. equation ( [ equ:03 - 15 ] ) can be expressed by the following approximation : } \ \ \approx\ \ 13 \cdot \left ( \frac{\nu~\mathrm{[ghz]}}{43~\mathrm{ghz } } \right ) \left ( \frac{\delta\theta~\mathrm{[deg]}}{2^\circ } \right ) \nonumber \\ & & \times \left [ \left ( \frac{\delta p_{\mathrm{trt}}~\mathrm{[cm]}}{1~\mathrm{cm } } \right)^2 + \left ( \frac{\delta p_{\mathrm{srt}}~\mathrm{[cm]}}{1~\mathrm{cm } } \right)^2 \right .\nonumber \\ & & + \left . 5.9\cdot \left ( \frac{b~\mathrm{[km]}}{25,000~\mathrm{km } } \right)^2 \left ( \frac{\delta\theta~\mathrm{[mas]}}{0.2~\mathrm{mas } } \right)^2 \right]^{\frac{1}{2}}.\end{aligned}\ ] ] the residual phase errors with the typical parameters used in equation ( [ equ:03 - 16 ] ) are given in table [ tbl:03 - 01 ] . 
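the printed approximations above (dynamic troposphere and ionosphere, the two static atmospheric terms, and the baseline-error term) can be collected and evaluated directly. the sketch below simply codes those expressions with the coefficients as printed; the outputs are in degrees of phase, which is how we read the text (and which is consistent with a direct evaluation of the static-troposphere case), and the function names and default angles are ours.

```python
import numpy as np

D2R = np.pi / 180.0

def sigma_dtrp_deg(nu_ghz, cw, t_swt_s, dtheta_deg, z_deg=45.0):
    # dynamic troposphere: 27*C_w*(nu/43 GHz)*(sec z/sec 45)^(1/2)
    #   * [ (t_swt/60 s) + 0.16*(sec z/sec 45)*(dtheta/2 deg) ]^(5/6)
    s = np.cos(45.0 * D2R) / np.cos(z_deg * D2R)
    return 27.0 * cw * (nu_ghz / 43.0) * np.sqrt(s) \
        * (t_swt_s / 60.0 + 0.16 * s * dtheta_deg / 2.0) ** (5.0 / 6.0)

def sigma_dion_deg(nu_ghz, t_swt_s, dtheta_deg, zi_deg=43.0):
    # dynamic ionosphere (50-percentile): 0.46*(sec z_i/sec 43)^(1/2)*(43 GHz/nu)
    #   * [ 0.21*(t_swt/60 s) + (sec z_i/sec 43)*(dtheta/2 deg) ]^(5/6)
    s = np.cos(43.0 * D2R) / np.cos(zi_deg * D2R)
    return 0.46 * np.sqrt(s) * (43.0 / nu_ghz) \
        * (0.21 * t_swt_s / 60.0 + s * dtheta_deg / 2.0) ** (5.0 / 6.0)

def sigma_strp_deg(nu_ghz, dl_z_cm, dtheta_deg, z_deg=45.0):
    # static troposphere: 76*(nu/43 GHz)*(dL_z/3 cm)*(dtheta/2 deg)*(cos 45/cos z)*(tan z/tan 45)
    return 76.0 * (nu_ghz / 43.0) * (dl_z_cm / 3.0) * (dtheta_deg / 2.0) \
        * (np.cos(45.0 * D2R) / np.cos(z_deg * D2R)) * (np.tan(z_deg * D2R) / np.tan(45.0 * D2R))

def sigma_sion_deg(nu_ghz, dI_v_tecu, dtheta_deg, zf_deg=41.0):
    # static ionosphere: 2.7*(43 GHz/nu)*(dI_v/6 TECU)*(dtheta/2 deg)*(cos 41/cos z_f)*(tan z_f/tan 41)
    return 2.7 * (43.0 / nu_ghz) * (dI_v_tecu / 6.0) * (dtheta_deg / 2.0) \
        * (np.cos(41.0 * D2R) / np.cos(zf_deg * D2R)) * (np.tan(zf_deg * D2R) / np.tan(41.0 * D2R))

def sigma_bl_deg(nu_ghz, dtheta_deg, dp_trt_cm, dp_srt_cm, b_km, deop_mas):
    # baseline error (telescope position, satellite OD, EOP): 13*(nu/43 GHz)*(dtheta/2 deg)
    #   * [ (dP_trt/1 cm)^2 + (dP_srt/1 cm)^2 + 5.9*(B/25000 km)^2*(dEOP/0.2 mas)^2 ]^(1/2)
    quad = dp_trt_cm ** 2 + dp_srt_cm ** 2 + 5.9 * (b_km / 25000.0) ** 2 * (deop_mas / 0.2) ** 2
    return 13.0 * (nu_ghz / 43.0) * (dtheta_deg / 2.0) * np.sqrt(quad)

# 43 GHz, 60-s switching cycle, 2-degree separation, typical troposphere (C_w = 2):
print(round(sigma_dtrp_deg(43.0, 2.0, 60.0, 2.0), 1),            # ~61 deg
      round(sigma_dion_deg(43.0, 60.0, 2.0), 2),                 # ~0.5 deg
      round(sigma_strp_deg(43.0, 3.0, 2.0), 1),                  # 76 deg
      round(sigma_sion_deg(43.0, 6.0, 2.0), 1),                  # 2.7 deg
      round(sigma_bl_deg(43.0, 2.0, 1.0, 1.0, 25000.0, 0.2), 1)) # ~37 deg
```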
in vlbi observations ,instrumental phase errors are caused by changes in the electrical path lengths in transmitting cables , gravity deformation of antenna structures , depending on the observing elevation , and independent frequency standards controlling vlbi station clocks .those phase errors usually show slow systematic drifts , depending on the ambient temperature and the observing elevation angles ; it is difficult to predict this behavior from physical models with the accuracy required for the vlbi data analysis .phase referencing observations with fast antenna switching for a closely located pair of sources can cancel almost all of the instrumental phase errors . in this reportwe do not consider the contributions of such phase errors .if there is little information about the position and structure of the chosen phase referencing calibrator , fringe phase errors are induced in the phase compensation .the residual phase error , , due to the positional error of a calibrator , , is approximated as } & \approx & \frac{\sqrt{2}\pi\nu\delta\theta b{\delta}s^{\mathrm{c}}}{c}. \end{aligned}\ ] ] the positions of celestial objects are determined in the international celestial reference frame ( icrf ; ) .the icrf is defined by a set of extragalactic radio sources ( icrf sources ) with simple and/or well - known structures ( ; ) .the icrf sources have a typical astrometric accuracy of 0.3 mas at -band ( 2.3 ghz ) and -band ( 8.4 ghz ) based on the aggregation of geodetic and astrometric vlbi observations .although the icrf sources are well distributed over the sky , the sky coverage is not sufficient to provide suitable phase referencing calibrators . to make up for this , astrometric vlbisurvey activities have progressed to find new phase referencing calibrators .a typical astrometric accuracy of the new calibrator candidates is better than 5 mas .the astrometric accuracy of those sources will be improved by repeated astrometric observations in the future .equation ( [ equ:03 - 19 ] ) can be expressed with the following approximation : } } & \approx & 46 \cdot \left ( \frac{\nu~\mathrm{[ghz]}}{43~\mathrm{ghz } } \right ) \left ( \frac{\delta\theta~\mathrm{[deg]}}{2^\circ } \right ) \nonumber \\ & & \times \left ( \frac{b~\mathrm{[km]}}{25,000~\mathrm{km } } \right ) \left ( \frac{\delta s^{\mathrm{c}}~\mathrm{[mas]}}{0.3~\mathrm{mas } } \right).\end{aligned}\ ] ] the residual phase errors with the typical parameters used in equation ( [ equ:03 - 20 ] ) are given in table [ tbl:03 - 01 ] .there may be very few unresolved sources at space vlbi resolution .there is another possibility to use galactic water maser or silicon monoxide maser sources as phase referencing calibrators , but these galactic maser emissions may also be seriously resolved at vsop-2 spatial resolution . 
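the corresponding number for a calibrator position offset follows the same pattern; the coefficient 46 is the one printed above, the output unit (degrees of phase) is our reading, and the function name and example values are ours.

```python
def sigma_cal_pos_deg(nu_ghz, dtheta_deg, b_km, ds_mas):
    # calibrator position error: 46*(nu/43 GHz)*(dtheta/2 deg)*(B/25000 km)*(ds/0.3 mas)
    return 46.0 * (nu_ghz / 43.0) * (dtheta_deg / 2.0) * (b_km / 25000.0) * (ds_mas / 0.3)

# an ICRF-quality position (0.3 mas) versus a 5-mas survey position at 43 GHz:
print(sigma_cal_pos_deg(43.0, 2.0, 25000.0, 0.3))   # 46 degrees
print(sigma_cal_pos_deg(43.0, 2.0, 25000.0, 5.0))   # about 770 degrees
```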
because the contributions of source structures to the phase errors are dealt with case by case, we do not discuss them in this report .the phase error , , due to thermal noise in phase referencing for a pair of vlbi telescopes is given by & = & \frac{\sqrt{2}\ k\ \overline{t_{\mathrm{sys } } } } { \eta \overline{a_{\mathrm{e } } } \sqrt{\delta \nu } } \sqrt { \frac{1}{s^{\mathrm{t}2 } t^{\mathrm{t } } } + \frac{1}{2s^{\mathrm{c}2 } t^{\mathrm{c } } } } \ \ \ \ , % % } \end{aligned}\ ] ] where is boltzmann s constant , is the system noise temperature , is the effective aperture , is the coherence factor for the vlbi digital data processing , and is the observing if bandwidth ; and are the flux densities of the target and calibrator , respectively ; and are the scan durations of the target and calibrator , respectively .bars over the parameters represent the geometric mean of two telescopes .the factor of 1/2 of comes from the interpolation process in which two neighboring calibrator scans are used for the phase compensation of a target scan between the two . assuming two - bit sampling in analogue - to - digital ( a / d ) conversions and using a system equivalent flux density ( sefd ) , represented by , we obtain the following approximation : } = 1.6\times10^{-5 } \cdot \left ( \frac{\delta\nu~\mathrm{[mhz ] } } { 256~\mathrm{mhz } } \right)^{-\frac{1}{2 } } \nonumber \\ & & \times \left [ \left ( \frac{t^{\mathrm{t}}~\mathrm{[s]}}{10~\mathrm{s } } \right)^{-1 } \left ( \frac{\overline{s_{\mathrm{sefd}}}~\mathrm{[jy ] } } { s^{\mathrm{t}}~\mathrm{[jy ] } } \right)^2 \right .\nonumber \\ & & + \left .\left ( \frac{2t^{\mathrm{c}}~\mathrm{[s]}}{10~\mathrm{s } } \right)^{-1 } \left ( \frac{\overline{s_{\mathrm{sefd}}}~\mathrm{[jy ] } } { s^{\mathrm{c}}~\mathrm{[jy ] } } \right)^2 \right]^{\frac{1}{2 } } , % % } \end{aligned}\ ] ] where is the geometric mean of the sefds of two telescopes .p17mmp16mmp16mmp16 mm error item & + & 8.4 ghz & 22 ghz & 43 ghz + & & & + & & & + & & & + & & & + & & & + & & & +there are some conflicting issues in phase referencing . for example, the geometrical and atmospheric systematic errors causing an image distortion can be reduced by selecting a closer calibrator . on the other hand ,brighter calibrators are often preferable because the larger is the thermal noise in the calibrator fringe , the less successful will the phase connection be .if there are several calibrator candidates around a target , which is better for phase referencing , the closer , but fainter , or the brighter , but more distant ? in the end , the important thing is to select the optimum combination at the observing frequency in order to make the residual phase errors as small as possible .constraints on the separation angle or the switching cycle time for a single space baseline can be evaluated from the approximations described in the previous section . however , for the image synthesis with a large amount of ( , ) samples with multi - baselines , it is hard to predict the image quality from the approximations .in addition , cycle skips in the phase connection of the calibrator fringe phases , often occur as the observing frequency becomes higher , and/or as the switching cycle time becomes longer .degradation in the image quality due to the cycle skips can hardly be predicted by an analytical method . in order to verify the effectiveness of phase referencing with vsop-2, we developed a simulation tool called aris ( astronomical radio interferometor simulator ) . 
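returning to the thermal-noise approximation given earlier in this section, it reduces to a simple signal-to-noise budget. the sketch below codes the printed expression (the 1.6e-5 coefficient, which we read as radians for two-bit sampling and a 256-MHz bandwidth) and also inverts a per-scan, per-baseline signal-to-noise estimate to ask how bright a calibrator must be for a given detection threshold; the SEFD value, the efficiency factor, and the function names are ours, for illustration only.

```python
import numpy as np

def sigma_therm_rad(bw_mhz, t_tgt_s, t_cal_s, sefd_jy, s_tgt_jy, s_cal_jy):
    # printed approximation: 1.6e-5 * (bw/256 MHz)^(-1/2)
    #   * [ (t_t/10 s)^(-1)*(SEFD/S_t)^2 + (2 t_c/10 s)^(-1)*(SEFD/S_c)^2 ]^(1/2)
    a = (t_tgt_s / 10.0) ** -1 * (sefd_jy / s_tgt_jy) ** 2
    b = (2.0 * t_cal_s / 10.0) ** -1 * (sefd_jy / s_cal_jy) ** 2
    return 1.6e-5 * (bw_mhz / 256.0) ** -0.5 * np.sqrt(a + b)

def min_calibrator_flux_jy(snr, bw_mhz, t_cal_s, sefd_jy):
    # single-scan, single-baseline SNR ~ eta * S * sqrt(2 * bw * t) / SEFD,
    # with eta ~ 0.88 for two-bit sampling (a standard value, assumed here);
    # inverted for the flux density that gives the requested SNR
    eta = 0.88
    return snr * sefd_jy / (eta * np.sqrt(2.0 * bw_mhz * 1.0e6 * t_cal_s))

# hypothetical geometric-mean SEFD of 10000 Jy, 20-s scans, 100-mJy target, 300-mJy calibrator:
print(round(sigma_therm_rad(256.0, 20.0, 20.0, 1.0e4, 0.1, 0.3), 2))   # ~1.2 rad per target scan
print(round(min_calibrator_flux_jy(4.0, 256.0, 20.0, 1.0e4), 2))       # ~0.45 Jy for SNR = 4
```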
in this sectionwe first introduce what and how vlbi errors are simulated .demonstrations of vsop-2 phase referencing observations performed with aris are also shown .we then focus on the imaging performance in vsop-2 phase referencing under realistic conditions so as to determine the allowable observing parameters , such as the switching cycle time , the separation angle , the od accuracy of the satellite , the tropospheric condition , and the calibrator flux density . in aris the tropospheric phase fluctuations are modeled as a phase screen assuming kolmogorov turbulence .this simple model is useful when considering the interferometric phase fluctuations due to the troposphere .the grid interval of the screen is set to 1 m. since , in aris , the inner and outer scales can be selected among ( grid interval ) , where is a natural number , those scales are fixed to 1024 and 8192 m for the inner and outer scales , respectively .the phase screen is simulated for each terrestrial telescope site , and flows at a constant speed of 10 m s along west wind .when one screen passes over the line - of - sight of a terrestrial telescope , another new one is created from the edge of the previous screen as a seed so as not to generate an unnatural gap between them .the altitude of the phase screen is 1 km , and the elevation dependence of the amplitude of the fluctuation is achieved by multiplying a factor of as shown in equation ( [ equ:03 - 05 ] ) . a typical simulated time series of the tropospheric fluctuations and the allan standard deviation are shown in figure [ fig:04 - 01 ] .( 80mm,80mm)figure-02.epsi for ionospheric phase fluctuations , we consider two components in the tec fluctuations : one is the ms - tid , and the other is a phase screen and assuming kolmogorov turbulence driven by the ms - tid .the former is sinusoidal waves with a spatial wave length of 200 km and a propagating speed of 100 m s at an altitude of 300 km .the latter has a grid interval of 150 m , and flows at the same speed and at the same altitude of the ms - tid .the inner scale of the screen is set to 76.8 km so as not to be over the spatial wave length of the ms - tid , and no transition region from the inner to outer scales of the ssf is provided in aris .although there can be cases in which a single ionospheric phase screen covers separate terrestrial telescopes , the phase screen is independently simulated for each terrestrial telescope site . the simulated ms - tid and phase screens are transmitted from the geographical poles to the equator along the longitudes .recall that kolmogorov turbulence is assumed to be driven by the ms - tid , so that the amplitude of kolmogorov turbulence is proportional to that of the ms - tid . to determine the balance of the amplitudes between them , we analyzed short - term vtec fluctuations , based on , of tec data taken with a japanese gps receiver network operated by geographical survey institute ( gsi ) of japan . in this studywe kept the balance so that has its maximum value of tecu when the amplitude of the ms - tid is 1 tecu , the largest ms - tid amplitude often observed at mid - latitudes .since the influence of the ionospheric phase fluctuations is generally smaller than that of the tropospheric phase fluctuations at the vsop-2 observing frequency bands , we did not pay attention to various ionospheric conditions . 
instead, the time evolution of the structure coefficient versus the local time and season for the northern hemisphere , as shown in figure [ fig:04 - 02 ] , is given .this figure was obtained from an analysis based on with a smoothing process by means of an elliptical gaussian fitting . from the gps tec analysis ,the 50-percentile condition of is 1.6 tecu m , as shown in figure [ fig:04 - 03 ] . in aris , an unusual ionospheric status , such as a large - scale tid, plasma bubbles often observed at low - latitudes , and geomagnetic storms , is not considered .the elevation dependence of the amplitude of the fluctuations is achieved by multiplying a factor of .the simulated time series of the ionospheric fluctuations at 43 ghz , observed with a space baseline , and the allan standard deviation are shown in figure [ fig:04 - 04 ] . [ cols="^ " , ] at 22 and 43 ghz , more sophisticated phase calibration schemes may be needed in addition to phase referencing to circumvent less calibrator availability in vsop-2 .one of the solutions is to observe with fainter calibrators .assuming a terrestrial radio telescope with a 100-m diameter , calibrators with the flux densities of about 50 mjy will be available at 43 ghz .we conducted another monte carlo simulation at 43 ghz with the minimum calibrator flux density of 50 mjy and found that the probability was improved from 20% to 25% .however , because large telescopes can switch the sources much more slowly , it is not effective to use large telescopes in phase referencing at higher observing frequency bands . showed another possibility to use faint calibrators in phase referencing : they demonstrated their well - organized phase referencing observations , called bigradient phase referencing , using a combination of a very closely located faint calibrator and another bright but rather separate calibrator .fringes of the target and faint calibrator are detected using the phase referencing technique with the bright calibrator .the long - term phase variations , because of the separation to the bright calibrator , are then calibrated with the closely located calibrators .although their demonstrations were made at the -band , the proposed method is promising for vsop-2 at the higher observing frequency bands . a phase - calibration technique with a water vapor radiometer ( wvr ) is also expected to use fainter calibrators in phase referencing . 
in this method a wvr is mounted on a terrestrial telescope to measure the amount of water vapor along the line of sight .an example of the successful application of this method is described by .the wvr phase calibration technique will be able to achieve longer switching cycle time by removing the tropospheric phase fluctuations .this leads to longer scan durations for both target and calibrator in phase referencing observations , so that we can obtain a higher signal - to - noise ratio for faint calibrators .a longer switching cycle time provides further support to use large terrestrial telescopes with rather slow slew speeds .another solution is to observe calibrators at lower frequency bands where a larger number of calibrator candidates is available .the refractivity of the water vapor is almost constant ( non - dispersive ) for radio waves .thus , tropospheric phase fluctuations can be calibrated between different frequencies .vlbi phase referencing experiments with different frequencies between two sources have been successfully demonstrated by .however , since the ionospheric excess path delay is dispersive , it is not possible to correct the ionospheric phase errors with calibrator phases at different frequencies .additional aris simulations were conducted for this multi - frequency phase referencing , and it was found that this method works for the observing frequency combination of 43 and 22 ghz for the target and calibrator , respectively . on the other hand , when calibrator s frequency is 8.4 ghz , large phase offset and fluctuated time variation remain in the compensated 22- or 43-ghz fringe phases due to the ionospheric excess path delays .we have to note that , during summer nights , much larger tec disturbances than the 50-percentile amplitude are often observed , as indicated in figure [ fig:04 - 02 ] . it should also be noted that the influence of the ionosphere may be severer than expected above , because the solar activity is likely to reach its maximum in about 2012 , when the vsop-2 satellite is planned to be launched .the above methods introduced here can improve the effectiveness of phase referencing under limited situations .it is important to carefully consider which method works most effectively for the target if some of them are available .phase referencing has been quite successful when a pair of sources are so close that both of them are observed in the same beam of each radio telescope .in such very fortuitous cases the formal errors of the relative position measurements are as low as several ( e.g. , ; ) . since halca does not have an ability to change the antenna pointing so quickly , a so - called in - beam phase referencing has been carried out in vsop ( ; ; ) .a technical challenge in the attitude control of the satellite will be made in vsop-2 to provide a powerful solution for regularly switching maneuvering . 
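returning to the multi-frequency calibration discussed above, the asymmetry between the troposphere and the ionosphere is simply the frequency scaling of the two media (phase proportional to the observing frequency for the troposphere, and to its inverse for the ionosphere); the bookkeeping is sketched below with our own function and variable names.

```python
def leftover_ionosphere_factor(nu_target_ghz, nu_cal_ghz):
    # scaling the calibrator phase by nu_target/nu_cal removes the non-dispersive
    # (phase ~ nu) tropospheric term exactly, but the dispersive (phase ~ 1/nu)
    # ionospheric phase at the target frequency is left multiplied by (1 - r^2)
    r = nu_target_ghz / nu_cal_ghz
    return 1.0 - r ** 2          # zero would mean perfect cancellation

print(leftover_ionosphere_factor(43.0, 22.0))   # about -2.8: a few-fold residual
print(leftover_ionosphere_factor(43.0, 8.4))    # about -25: a large residual ionospheric error
```

with a 22-ghz calibrator the leftover ionospheric term is only a few times the already small 43-ghz ionospheric phase, whereas with an 8.4-ghz calibrator it is amplified by roughly a factor of 25, consistent with the behaviour described above.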
for phase referencing at 43ghz the satellite is required to repeatedly maneuver between two celestial sources separated by a few degrees every a few tens of seconds , and observe sources with an attitude stability of .since such a switching maneuvering is hardly achievable with standard satellite reaction wheels ( rws ) , two control moment gyros ( cmgs ) with a single - gimbaled flywheel spinning at a constant angular rate are planned to be added for fast attitude switching .a cmg is a momentum - exchange device that can produce large output torque on the satellite body by rotating the gimbal axes .the switching maneuvers around the two orthogonal axes for a pair of sources ( roll and pitch ) are made by controlling the torques provided by the two cmgs while four rws mainly control the attitude around the axis to the sources ( yaw ) by generating control torques .the switching maneuvers during a phase referencing observation are the round trip between a pair of sources , so that the two cmgs generate the torques to switch back and forth with no net change in the total angular momentum of the cmg system by an operational symmetry .numerical simulations for the rigid body with the cmgs showed that a switching maneuver within 15 seconds and tracking the sources during a scan is possible for the 60-s switching cycle time together with a wide range gyroscope .an antenna dynamical model has also been developed in a feasibility study of the switching maneuvering because the deployable main reflector and boom connecting the reflector and satellite body will be major sources to excite eigen - frequencies causing the attitude disturbance .the satellite will be designed so as not to excite the eigen - frequencies lower than 0.25 hz for stabilizing the pointing between the switching maneuvers .a high - speed maneuvering capability with the rws is important to increase the efficiency of the operation of the space vlbi .the capability of the attitude maneuvering over large angles with a rate of s will also be useful for fringe - finder scans interleaved in a phase referencing observation .a highly accurate od , with a positional accuracy of better than several centimeters , is required for vsop-2 phase referencing .this requirement is two orders of magnitude better than the od accuracy achieved for halca by doppler tracking .one of the possible methods to achieve the od required for vsop-2 phase referencing is to use the on - board gps receiver : by using on - board gps navigation systems , the topex / poseidon satellite launched in 1992 to measure the ocean surface level achieved the od accuracy of 23 cm , and grace , to measure the center of the gravity of the earth , achieved an od accuracy of about 12 cm .these satellites are in relatively low - earth orbits , so that more than several gps satellites , whose altitudes are km , are always available .when the user altitude is higher than km , it will be outside of the main beam illumination of a given gps satellite , because the beam width of the gps transmitting antenna is designed to illuminate near - earth users . 
at altitudes near the vsop-2 satellite apogee, zero to only three gps satellites can be available at any given time, even with the on-board gps receiver antenna system covering all directions. to strengthen the orbit determination, high-quality accelerometers can be used together with the gps measurements. accelerometry will connect the orbit positions and velocities over periods of time when the gps measurements are unable to provide good solutions. a covariance analysis for the vsop-2 od showed that conventional gps navigation with accelerometry can achieve an orbit formal error well below 1 cm for the vsop-2 satellite in all three components near perigee. at higher altitudes, however, the od error grows to about 2 cm with the assumption of an on-board accelerometer of 1 nm s^{-2} accuracy, due to the lack of gps measurements at these altitudes. another possibility is to have a gps-like signal transmitter on the vsop-2 satellite. it is expected that, in conjunction with the on-board gps receiver, the od accuracy will then be better than 1.5 cm. to further improve the od accuracy to the 1-cm level, ultra-precise accelerometers at the level of 0.1 nm s^{-2}, now available, should be used. it should be noted that missions like grace were carefully designed to have the accelerometer at the center of mass of the satellite, which may be difficult to achieve for the vsop-2 satellite. to assure the 1-cm accuracy at all times, better determination of gps orbits and clocks will be required. galileo is a gps-like navigation system planned in europe, to be fully operational in 2008. the constellation consists of 30 satellites in circular orbits at an altitude of 23616 km. galileo satellites will be equipped with hydrogen maser time standards and are expected to achieve more precise od. this system is more effective for vsop-2, and 2.5-cm level od can be achieved with the use of gps/galileo receivers on the vsop-2 satellite alone.

( table: minimum calibrator flux density (in mjy) and the probability of finding a suitable phase referencing calibrator at each band: 86% at 8.4 ghz, 43% at 22 ghz, and 20% at 43 ghz. )

the effectiveness of phase referencing with vsop-2 was verified in detail with a newly developed software simulation tool, aris. simulations with aris show that phase referencing with vsop-2 is promising at 8.4 ghz for all of the tropospheric conditions, while at 22 and 43 ghz the phase referencing observations are recommended to be conducted under good and typical tropospheric conditions. at 22 and 43 ghz there is another difficulty in terms of the calibrator choice: our aris simulations show that it is safe to choose a phase referencing calibrator with an expected signal-to-noise ratio on a space baseline larger than 4 for a single calibrator scan, but such a bright calibrator cannot always be found closely enough to a given target at 22 and 43 ghz. the specification requirements of the satellite in terms of the maneuvering capability and od were obtained from our investigations.
at 22 and 43 ghz , one - minute or shorter switching capability is required , while a few minute or longer switching cycle times may be used at 8.4 ghz .an accuracy of the orbit determination of less than cm is required for the mission .current studies concerning the vsop-2 satellite design indicate prospects that it is not easy , but not impossible , to achieve .although the atmospheric systematic errors can not perfectly be removed with the a priori values calculated in the correlator , those phase errors can be corrected in well - organized phase referencing observations along with multiple calibrators .note that the satellite does not need to observe multiple calibrators in a short period , because the systematic errors are related to the terrestrial telescopes .if the atmospheric systematic errors can be successfully removed , a few centimeter od accuracy will be targeted so that the performance of vsop-2 phase referencing will be greatly improved . in this reportwe have demonstrated the usefulness of aris in investigating the effectiveness of vsop-2 phase referencing .aris will also be convenient to check vlbi observation plans from the viewpoint of image quality . in this report we considered some of the intended cases of vsop-2 phase referencing observations for point sources with the highest spatial resolution ;further investigation can be made for an individual source .this is important for the vsop-2 scientific goals , especially at 22 and 43 ghz , because the phase referencing technique can not always be used at those frequency bands in terms of finding calibrators .aris will give helpful suggestions to comprise the effective observation and operation plans for the best performance in vsop-2 .the authors made use of the gps tec data taken by gsi and provided by kyoto university .the authors made use of the vlba calibrator catalogue of nrao and 2005f_astro catalogue of nasa gfsc .the authors express their hearty thanks to all members of vsop-2 project team , especially , h. hirabayashi of isas who is leading the next space vlbi working group and m. inoue of naoj space vlbi project office .the authors also express their thanks to s - c .wu and y. bar - sever of the jet propulsion laboratory for their investigations of the vsop-2 satellite od .y. asaki gives his thanks to t. ichikawa of isas for useful suggestions about the satellite od , k. noguchi of nara women s university for discussion about the ionospheric tec fluctuations , and l. petrov of nasa gsfc for discussion about the vlbi compact radio source surveys , and d. jauncey for comments about this work .asaki , y. , saito , m. , kawabe , r. , morita , k - i . , & sasao , t. 1996 , radio sci . , 31 , 1615 asaki , y. , shibata , k. m. , kawabe , r. , roh , d - g . , saito , m. , morita , k - i . , & sasao , t. 1998 , radio sci . , 33 , 1297 bartel , n. , herring , t. a. , ratner , m. i. , shapiro , i. i. , & corey , e. 1986 , , 319 , 733 bartel , n. , & bietenholz , m. f. 2000 , in astrophysical phenomena revealed by space vlbi , ed . h. hirabayashi , p. g. edwards , & d. w. murphy ( isas , sagamihara ) , 17 beasley , a. j. , & conway , j. e. 1995 , in very long baseline interferometry and the vlba , ed. j. a. zensus , p. j. diamond , & p. j. napier ( asp conf .82 ) , 327 beasley , a. j. , gordon , d. , peck , a. b. , petrov , l. , macmillan , d. s. , fomalont , e. b. , & ma , c. 2002 , , 141 , 13 bristow , w. a. , & greenwald , r. a. 1997 , , 102 , 11585 brunthaler , a. , reid , m. j. , falcke , h. , greenhill , l. j. 
|
The next-generation space VLBI mission, VSOP-2, is expected to provide unprecedented spatial resolutions at 8.4, 22, and 43 GHz. In this report, phase referencing with VSOP-2 is examined in detail based on a simulation tool called ARIS. The criterion for successful phase referencing is to keep the phase errors below one radian. Simulations with ARIS reveal that phase referencing achieves good performance at 8.4 GHz, even under poor tropospheric conditions. At 22 and 43 GHz, it is recommended that phase referencing observations be conducted under good or typical tropospheric conditions. The satellite is required to have an attitude-switching capability with a one-minute or shorter cycle, and an orbit determination accuracy higher than cm at apogee; the phase referencing calibrators are required to have a signal-to-noise ratio larger than four for a single scan. The probability of finding a suitable phase referencing calibrator was estimated by using VLBI surveys. From the viewpoint of calibrator availability, VSOP-2 phase referencing at 8.4 GHz is promising. However, the chance of finding suitable calibrators at 22 and 43 GHz is significantly reduced; it is important to conduct specific investigations for each target at those frequencies.
|
the voter model describes the evolution toward consensus in a population of agents , each of which can be in one of two possible opinion states . in an update event ,a randomly - selected voter adopts the state of a randomly - selected neighbor . as a result of repeated update events , a finite population necessarily reaches consensus in a time that scales as a power law in ( with a logarithmic correction in two dimensions ) . because of its simplicity and its natural connection to opinion dynamics, the voter model has been extensively investigated ( see , e.g. , ) .the connection with social phenomena has also motivated efforts to extend the voter model to incorporate various aspects of social reality , such as , among others , stubbornness / contrarianism , multiple states , internal dissonance , individual heterogeneity , environmental heterogeneity , vacillation , and non - linear interactions .these studies have uncovered many new phenomena that are still being actively explored .our investigation was initially motivated by recent social experiments of centola , who studied the spread of a specific behavior in a controlled online network where _ reinforcement _ played a crucial role .reinforcement means that an individual adopts a particular state only after receiving multiple prompts to adopt this behavior from socially - connected neighbors .these experiments found that social reinforcement played a decisive role in determining how a new behavior is adopted .previous research that has a connection with this type of reinforcement mechanism include the q - voter model , where multiple same - opinion neighbors initiate change , the naming game , and the ab model .an example that is perhaps most closely connected to reinforcement arises in the noise - reduced voter model , where a voter keeps a running total of inputs towards changing opinions , but actually changes opinions only when this counter reaches a predefined threshold .a similar notion of reinforcement arises in a model of fad and innovation dynamics and in a model of contagion spread .the use of multiple discrete opinions is not the only option for incorporating varying opinion strength .previous models have used a continuous range of opinions quantifying the tendency for an agent to change its opinion . for example , in the bounded confidence model , an agent can possesses an opinion in a continuous range , with the spatial distance between points representing the difference in those opinions . in this paper , we study how reinforcement affects the dynamics of the voter model . in our__confident voter model _ _, we assume that agents possess some modicum of intrinsic confidence in their beliefs and , unlike the classic voter model , need multiple prompts before changing their opinion state .we investigate a simple realization of this confident voting in which each opinion state is further demarcated into two substates of different confidence levels .the basic variables are thus the opinion of each voter and the confidence level with which this opinion is held . for concreteness , we label the two opinion states as plus ( p ) and minus ( m ) .thus the possible states of an agent are and for confident and unsure plus agents , respectively , and correspondingly and for minus agents ( fig . [ model ] ) .the new feature of confident voting is that a confident agent does not change opinion by interacting with an agent of a different opinion .instead such an agent changes from being confident to being unsure of his opinion . 
on the other hand , an unsure agent changes opinion by interacting with any agent of the other opinion , as in the classic voter model .we define two variants of confident voting that accord with common anecdotal experience ( fig . [ model ] ) . in the _ marginal _ version , an unsure agentthat changes opinion still remains unsure .such an agent is often labelled a `` flip - flopper '' , a routinely - invoked moniker by american politicians to characterize political opponents .figuratively , an agent who switches opinion remains ambivalent about the new opinion state and can switch back . in the _ extremal _ version ,an unsure agent becomes confident after an opinion change .such an agent `` sees the light '' and therefore becomes fully committed to the new opinion state .this behavior is typified by paul the apostle , who switched from being dedicated to finding and persecuting early christians to embracing christianity after experiencing a vision of jesus .the basic variables are the densities of the four types of agents .we use to denote both the agent types and their densities . in the mean - field description ,a pair of agents is randomly selected , and the state of one the two agents , chosen equiprobably , changes according to the voter - like dynamics illustrated in fig .[ model ] .we now outline the time evolution for the two variations of the confident voter model . for writing the rate equations ,we first enumerate the possible outcomes when a pair of agents interact : * or ; or ; * or ; or ; * or ; or .that is , the interaction between two unsure agents of opposite opinions ( ) leads to no _ net _ density change , as in the classic voter model .however , when two confident agents of different opinions meet ( ) , one of the agents becomes unsure .the next two lines account for interactions between agents of the same opinion but different confidence levels .we assume that an unsure agent exerts no influence on a confident agent by virtue of the latter being confident , while a confident agent is persuasive and converts an unsure agent to confident .finally , the last line accounts for an unsure agent changing opinion upon interacting with a confident agent of a different opinion .the corresponding rate equations are : with parallel equations for and that are obtained by interchanging in eq .( [ rem ] ) .the rate equation for the total density of plus agents is and from the complementary equation for , it is evident that the total density of agents is conserved , . for the extremal version , we again enumerate the possible outcomes when a pair of agents interact .these are : * or ; or ; * or ; or ; * or ; or .the point of departure , compared to the marginal version , is that a voter is now confident in its new opinion state upon changing opinion .the rate equations corresponding to these steps are : with parallel equations for and .the rate equation for the total density of plus agents is the same as that for the marginal version , so that again the total density of agents is manifestly conserved . for both variants of the confident voter model , the time evolutionis dominated by the presence of a saddle point that corresponds not to consensus , but a balance between plus and minus agents . 
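A quick way to see this behaviour is to integrate rate equations built directly from the interaction rules just described. The sketch below does this for both versions; the state labels, the factors of 1/2 that come from choosing which member of a pair updates, and the overall time normalization are choices made here for illustration and need not coincide with the normalization of eqs. ([rem]) and ([ree]). Starting from a slightly asymmetric, all-confident population, the densities linger near the symmetric saddle point and then escape to consensus on the initial majority.

```python
import numpy as np
from scipy.integrate import solve_ivp

def confident_voter_rates(t, y, extremal):
    """Mean-field rate equations assembled from the interaction rules above.
    y = (P, p, M, m): densities of confident plus, unsure plus, confident minus
    and unsure minus agents (labels chosen here for illustration)."""
    P, p, M, m = y
    dP = dp = dM = dm = 0.0
    # a confident agent meeting any agent of the other opinion becomes unsure
    dP -= 0.5 * P * (M + m); dp += 0.5 * P * (M + m)
    dM -= 0.5 * M * (P + p); dm += 0.5 * M * (P + p)
    # a confident agent converts an unsure agent of the same opinion
    dP += 0.5 * P * p; dp -= 0.5 * P * p
    dM += 0.5 * M * m; dm -= 0.5 * M * m
    # an unsure agent meeting any agent of the other opinion changes opinion,
    # ending up confident (extremal) or unsure (marginal) in its new opinion
    flip_to_minus = 0.5 * p * (M + m)
    flip_to_plus = 0.5 * m * (P + p)
    dp -= flip_to_minus; dm -= flip_to_plus
    if extremal:
        dM += flip_to_minus; dP += flip_to_plus
    else:
        dm += flip_to_minus; dp += flip_to_plus
    return [dP, dp, dM, dm]

eps = 0.01                               # slight initial majority of plus agents
y0 = [0.5 + eps, 0.0, 0.5 - eps, 0.0]    # all agents start out confident
for extremal in (False, True):
    sol = solve_ivp(confident_voter_rates, (0.0, 1000.0), y0,
                    args=(extremal,), rtol=1e-9, atol=1e-12)
    plus = sol.y[0, -1] + sol.y[1, -1]   # total density of plus agents at the end
    print("extremal" if extremal else "marginal", round(plus, 3))
    # -> approaches 1: the slight initial plus majority eventually wins
```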
for nearly - symmetric initial conditions , the densities of the different speciesare initially attracted to this unstable fixed point , but eventually flow to a stable fixed point that corresponds to consensus .however when the initial condition is perfectly symmetric between plus and minus agents , then the population is driven to a mixed state that corresponds to the symmetric saddle point ( fig .[ non - symm ] ) .it is instructive to first study the initial conditions and . the rate equations ( [ rem ] ) for the marginal version of confident voting now reduce to , with solution \,,\\\nonumber p_1(t)&=\frac{1}{2}-p_0(t)\,.\end{aligned}\ ] ] thus in an initially symmetric system , confident voters are slowly eliminated because there is no mechanism for their replenishment , and all that remains asymptotically are equal densities of unsure voters . for the extremal version of confident voting , the rate equations ( [ ree ] ) reduce to with .because the quadratic polynomial on the right - hand side of eq .( [ dotp0 ] ) is positive for and negative for , the fixed point at is stable .thus approaches exponentially in time .we solve for by a partial fraction expansion to give which indeed gives an exponential approach to the final state of .thus all four voting states are represented in the long - time limit .if the initial condition is slightly non - symmetric , then numerical integrations of the rate equations clearly show that the evolution of the densities turns out to be controlled by two distinct time scales a fast time scale that is and a longer time scale that is , where is the population size .to incorporate in the rate equations , we interpret these equations as describing the dynamics of voters that live on a complete graph of sites , so that every agent interacts equiprobably with any other agent . in this framework ,consensus on the complete graph should be viewed as the density of a single species being equal to in the rate equations .similarly , an initial small deviation from the symmetric initial conditions in the rate equations ( i.e. , and , with ) , should be interpreted as the departure from a symmetric state by a single particle on a complete graph of sites . , , and .,title="fig:",scaledwidth=42.5% ] , , and .,title="fig:",scaledwidth=42.5% ] in the marginal model ( fig .[ non - symm](a ) ) , the system begins to approach the point algebraically in time , as discussed above . for a slightly asymmetric initial condition ,the densities remain close to this unstable fixed point for a time that numerical integration shows is of order .ultimately , the system is driven to the fixed point that corresponds to the initial majority opinion .for the extremal model , qualitatively similar behavior occurs , except that in the initial stages of evolution the system is quickly driven towards the fixed point at and .this fixed point is a saddle node , with one stable and two unstable directions ( fig .[ non - symm](b ) ) .thus for nearly - symmetric initial conditions , the densities remain close to this fixed point for a time of the order , after which the densities are suddenly driven to one of the two stable fixed points , either or . , ( dashed arrow ) that terminates in a symmetric fixed point ( circle ) . shown in ( b )are the unstable ( circle ) and stable ( dots ) fixed points . 
for both cases , two representative flows that start from nearly symmetric initial conditionsare shown.,title="fig:",scaledwidth=40.0% ] , ( dashed arrow ) that terminates in a symmetric fixed point ( circle ) .shown in ( b ) are the unstable ( circle ) and stable ( dots ) fixed points .for both cases , two representative flows that start from nearly symmetric initial conditions are shown.,title="fig:",scaledwidth=40.0% ] the full state space is the composition tetrahedron , which consists of the intersection of the set with the normalization constraint plane ( fig .[ tetra - marginal ] ) .each corner corresponds to a pure system that is entirely comprised of the labeled species .for the marginal version , there are only two stable fixed points at and , corresponding to consensus of either confident plus voters or confident minus voters .there is also a fixed line , defined by , where the population consists only of unsure agents .this fixed line is locally unstable except at the point .thus if the system starts along the symmetry line defined by and , the system flows to the final state of .however , near - symmetric initial states execute a sharp u - turn and eventually flow to one of the consensus fixed points or , as illustrated in fig .[ tetra - marginal ] .for the extremal version , qualitatively similar dynamics arises , except that instead of a fixed line , there is an unstable fixed point at and .nearly symmetric initial states first flow to this unstable fixed point and remain in the vicinity of this point for a time scale that is of order , after which the densities quickly flow to the consensus fixed points , either or .we now investigate confident voting dynamics when voters are situated on the sites of a finite - dimensional lattice of linear dimension ( with ) , with periodic boundary conditions . for the classic lattice voter model, it was found that the consensus time asymptotically scales as in one dimension , as for , and as for .the presence of the logarithmic factor for and the lack of dimension dependence for shows that the critical dimension for the classic voter model .the confident voter model has quite different dynamics because the magnetization is not conserved , except in the symmetric limit and , whereas the average magnetization is conserved in the classic voter model . herethe magnetization is defined as the difference in the densities of plus and minus voters of any kind .the absence of this conservation law leads to an effective surface tension between domains of plus and minus voters .consequently confident voting is closer in character to the kinetic ising model with single - spin flip dynamics at low temperatures rather than to the classic voter model . and domain .voters that change their state are shown green .after one more step , a sharp domain wall that is translated by lattice spacing is re - established ., scaledwidth=80.0% ] in the simplest case of one dimension , the agents organize at long times into domains that are in a single state and the evolution is determined by the motion of the interface between two dissimilar domains .thus we consider the evolution of a single interface between two semi - infinite domains for example , one in state and the other in state . 
by enumerating all possible ways that the voters at the interface can evolve ( fig .[ 1dwall ] ) , we find that the domain wall moves one site to the left or to the right equiprobably after four time steps .thus isolated interfaces between domains undergo a random walk , but with the domain wall hopping at one - fourth the rate of a symmetric nearest - neighbor random walk .similarly , we determine the fate of two adjacent diffusing domain walls by studying the evolution of a single voter in state in a sea of voters . by again enumerating the possible ways these two adjoining interfaces evolve , we find that the domain walls annihilate with probability and move apart by one lattice spacing with probability .additionally , we verified that the distribution of survival times for a single confident voter in a sea of opposite - opinion voters scales as , as in the classic voter model .we also studied the analogous single - defect initial condition for unsure voters . in all such cases ,the long - time behavior is essentially the same as in the classic voter model , albeit with an overall slower time scale .finally , we confirmed that the time to reach consensus starting from an arbitrary initial state scales quadratically with .thus the one - dimensional confident voter model at long times exhibits the same evolution as the classic voter model , but with a rescaled time . in our simulations of confident voting in two dimensions ,we typically start a population with exactly one - half of the voters in the confident plus state and one - half in the confident minus state , with their locations randomly distributed .periodic boundary conditions are always employed .for both the marginal and the extremal versions of confident voting , appears to grow algebraically in , with an exponent that is visually close to ( fig .[ t2d ] ) .however , the local two - point slopes in the plot of versus are slowly and non - monotonically varying with so that it is difficult to make a precise estimate of the exponent value . on the square lattice as a function of .for both models , the initial number of confident plus and minus voters are equal and randomly - distributed in space .the number of realizations for the largest system size is .( right ) local two - point exponent for the consensus time for the marginal and extremal models .the error bars indicate the statistical uncertainty.,title="fig:",scaledwidth=42.5% ] on the square lattice as a function of . for both models , the initial number of confident plus and minus voters are equal and randomly - distributed in space .the number of realizations for the largest system size is .( right ) local two - point exponent for the consensus time for the marginal and extremal models .the error bars indicate the statistical uncertainty.,title="fig:",scaledwidth=42.5% ] we argue that this slow approach to asymptotic behavior arises because there are two different routes by which consensus is achieved . 
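The two-dimensional data discussed here can be reproduced qualitatively with a few lines of Monte Carlo. The sketch below encodes the four states as +/-2 (confident) and +/-1 (unsure) and applies the update rules described above to a randomly selected voter and one of its nearest neighbours; the encoding, the update ordering and the system size are choices made here and will differ in detail from the production runs behind fig. [t2d].

```python
import numpy as np

rng = np.random.default_rng(1)
moves = ((0, 1), (0, -1), (1, 0), (-1, 0))

def consensus_time(L, extremal=True, max_sweeps=50000):
    """Monte Carlo for the confident voter model on an L x L periodic lattice.
    Encoding (chosen here): +2/-2 confident plus/minus, +1/-1 unsure plus/minus."""
    s = rng.choice(np.array([-2, 2]), size=(L, L))   # random, on average symmetric
    N = L * L
    for sweep in range(1, max_sweeps + 1):
        for _ in range(N):                           # one sweep = N update events
            i, j = rng.integers(L), rng.integers(L)
            di, dj = moves[rng.integers(4)]
            v, n = s[i, j], s[(i + di) % L, (j + dj) % L]
            if v * n > 0:                            # neighbour holds the same opinion
                if abs(v) == 1 and abs(n) == 2:
                    s[i, j] = 2 * np.sign(v)         # reinforcement: unsure -> confident
            else:                                    # neighbour holds the other opinion
                if abs(v) == 2:
                    s[i, j] = np.sign(v)             # confident -> unsure, same opinion
                else:                                # unsure agent changes opinion
                    s[i, j] = (2 if extremal else 1) * np.sign(n)
        if np.all(s > 0) or np.all(s < 0):           # opinion consensus reached
            return sweep
    return None

print(consensus_time(L=16, extremal=True))
```

Averaging the returned times over many realizations at several values of L gives an estimate of the consensus-time scaling discussed in the text.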
for random initial conditions ,most realizations reach consensus by domain coarsening , a process that ends with the formation of a large single - opinion droplet that engulfs the system .however , for a substantial fraction of realizations ( roughly 38% for the extremal model and 42% for the marginal model ) , voters first segregate into alternating stripe - like enclaves of plus and minus voters ( fig .[ statepics ] ) .this feature is akin to what occurs in the two - dimensional ising model with zero - temperature glauber dynamics , where roughly one - third of all realizations fall into a stripe state ( which happens to be infinitely long lived at zero temperature ) .a similar condensation into stripe states also occurs in the majority vote model , the model , the naming game , and now the confident voter model .it is striking that this symmetry breaking occurs in a wide range of non - equilibrium systems for which the underlying dynamics is symmetric in and .it is an open challenge to understand why this symmetry breaking occurs .square lattice that reach either a stripe state ( left ) or an island state ( right ) .black and white pixels correspond to unsure plus and minus agents ; these form a sharp interface between domains of confident agents . , title="fig:",scaledwidth=25.0% ] square lattice that reach either a stripe state ( left ) or an island state ( right ) .black and white pixels correspond to unsure plus and minus agents ; these form a sharp interface between domains of confident agents ., title="fig:",scaledwidth=25.0% ] the existence of these two distinct modes of evolution is reflected in the probability distribution of consensus times ( fig .[ distt2d ] ) . starting from the random but symmetrical initial condition ,the distribution first has a sharp peak at a characteristic time that scales linearly with , and then a distinct exponential tail whose characteristic decay time scales as .the shorter time scale corresponds to the subset of realizations that reach consensus by conventional coarsening . for these realizations, the length scale of the coarsening grows as . when this coarsening scale reaches , consensus is achieved .the consensus time is thus given by ; since , we have . the longer time scale stems from the subset of realizations that fall into a stripe state before consensus is eventually reached .system on a double logarithmic scale .the initial condition is the same as in fig .[ t2d ] and the data are based on 750,000 realizations . , scaledwidth=50.0% ] to help understand the quantitative nature of the approach to consensus via the two different routes of coarsening and stripe states , we studied the confident voter model with the initial conditions of : ( i ) a large circular single - opinion island and ( ii ) a stripe state ( fig . [ t2d - combined ] ) . for the former ,the initial condition is a circular region of radius that contains agents in state , surrounded by agents in state . for the latter , agents in state occupy the top half of the system , while the bottom half is occupied of agents in state .for these two initial conditions , the consensus time grows as and as , respectively ( fig .[ t2d - combined ] ) . 
in the latter case ,the approach to asymptotic behavior is both non - monotonic and extremely slow ( fig .[ t2d - combined ] ) ; we do not understand the mechanism responsible for these anomalies .these limiting behaviors account for the two time scales that arise in the distribution of consensus times for a system with a random , symmetric initial condition . for an initial stripe state .the error bars indicate the statistical uncertainty.,title="fig:",scaledwidth=42.5% ] for an initial stripe state .the error bars indicate the statistical uncertainty.,title="fig:",scaledwidth=42.5% ] although the confident voter model has an appreciable probability of falling into a stripe state , such a state is not stable because the interface between the domains can diffuse .when the two interfaces of a stripe diffuse by a distance that is of the order of their separation , one stripe is cut in two and resulting droplet geometry quickly evolves to consensus .we estimate the time for two such interfaces to meet by following essentially the same argument as that developed for the majority vote model . for a flat interface ,every site on the interface can change its opinion .such an opinion change moves the local position of the interface by .for a smooth interface of length , there will therefore be of the order of opinion change events of plus to minus and vice versa .thus the net change in the number of agents of a given opinion is of the order of .consequently , the average position of the interface moves by .correspondingly the diffusion coefficient of the interface scales as .the time for two such interfaces that are separated by a distance of the order of to meet therefore scales as . in a -dimensional system ,the analog of two - stripe state is a two - slab state with a -dimensional interface separating the slabs .now the same argument as that give above leads to as the time scale for two initially flat interfaces to meet .according to this approach , the consensus time scales linearly with in the limit of , a limit that one normally associates with the mean - field limit .however , the rate equation approach gives a consensus time that grows as .we do not know how to resolve this dichotomy .we introduced the notion of individual confidence in the context of the voter model .our model is based on recent social experiments that point to the importance of multiple reinforcing inputs as an important influence for adopting a new opinion or behavior .we studied two variants of confident voting in which an agent who has just switched opinion will be either have confidence in the new opinion the extremal model or be unsure of the new opinion the marginal model . in the mean - field limit, a nearly symmetric system quickly evolves to an intermediate metastable state before finally reaching a consensus in one of the confident opinion states .this intermediate state is reached in a time of the order of one , while the time to reach consensus scales as . on a two - dimensional lattice ,a substantial fraction of all realizations of a random initial condition reach a long - lived stripe state before ultimate consensus is reached .this phenomenon appears ubiquitously in related opinion and spin - dynamics models ) and an understanding of what underlies this dynamical symmetry - breaking is still lacking . 
an important consequence of the stripe states is that there are two independent times that describe the approach to consensus .the shorter time , which scales linearly with , corresponds to realizations that reach consensus by domain coarsening .the longer time corresponds to realizations that get stuck in a metastable stripe state before ultimately reaching consensus .an unexpected feature of confident voting is that the behavior in two dimensions , where the consensus time varies as a power law in , is drastically different than that of the mean - field limit , where varies logarithmically with . in contrast , in the classic voter model , in two dimensions , whereas the mean - field behavior is .this dichotomy suggests that confident voting on the complete graph does not correspond to the limiting behavior of confident voting on a high - dimensional hypercubic lattice .moreover the argument that on a -dimensional hypercubic lattice scales as suggests that the upper critical dimension for confident voting is infinite .
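Restating the interface estimate referred to above in formulas (with the linear lattice size L, spatial dimension d and population N = L^d written explicitly; this is a sketch of the argument, not a new result):

\begin{align}
  D_{\rm interface} &\sim L^{d-1}\,\bigl(L^{-(d-1)}\bigr)^{2} = L^{-(d-1)},\\
  T_{\rm stripe} &\sim \frac{L^{2}}{D_{\rm interface}} \sim L^{d+1} = N^{(d+1)/d},
\end{align}

so that the stripe-state time scale is of order N^{3/2} in two dimensions and approaches a linear scaling in N as d goes to infinity, which is the limit contrasted with the mean-field result above.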
|
We introduce the confident voter model, in which each voter can be in one of two opinion states and can additionally have two levels of commitment to an opinion: confident and unsure. Upon interacting with an agent of a different opinion, a confident voter becomes less committed, or unsure, but does not change opinion. However, an unsure agent changes opinion by interacting with an agent of a different opinion. In the mean-field limit, a population of size is quickly driven to a mixed state and remains close to this state before consensus is eventually achieved in a time of the order of . In two dimensions, the distribution of consensus times is characterized by two distinct times: one that scales linearly with and another that appears to scale as . The longer time arises from configurations that fall into long-lived states that consist of two (or more) single-opinion stripes before consensus is reached. These stripe states arise from an effective surface tension between domains of different opinions.
|
driven by the need of gravitational wave data analysis for waveform templates , numerical relativity has focused in recent years on the modelling of astrophysical sources of gravitational waves such as the inspiral and coalescence of compact objects .such systems do not possess any symmetries and thus require a fully dimensional numerical code .the advantage of assuming a spacetime symmetry , on the other hand , is that it allows for a dimensional reduction of the einstein equations , which reduces the computational effort considerably so that greater numerical accuracy can be obtained . while spherical or planar symmetry yields the greatest reduction in computational cost , the intermediate case of axisymmetry is more interesting in that it permits the study of gravitational waves . in this articlewe focus on vacuum axisymmetric spacetimes and assume that the killing vector is hypersurface orthogonal so that there is only one gravitational degree of freedom .the axisymmetric einstein equations can be simplified considerably by choosing suitable gauge conditions . herewe consider a combination of maximal slicing and quasi - isotropic gauge .this gauge reduces the number of dependent variables to such an extent that only one pair of evolution equations corresponding to the one gravitational degree of freedom needs to be kept .all the other variables can be solved for using the constraint equations and gauge conditions .this _ fully constrained _ approach was taken in . _partially constrained _ schemes ( e.g. , ) substitute some of the constraint equations with evolution equations ; this is possible because the einstein equations are overdetermined .such ( fully or partially ) constrained schemes have proven very robust in simulations of strong gravity phenomena .examples include the collapse of vacuum axisymmetric gravitational waves , so - called brill waves , in .critical phenomena in this system were found in .critical phenomena in the collapse of massless scalar fields and complex scalar fields with angular momentum were also studied , as was the collapse of collisionless matter . nevertheless , constrained evolution schemes have been plagued with problems .the authors of reported that their multigrid elliptic solver failed occasionally for the hamiltonian constraint equation in the strong - field regime .this problem can be circumvented by using instead the evolution equation for the conformal factor .however , it was found that this was `` not sufficient to ensure convergence in certain brill - wave dominated spacetimes '' .similar difficulties were encountered in .the purpose of this article is to determine the cause of these problems and to develop an improved constrained evolution scheme .the suspect elliptic equations belong to a class of ( nonlinear ) helmholtz - like equations , which are discussed quite generally in section [ s : ell ] .we point out that if these helmholtz equations are _ indefinite _ ( loosely speaking , they have the `` wrong sign '' ) then their solutions , should they exist , are potentially nonunique .the same criterion is found to be related to the convergence of numerical solvers based on classical relaxation methods . in section [s : formulation ] , we review the partially constrained scheme of and the fully constrained scheme of . 
we show that some of the elliptic equations in these formulations are indefinite .this leads us to the construction of a modified fully constrained scheme that does not suffer from this problem .the arguments involved turn out to be closely related to questions of ( non)uniqueness in conformal approaches to the initial data problem in standard numerical relativity .a numerical implementation of the new fully constrained scheme is described in section [ s : nummethod ] . in section [ s :numresults ] , we apply it to a study of brill wave gravitational collapse . after performing a convergence test and comparing results for a strong wave with `` spherical '' initial data , we focus on a highly prolate configuration one of the initial data sets examined in . by considering sufficiently prolate configurations ,the authors were able to construct initial data without an apparent horizon but apparently with arbitrarily large curvature .they conjectured that such initial data would evolve to form a naked singularity rather than a black hole .this would constitute a violation of weak cosmic censorship .a numerical evolution of one of these prolate initial data sets was carried out in . due to a lack of resolution on their compactified spatial grid, the authors could not evolve the wave for a sufficiently long time .the trends in certain quantities suggested however that an apparent horizon would eventually form .using our new constrained evolution scheme , we are able to evolve the same initial data for much longer and we confirm that an apparent horizon does form . we conclude and discuss some open questions in section [ s : concl ] .let us first consider the _ helmholtz equation _ where , is the flat - space laplacian , and and are smooth functions .we impose the boundary condition at spatial infinity .more generally , a boundary condition can always be transformed to this case by considering the function .it follows from standard elliptic theory ( see e.g. ) that has a unique solution if everywhere . if then multiple solutions may exist or there may not be any solution at all . for the elliptic equation is said to be _next we consider the quasilinear equation where is a smooth ( not necessarily linear ) function . proving existence and uniqueness of solutions to this equation is nontrivial .however a necessary condition for uniqueness can easily be obtained .suppose is a given solution and consider a small perturbation of it , .approximating where , we find that for to be a solution of , must satisfy this is of the form with and . if then there is only the trivial solution and we call the original problem _ linearization stable_.if on the other hand then multiple solutions of the linearized problem and hence also of the nonlinear problem may exist .as an example relevant to the formulations of the einstein equations discussed in this article , we take with and a smooth function . then is linearization stable provided that .we say that in this case the equation has the `` right sign '' . 
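As a concrete illustration of the role of this sign, consider the one-dimensional model problem u'' + c u = g on (0,1) with homogeneous Dirichlet data, treated with Gauss-Seidel sweeps of the kind discussed below. The sketch uses an arbitrary source and illustrative values of c: for c <= 0 the discrete operator is definite and the sweeps converge, while for c larger than the lowest Dirichlet eigenvalue pi^2 of -d^2/dx^2 the problem becomes indefinite and the sweeps diverge.

```python
import numpy as np

def gauss_seidel_residual(c, n=20, sweeps=300):
    """Gauss-Seidel sweeps for u'' + c*u = g on (0,1), u(0) = u(1) = 0."""
    h = 1.0 / n
    g = np.ones(n + 1)              # arbitrary smooth source
    u = np.zeros(n + 1)             # initial guess; end points hold the boundary data
    for _ in range(sweeps):
        for i in range(1, n):       # lexicographic sweep over interior points
            u[i] = (u[i - 1] + u[i + 1] - h * h * g[i]) / (2.0 - c * h * h)
    res = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 + c * u[1:-1] - g[1:-1]
    return np.max(np.abs(res))

print(gauss_seidel_residual(c=-5.0))   # c <= 0 ("right sign"): residual decays steadily
print(gauss_seidel_residual(c=+50.0))  # c sufficiently positive: the sweeps diverge
```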
because of the above considerations on the uniqueness of solutions , it is clearly desirable to have an equation with the `` right sign '' if a numerical solution is attempted .there is however also a more practical reason .consider again the linear helmholtz equation in , say , dimensions .suppose we cover the domain with a uniform cartesian grid with spacing , denoting the value of at the grid point with indices by .a discretization of using second - order accurate centred finite differences yields we formally write this system of linear equations as large systems are commonly solved using relaxation methods , which obtain a series of successively improved numerical approximations .for example , a step of the _ gauss - seidel _ method consists in sweeping through the grid ( typically in lexicographical or in red - black order ) , at each grid point solving the equation for and replacing its value , the relaxation converges provided the matrix in is _ strictly diagonally dominant _, i.e. , in each row of the matrix the absolute value of the diagonal term is greater than the sum of the absolute values of the off - diagonal terms ( see e.g. ) . in our example , the diagonal term is and the off - diagonal terms add up to , so that the condition for convergence is .( the other possibilty , large and positive , is not feasible because in the continuum limit . ) in practice , if is positive but sufficiently small then the relaxation will still converge but as is increased convergence will begin to stall and ultimately the relaxation will diverge .similar convergence criteria hold for other relaxation schemes such as the jacobi or sor methods .in particular , the multigrid method is based on these relaxation schemes and will not converge if the underlying relaxation does not .these warnings do not apply to certain versions of the conjugate gradient method or other krylov subspace iterations which ideally only require the matrix to be invertible . for a combination of such methods with multigrid see e.g. .we focus on axisymmetric vacuum spacetimes .axisymmetry means that there is an everywhere spacelike killing vector field with closed orbits .here we restrict ourselves to the case where the killing vector is hypersurface orthogonal .we choose cylindrical polar coordinates such that . 
in the following , indices range over , indices over , and indices over .the line element is written in the form + r^2 \rmd \phi^2 \}.\ ] ] here and are the usual adm lapse function and shift vector .we have imposed as a gauge condition that the 2-metric on the hypersurfaces be conformally flat in our coordinates ( quasi - isotropic gauge ) : the spatial metric obeys and .this condition must be preserved by the evolution equation for the spatial metric , where is the extrinsic curvature of the surfaces and denotes the lie derivative .we deduce that where .maximal slicing is imposed , so that the extrinsic curvature has three degrees of freedom , which are taken to be , , and ( this particular combination is motivated by regularity on the axis of symmetry ) .the evolution equation for the extrinsic curvature is given by where is the covariant derivative compatible with the spatial metric and is its ricci tensor .preservation of the maximal slicing condition implies ( using the hamiltonian constraint for the second equality ) or = 0.\end{aligned}\ ] ] here and in the following we use the notation there are many different ways of constructing an evolution scheme for the axisymmetric einstein equations in the above gauge , depending on the number of constraint equations being solved .we review two schemes that have been used for numerical simulations and show that some of their elliptic equations are indefinite as discussed in the section [ s : ell ] .finally we propose a new scheme that does not suffer from this problem .garfinkle and duncan choose to solve only the hamiltonian constraint equation ( note that ) , which takes the form \nonumber\\ + { { \textstyle \frac{1}{4}}}\psi^5 \rme^{2rs } \left [ { { \textstyle \frac{1}{3}}}(u + { { \textstyle \frac{1}{2}}}r w)^2 + { { \textstyle \frac{1}{4}}}(r w)^2 + ( { { k^z{}_r}})^2 \right ] = 0.\end{aligned}\ ] ] this equation is of the type , with and ( note the second square bracket in is non - negative ) .hence it has the `` wrong sign '' and suffers from potential nonuniqueness of solutions as well as difficulties in solving it numerically using relaxation methods ( section [ s : ell ] ) .the latter is not a concern in though because the authors use a conjugate gradient method .the momentum constraints are not solved but only monitored during the evolution . written outexplicitly they are the extrinsic curvature variables , and are all evolved using their time evolution equations . the slicing condition is solved in the form , and this is a helmholtz equation with the `` right sign '' ( in ; note the square bracket in is non - negative ) . in order to solve for the shift vector , additional derivatives are taken of equations , which combine to two decoupled second - order equations , these are poisson equations ( in ) and do not cause any problems .a similar formulation was developed by choptuik .their definition of the variables and differs slightly from our and , where the subscript ch refers to .this difference does not have any consequences on the properties of the elliptic equations that we are concerned with here and so for the sake of consistency we continue to use our convention ( which agrees with the one in ) . 
as a resultthe equations displayed below differ from those in in a minor way .in the same way as garfinkle and duncan , choptuik also solve the hamiltonian constraint , which is again indefinite .unlike garfinkle and duncan , however , they also solve the momentum constraints .this is done by replacing and with first derivatives of the shift using the gauge conditions .the momentum constraints now read the principal part of these two coupled equations is elliptic and so far there is no need for concern .a problem arises however when equations are substituted in the slicing condition , \nonumber\\ + { \textstyle \frac{2}{3 } } \psi^4 \rme^{2rs } \beta_-r w - { { \textstyle \frac{1}{2}}}\psi^4 \rme^{2rs } \alpha ( rw)^2 = 0.\end{aligned}\ ] ] the term containing the square bracket has the `` wrong sign '' , and in , .we observed that in both of the above schemes , the hamiltonian constraint was indefinite , and in the second one , the slicing condition was , too .we now present a scheme in which both equations and in fact all the elliptic equations that are being solved are definite .the hamiltonian constraint can be cured by rescaling the extrinsic curvature variables with a suitable power of the conformal factor , in terms of the new variables , the exponent of multiplying the second square bracket in is so that the equation becomes definite for .there is a preferred choice : for the terms containing derivatives of in the momentum constraints all cancel under the substitution .the same rescaling of the extrinsic curvature was applied by abrahams and evans .their scheme is however not fully constrained the extrinsic curvature variables are evolved as in .the indefiniteness of the slicing condition was caused by the substitution , more precisely by its dependence .the original motivation for this substitution was the desire to be able to solve the momentum constraints .however we can still do this as before if we introduce a new vector and set the momentum constraints are then solved for .the price we have to pay is that we still need to solve the spatial gauge conditions , where now and are expressed in terms of .that is , we have to solve two more elliptic equations than choptuik .let us now write out all the elliptic equations explicitly .the momentum constraints are the hamiltonian constraint is \nonumber\\ + { \textstyle \frac{1}{48 } } \psi^{-7 } \rme^{2rs } [ ( 2 \eta_- - r { \tilde w})^2 + 3 ( r { \tilde w})^2 + 3 \eta_+^2 ] = 0 .\end{aligned}\ ] ] the slicing condition is = 0 .\end{aligned}\ ] ] the spatial gauge conditions are = 0 , \nonumber\\ \fl \beta^z{}_{,rr } + \beta^z{}_{,zz } - \alpha \psi^{-6 } [ \eta^z{}_{,rr } + \eta^z{}_{,zz } + ( 6 p_z - a_z ) \eta_- - ( 6 p_r - a_r ) \eta_+ ] = 0 .\end{aligned}\ ] ] we note that form a hierarchy : the equations are successively solved for , , , and . after substituting the solutions of the previous equations , each equation in the hierarchycan be regarded as a decoupled scalar elliptic equation , or elliptic system in the case of . the terms in the second lines of and now have the `` right signs '' .an exception common to all the schemes discussed in this section is the term multiplying in the first line of in general one expects to oscillate so that can have either sign .this is the usual difficulty one faces in conformal formulations of the initial value equations , see section [ s : cts ] .the variable and its `` time derivative '' are evolved .this pair of evolution equations corresponds to the one dynamical degree of freedom . 
note that if we had not restricted the killing vector to be hypersurface orthogonal then there would be a second dynamical degree of freedom .in linearized theory these two degrees of freedom can be understood as the two polarization states of a gravitational wave .there are additional evolution equations for , and that are not actively enforced but that can be used in order to test the accuracy of a numerical implementation .all the evolution equations are given in [ s : evolutionequations ] .here we remark that assuming a solution to the elliptic equations is given , the principal part of the evolution equations is that of a wave equation , ^ 2 s \simeq \psi^{-4 } \rme^{-2rs } ( s_{,rr } + s_{,zz}),\ ] ] where denotes equality to principal parts .this equation is clearly hyperbolic , a necessary criterion for the well posedness of the cauchy problem .see also for a recent analysis of the hyperbolic part of a fully dimensional constrained evolution scheme based on the dirac gauge .our discussion of different constrained evolution schemes for the axisymmetric einstein equations is closely related to conformal approaches to solving the initial value equations in standard dimensional spacetime .here one seeks to find a spatial metric and extrinsic curvature satisfying the constraint equations on the initial slice , and often also a lapse and shift satisfying suitable gauge conditions .this is done by setting where is the conformal factor , and the conformal metric is assumed to be given . for simplicity and for analogy with the axisymmetric formulations discussed above , we impose the gauge condition .( in the axisymmetric case , we only controlled the components , . )we also assume maximal slicing throughout , and we work in vacuum . it is well known in the conformal approach that the extrinsic curvature can not be freely specified ; instead it has to be conformally rescaled , this corresponds to the proposed rescaling of the extrinsic curvature variables in the new axisymmetric scheme .the hamiltonian constraint now takes the form of the lichnerowicz equation , where is the covariant laplacian and the ricci scalar of the conformal metric .note again that the last term in has the `` right sign '' for linearization stability , cf . .as pointed out in the previous subsection , the linear term can have either sign .however , does not _ necessarily _ imply that multiple solutions exist .there is a well - developed theory for existence and uniqueness of solutions to , see . in order to solve the momentum constraints , yorks original _ conformal transverse traceless _( ctt ) method introduces a vector and sets where is the conformal killing operator defined as the momentum constraints now read in analogy with . in the ctt approach ,any gauge conditions are solved _after _ a solution of the constraint equations has been found .for example , maximal slicing implies the following elliptic equation for the conformal lapse , where is substituted .this equation has the `` right sign '' , as it has in the new axisymmetric formulation and in the one by garfinkle and duncan .in contrast , the _ extended conformal thin sandwich _( xcts ) method directly expresses the extrinsic curvature in terms of the shift , instead of . as a result , the slicing condition acquires the wrong sign .this is precisely what happens in the scheme by choptuik , cf . 
and .remarkably , a numerical study of the xcts equations showed that this system does admit nonunique solutions .two solutions were found for small perturbations of minkowski space , one of them even containing a black hole .the two branches meet for a certain critical amplitude of the perturbation .this parabolic branching was explained using lyapunov - schmidt theory in . because of the similarity with the xcts equations , it is conceivable that the constrained axisymmetric formulation of choptuik might show a similar branching behaviour .this is clearly undesirable for numerical evolutions because the elliptic solver might jump from one solution branch to the other during the course of an evolution .however even before this can happen the multigrid method used in will fail to converge , as explained in section [ s : ell ] .in this section we describe a numerical implementation of the new fully constrained scheme presented in section [ s : orscheme ] .the equations are discretized using second - order accurate finite differences in space .a collection of the finite difference operators we use can be found in [ s : discretization ] . similarly to , and unlike , we use a _ cell - centred _ grid to cover the spatial domain \times [ 0 , z_\mathrm{max}]$ ] : grid points are placed at coordinates , , where is the grid spacing and is the number of grid points in the direction .( corresponding relations hold in the direction . )note that no grid points are placed at the boundaries ._ ghost points _ are placed at and at .the values at these ghost points are set according to the boundary conditions , as described in the following .here we only refer to the `` physical '' grid boundaries at , , and . in the adaptive mesh refinement approachdiscussed further below , additional finer grids are added that do not cover the entire spatial domain .these finer grids are also surrounded by ghost points . on grid boundaries that do not coincide with a `` physical '' boundary, ghost point values are interpolated from the coarser grid .the boundary conditions at follow from regularity on axis ( see for a rigorous discussion ) : either a dirichlet or a neumann condition is imposed depending on whether the variable is an odd or even function of .all the equations being solved ( both the elliptic equations and the evolution equations ) are regular on the axis provided that these boundary conditions are satisfied .in addition , we impose reflection symmetry about so that the variables are either odd or even functions of , and this implies dirichlet or neumann conditions at .the and parities of all the variables are listed in [ s : discretization ] . at the outer boundaries and , we impose dirichlet conditions on the gauge variables , and . for the variables , we impose where and are spherical polar coordinates and is the value of at spacelike infinity , i.e. , and .this boundary condition obviously holds up to terms of for any asymptotically flat solution of the constraint equations . 
for the `` dynamical '' fields , we follow and impose a sommerfeld condition at the outer boundary , = 0 .\ ] ]this condition is only exact for the scalar wave equation in spherical symmetry .it is however expected to be a reasonable first approximation because and obey a wave equation to principal parts and the elliptic variables will be close to their flat - space values near the outer boundary ( , ) .see [ s : discretization ] for details on the discretization at the outer boundary .the evolution equations are integrated forward in time using the crank - nicholson method ; this method is second - order accurate in time .the resulting implicit equations are solved by an outer gauss - seidel relaxation ( in red - black ordering ) , and an inner newton - raphson method in order to solve for the vector of unknowns at each grid point .a typical value of the cfl number we use is .fourth - order kreiss - oliger dissipation is added to the right - hand sides of the evolution equations , with a typical parameter value of .the elliptic equations are solved using a multigrid method . the full approximation storage ( fas )variant of the method enables us to solve nonlinear equations directly , i.e. we do _ not _ apply an outer newton - raphson iteration in order to obtain a sequence of linear problems . in the relaxation step of the multigrid algorithm , a nonlinear gauss - seidel relaxation ( in red - black ordering ) is directly applied to the full nonlinear equations . at each grid point, we solve simultaneously for the unknowns , , , and .only the hamiltonian constraint requires the solution of a ( scalar ) nonlinear equation , and this is done using newton s method ; a single iteration is found to be sufficient .all interior grid points are relaxed and afterwards the values at the ghost points are filled according to the boundary conditions . in order to transfer the numerical solution between the grids, we use biquadratic interpolation for prolongation and linear averaging for restriction . for the prolate wave evolved in section [ s : prolatewave ] ,the elliptic equations become highly anisotropic and the standard pointwise gauss - seidel relaxation employed in the multigrid method no longer converges . a common cure to this problemis line relaxation .we solve for the unknowns at all grid points in a line simultaneously .one newton - raphson step is applied to treat the nonlinearity and the resulting tridiagonal linear system is solved using the thomas algorithm .note that this method has the same computational complexity as the pointwise gauss - seidel relaxation . the wide range of length scales in the solutions we are interested in necessitates a position - dependent grid resolution .the classic adaptive mesh refinement ( amr ) algorithm by berger and oliger was designed for hyperbolic equations . including elliptic equations in this approachis rather complicated . a solution with numerical relativity applications in mindwas suggested by pretorius and choptuik , and we shall use their algorithm here , with minor modifications due to the fact that our grids are cell centred rather than vertex centred .the key idea of the algorithm is that solution of the elliptic equations on coarse grids is deferred until all finer grids have reached the same time ; meanwhile the elliptic unknowns are linearly extrapolated in time and only the evolution equations are solved .we have found that this approach works well as long as no grid boundaries are placed in the highly nonlinear region . 
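Returning briefly to the line relaxation mentioned above, the tridiagonal solve at its core is the standard Thomas algorithm; a self-contained sketch (not taken from the code described here) is:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c
    and right-hand side d (a[0] and c[-1] are unused), as used in line relaxation."""
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve
n = 6
a = np.full(n, 1.0); b = np.full(n, -4.0); c = np.full(n, 1.0); d = np.arange(n, dtype=float)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(A, d)))  # True
```

Its cost is linear in the number of unknowns, which is why line relaxation has the same computational complexity as the pointwise sweep.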
in particular, adaptive generation of finer grids in the course of the evolution causes small but noticeable reflections that from our experience make the study of problems such as brill wave critical collapse unfeasible . for this reason, the evolutions presented in this article use _ fixed _ mesh refinement ( fmr ) , i.e. the grid hierarchy is defined at the beginning of the simulation and remains unchanged as time evolves .finally we briefly discuss how an apparent horizon is found in a slice .the horizon is parametrized as a curve in spherical polar coordinates . requiring the expansion of the outgoing null rays emanating from the horizon to vanish yields a second order ordinary differential equation , which is solved using the shooting method .the boundary conditions are , i.e. , the horizon has no cusps on the axes .we follow an idea of garfinkle and duncan in order to monitor the approach to apparent horizon formation .for each point on the axis , we find the angle at which the curve starting from that point meets the axis , we find the maximum of this angle over all such curves .obviously for an apparent horizon , and the deviation from this value indicates how close we are to the formation of an apparent horizon .as an application of our numerical implementation , we consider vacuum axisymmetric gravitational waves , so - called brill waves . the initial slice is taken to be time - symmetric so that initially .we consider the same initial data for the function as in and , where , and are constants . the initial lapse and shiftare taken to be and .the momentum constraints ( [ e : ormomentumconstraints ] ) are trivially satisfied initially and only the hamiltonian constraint ( [ e : orhamiltonianconstraint ] ) needs to be solved . in order to check convergence of the code ,we first consider a wave with parameters .this will disperse rather than collapse to a black hole but is still well in the nonlinear regime .the adm mass is .we take the domain size to be .the fmr hierarchy contains three grids ( figure [ f : fmrhierarchy ] ) .all the grids contain the origin , are successively refined by a factor of , and all have the same number of grid points .fmr grid hierarchies used for the brill wave evolutions presented in this paper .top left : ( section [ s : convtest ] ) ; top right : , ( section [ s : spherwave ] ) , bottom : , , ( section [ s : prolatewave ] ) ., scaledwidth=85.0% ] fmr grid hierarchies used for the brill wave evolutions presented in this paper .top left : ( section [ s : convtest ] ) ; top right : , ( section [ s : spherwave ] ) , bottom : , , ( section [ s : prolatewave ] ) ., title="fig:",scaledwidth=85.0% ] + fmr grid hierarchies used for the brill wave evolutions presented in this paper .top left : ( section [ s : convtest ] ) ; top right : , ( section [ s : spherwave ] ) , bottom : , , ( section [ s : prolatewave ] ) ., title="fig:",scaledwidth=85.0% ] we run the simulation with three different resolutions , .this enables us to carry out a three - grid convergence test : for each variable we define a convergence factor with the indices referring to the grid spacing .the norms are discrete norms taken over the subsets of all grids in the fmr hierarchy that do not overlap with finer grids . for a second - order accurate numerical methodwe expect .figure [ f:1.0conv ] confirms that the code is approximately second - order convergent .( occasional values are not uncommon in similar numerical schemes . 
)three - grid convergence factors for a brill wave with computed from the three resolutions ., title="fig:",scaledwidth=95.0% ] + three - grid convergence factors for a brill wave with computed from the three resolutions ., title="fig:",scaledwidth=95.0% ] three - grid convergence factors for a brill wave with computed from the three resolutions . ,title="fig:",scaledwidth=95.0% ] + three - grid convergence factors for a brill wave with computed from the three resolutions ., title="fig:",scaledwidth=95.0% ] as noted earlier , there are additional evolution equations for the variables , and that are not actively evolved in our constrained evolution scheme .we use these to check the accuracy of the numerical implementation in the following way .we keep a set of auxiliary variables , and which are copied from their unhatted counterparts initially but evolved using the evolution equations . during the evolution , we form the differences between the two sets .doing so for two different resolutions ( grid spacings and ) allows us to define another convergence factor for each ( referred to as _ residual convergence _ in the following ) , the results in figure [ f:1.0resconv ] are again compatible with second - order convergence .we note that the residual convergence test just presented is more severe than the three - grid convergence test in the following sense . for the residuals of the unsolved evolution equations to converge as desired , not only must the numerical solution be second - order convergent but the constraint and evolution equations _ and their boundary conditions _ must be compatible .no exact boundary conditions are known at a finite distance from the source and compatibility of the boundary conditions we use is only achieved at infinity .we deliberately chose the domain size in this convergence test to be sufficiently large ( ) so that the effect of the boundary on the convergence factors is small .as another consistency test , we compute a numerical approximation to the adm mass .this is evaluated as a surface integral on a sphere close to the outer boundary , at spherical radius , where is the unit normal in the spherical radial direction , and these expressions are valid in linearized theory .we evaluate in for two radii and extrapolate to infinity assuming that . the result in figure [ f:1.0resconv ]shows how numerical conservation of the adm mass improves with increasing resolution .residual convergence factors for a brill wave with computed from two pairs of resolutions , ( dashed ) and ( solid ) .the bottom right panel shows the numerically computed adm mass for the three different resolutions , ( dotted ) , ( dashed ) and ( solid ) . , title="fig:",scaledwidth=95.0% ] + residual convergence factors for a brill wave with computed from two pairs of resolutions , ( dashed ) and ( solid ) .the bottom right panel shows the numerically computed adm mass for the three different resolutions , ( dotted ) , ( dashed ) and ( solid ) . ,title="fig:",scaledwidth=95.0% ] residual convergence factors for a brill wave with computed from two pairs of resolutions , ( dashed ) and ( solid ) .the bottom right panel shows the numerically computed adm mass for the three different resolutions , ( dotted ) , ( dashed ) and ( solid ) . 
, title="fig:",scaledwidth=95.0% ] + residual convergence factors for a brill wave with computed from two pairs of resolutions , ( dashed ) and ( solid ) .the bottom right panel shows the numerically computed adm mass for the three different resolutions , ( dotted ) , ( dashed ) and ( solid ) ., title="fig:",scaledwidth=95.0% ] next we consider a wave with and .we refer to this as `` spherical '' because , although of course the actual wave is not spherically symmetric . unlike the one in section [ s : convtest ] , this wave is super - critical and will collapse to form a black hole .the adm mass is .we run the simulation for two different domain sizes , the fmr hierarchy is of a similar type as in section [ s : convtest ] . on the smaller domainthere are three grids and for the larger domain we add on another coarse grid ( figure [ f : fmrhierarchy ] ) .we run the simulation with two different resolutions , . in ,the same initial data were evolved on a non - uniform grid with spacing close to the origin .this is comparable to our lower resolution grid hierarchy , which has grid spacing on the finest grid .figure [ f:8.5resconv ] shows the residual convergence factors defined in .the general trend is that the convergence factors are close to at late times but somewhat smaller at early times . moving the outer boundary further out improves convergence considerably at early times , as can be seen particularly for the variables .this demonstrates the effect of the outer boundary where imperfect boundary conditions are imposed at a finite distance . because of the elliptic equations involved in our evolution scheme , inaccuracies in the outer boundary conditions influence the entire domain instantaneously , not only after the outgoing radiation interacts with the boundary as is the case in a hyperbolic scheme .moving the outer boundary much further out by adding more coarse grids is not feasible for this evolution because of the computational cost involved in the current single - processor implementation of the code .in particular , the value of the conformal factor far out appears to be very sensitive to the dynamics close to the origin .this is not the case for , which is evolved from the initial data by a hyperbolic pde . as a result, the difference has a large , spatially nearly constant contribution that is nearly resolution independent , thus causing the convergence factor to degrade . at timeslater than those shown here , the convergence factors ultimately decrease because large gradients develop due to the grid - stretching property of maximal slicing .however , here we are only interested in the part of the evolution until just after the formation of the apparent horizon .figure [ f:8.5resconv ] also indicates that both increasing the resolution and the boundary radius improves the numerical conservation of the adm mass . for the larger domain at the higher resolution , the initial oscillations are at the level and after mass remains constant to within .residual convergence factors for a brill wave with and , for two different domain sizes .the bottom right panel shows the numerically computed adm mass for the two different resolutions , ( dashed ) and ( solid ) . 
, title="fig:",scaledwidth=95.0% ] + residual convergence factors for a brill wave with and , for two different domain sizes .the bottom right panel shows the numerically computed adm mass for the two different resolutions , ( dashed ) and ( solid ) ., title="fig:",scaledwidth=95.0% ] residual convergence factors for a brill wave with and , for two different domain sizes .the bottom right panel shows the numerically computed adm mass for the two different resolutions , ( dashed ) and ( solid ) . , title="fig:",scaledwidth=95.0% ] + residual convergence factors for a brill wave with and , for two different domain sizes .the bottom right panel shows the numerically computed adm mass for the two different resolutions , ( dashed ) and ( solid ) ., title="fig:",scaledwidth=95.0% ] next we evaluate the lapse function in the origin . as a consequence of the singularity avoidance property ofmaximal slicing , the lapse is expected to collapse towards zero as a strong - gravity region of spacetime is approached .our result in figure is in good agreement with and appears to be insensitive to the resolution and boundary location .we also plot the riemann invariant in the origin .the decay of this quantity after agrees roughly with , although we find a somewhat different behaviour at earlier times ( rather than increasing right from the beginning , first decreases for a short time ) .however there is a rather strong dependence on resolution and outer boundary location in this case , which indicates that the results for should be interpreted with care .lapse function and riemann invariant in the origin for a brill wave with and .results for two different domain sizes and for two different resolutions ( dashed ) and ( solid ) are shown .( the four curves nearly coincide in the left plot . ) , scaledwidth=95.0% ] lapse function and riemann invariant in the origin for a brill wave with and .results for two different domain sizes and for two different resolutions ( dashed ) and ( solid ) are shown .( the four curves nearly coincide in the left plot . ) , scaledwidth=95.0% ] finally we search for an apparent horizon . the evolution of the angle ( cf . )is shown in figure [ f:8.5horizon ] .it agrees reasonably well with , although we find that the horizon forms slightly earlier at rather than at .also shown in figure [ f:8.5horizon ] is the mass of the horizon , computed from its area . when it first forms , the horizon has mass .the numerically computed adm mass at this time is so that , as compared with reported in .after its formation the horizon expands slightly ( its mass increases by about ) and appears to ultimately settle down .the results stated here correspond to the run on the larger domain at the higher resolution and the errors are estimated by comparison with the other runs .apparent horizon finder angle ( cf . ) and mass for a brill wave with and .results for two different domain sizes and for two different resolutions ( dashed ) and ( solid ) are shown ., scaledwidth=95.0% ] apparent horizon finder angle ( cf . 
) and mass for a brill wave with and .results for two different domain sizes and for two different resolutions ( dashed ) and ( solid ) are shown ., scaledwidth=95.0% ] we now turn to a highly prolate brill wave with , , and , which again has .this is one of the initial data sets considered in and it was also evolved ( until ) in .our spatial domain has size .the resolution on the base grid is taken to be and .there are grids in the fmr hierarchy .again all the grids contain the origin and the grid spacing is successively halved in both dimensions .the number of grid points in the radial direction is the same on all grids but is successively multiplied by a factor of ( approximately ) . in this way the finer grids are better adapted to the prolate shape of the initial data .the grid hierarchy is shown in figure [ f : fmrhierarchy ] .the spacing on the finest grid is and . by comparison, the grid used in has and close to the origin , roughly four times coarser .figure [ f:325origin ] shows the evolution of the lapse function and riemann invariant in the origin .these agree well with , except for the part of ( but note the strong dependence of this quantity on resolution and outer boundary location apparent from figure [ f:8.5origin ] ) .lapse function and riemann invariant in the origin for a brill wave with , and ., scaledwidth=95.0% ] lapse function and riemann invariant in the origin for a brill wave with , and ., scaledwidth=95.0% ] the approach to apparent horizon formation is shown in figure [ f:325horizon ] .we are able to evolve the wave for much longer than the authors of and we confirm their conjecture that an apparent horizon will indeed form .it first appears at and its shape is remarkably close to a sphere in our coordinates . at its formationthe apparent horizon has mass and at this time the adm mass has settled down to a value of so that .this is in accordance with the penrose inequality , which conjectures that .apparent horizon formation for a brill wave with , and .top left : horizon finder angle ( cf . ) .top right : adm mass ( solid ) and apparent horizon mass ( dashed ) . bottom : coordinate location of the apparent horizon when it first forms at ( solid ) , and at ( dashed ) ., scaledwidth=95.0% ] apparent horizon formation for a brill wave with , and .top left : horizon finder angle ( cf . ) .top right : adm mass ( solid ) and apparent horizon mass ( dashed ) . bottom : coordinate location of the apparent horizon when it first forms at ( solid ) , and at ( dashed ) ., title="fig:",scaledwidth=95.0% ] + apparent horizon formation for a brill wave with , and .top left : horizon finder angle ( cf . ) .top right : adm mass ( solid ) and apparent horizon mass ( dashed ) . bottom : coordinate location of the apparent horizon when it first forms at ( solid ) , and at ( dashed ) ., title="fig:",scaledwidth=85.0% ]we considered constrained evolution schemes for the einstein equations in axisymmetric vacuum spacetimes .one of the motivations for this work was to try and understand why the numerical elliptic solvers in some of these schemes , e.g. , failed to converge in certain situations .we found that this was related to the elliptic equations becoming indefinite .apart from the implications for numerical convergence , we also pointed out that such equations might admit nonunique solutions . 
in section [ s : orscheme ] , we presented a new scheme that does not suffer from this problem .its main features are a suitable rescaling of the extrinsic curvature with the conformal factor , and separate solution of the momentum constraints and isotropic spatial gauge conditions .thus the scheme involves the solution of six elliptic equations rather than four as in .given that multigrid methods can be used to solve these equations at linear complexity , this does not imply a severe increase in computational cost .our numerical implementation uses second - order accurate finite differences and combines mesh refinement with a multigrid elliptic solver , based on the algorithm of .we work in cylindrical polar coordinates . unlike in , we do not compactify the spatial domain but impose boundary conditions at a finite distance from the origin . as an application of the code , we evolved brill waves in section [ s : numresults ] .we carried out a careful convergence test in section [ s : convtest ] and demonstrated that the code is approximately second - order convergent .for a stronger super - critical wave ( section [ s : spherwave ] ) , convergence of the residuals of the unsolved evolution equations was somewhat slower at earlier times .varying the domain size indicated that this is mainly caused by inaccuracies in the outer boundary conditions we use .these errors appear to have little effect on the formation of the apparent horizon . in section [ s : prolatewave ] , we evolved a highly prolate brill wave .such initial data were conjectured in to form a naked singularity rather than a black hole , which would violate weak cosmic censorship .however an apparent horizon does form in our evolution .we thus improve on the results of the authors of , who could not evolve the wave for a sufficiently long time to see an apparent horizon form although they conjectured that this would happen eventually .there are many directions in which this work could be extended . for simplicity we only considered vacuum spacetimes with a hypersurface - orthogonal killing vector , i.e. , vanishing _twist_. the addition of matter and twist should be straightforward .care must be taken that the additional variables are rescaled with suitable powers of the conformal factor so that the hamiltonian constraint remains definite .an elegant framework capable of including twist is provided by the formalism . from a mathematical point of view, it would be interesting to prove that the cauchy problem or even the initial - boundary value problem is well posed for the present ( or a similar ) formulation of the axisymmetric einstein equations .these questions were studied for similar hyperbolic - elliptic systems in .it is a disadvantage of constrained evolution schemes that inaccuracies in the outer boundary conditions influence the entire domain instantaneously .more work is needed on improved boundary conditions in the context of mixed hyperbolic - elliptic formulations of the einstein equations .an alternative to an outer boundary at a finite distance would be the compactification towards spatial infinity used in .however , outgoing waves ultimately fail to be resolved on such a compactified grid , which because of the elliptic equations involved has again adverse effects on the entire solution .this problem is avoided if hyperboloidal slices are used , which can be compactified towards future null infinity ( see for a related review article ) . 
in this casethe constraint and evolution equations become formally singular at the boundary , which needs to be addressed in a numerical implementation . on the computational side ,the accuracy of the code could be improved by using fourth ( or higher ) order finite differences .computational speed could be gained by parallelizing the code and running it on multiple processors .it would be interesting to evolve even more prolate brill waves than the one considered here .however the elliptic equations then become more and more anisotropic and the relaxation method employed in the multigrid method must be modified to ensure convergence . for the wave considered in this paper , line relaxationwas found to accomplish this but we have not been able to achieve convergence for even more prolate configurations .more sophisticated modifications such as operator - based prolongation and restriction are likely to be required . in any case , in order to evolve some of the extremely prolate initial data sets considered in where in , a radically different approach will probably be needed .another interesting application of our code would be brill wave critical collapse .however , preliminary results indicate that we are currently unable to evolve waves close to the critical point for a sufficiently long time because reflections from the interior amr grid boundaries become increasingly pronounced as more and more finer grids are added close to the origin .the mesh refinement algorithm of that we adopt appeared to be sufficiently robust in the scalar field evolutions of but we suspect that the situation is quite different in vacuum collapse . an improved amr algorithm for mixed hyperbolic - elliptic systems of pdes will probably be required .the author wishes to thank sergio dain , david garfinkle , john stewart and darragh walsh for helpful discussions on this work .he gratefully acknowledges funding through a research fellowship at king s college cambridge .earlier parts of this work were supported by grants to caltech from the sherman fairchild foundation , nsf grant phy-0601459 and nasa grant nng05gg52 g .here we list the evolution equations of the formulation of the axisymmetric einstein equations presented in section [ s : orscheme ] . the variables and actively evolved , \nonumber\\ - { { \textstyle \frac{1}{2}}}\alpha \psi^{-6 } r^{-1 } [ \eta_+^2 + 2 \eta_- r{\tilde w}- 4 ( r{\tilde w})^2 ] + 4 r^{-1 } \beta^r { \tilde w}.\end{aligned}\ ] ] the variables and are not evolved but solved for using the constraint equations . however the einstein equations also imply the following evolution equations that may be used to check the accuracy of a numerical scheme . 
\nonumber\\ + \beta^r ( \eta^r{}_{,rz } + \eta^z{}_{,rr } + 3 r^{-1 } \eta_+ ) + \beta^z ( \eta^r{}_{,zz } + \eta^z{}_{,rz } ) - \eta_+ \beta_- \nonumber\\ + { \textstyle \frac{2}{3 } } ( \beta^r{}_{,z } - 2 \beta^z{}_{,r } ) \eta_- + { \textstyle \frac{2}{3 } } r { \tilde w}\beta_+ + { { \textstyle \frac{1}{3}}}\alpha \psi^{-6 } \eta_+ ( \eta_- + 4 r { \tilde w } ) , \\\label{e : etamevolution } \fl \eta_{-,t } = \alpha \psi^2 \rme^{-2rs } [ - \alpha^{-1 } \alpha_{,rr } + \alpha^{-1 } \alpha_{,zz } - 2 \psi^{-1 } \psi_{,rr } + 2 \psi^{-1 } \psi_{,zz } + 2 r_r ( a_r + r^{-1 } ) \nonumber\\ - 2 a_z r_z + 2 p_r ( 2 a_r + 3 p_r + 2 r_r ) - 2 p_z ( 2 a_z + 3 p_z + 2 r_z ) ] \nonumber\\ + \beta^r ( \eta^r{}_{,rr } - \eta^z{}_{,rz } + 3 r^{-1 } \eta_- ) + \beta^z ( \eta^r{}_{,rz } - \eta^z{}_{,zz } ) \nonumber\\ - { \textstyle \frac{2}{3 } } \beta_- ( 2 \eta_- - r { \tilde w } ) + ( \beta^z{}_{,r } - \beta^r{}_{,z } ) \eta_+ + { { \textstyle \frac{1}{3}}}\alpha \psi^{-6 } \eta_- ( \eta_- + 4 r { \tilde w } ) .\end{aligned}\ ] ]centred second - order accurate finite - difference operators are used at all interior grid points .we only give the expressions for derivatives in the direction ; corresponding expressions hold in the direction .the symbol denotes equality up to . / [ 4 ( \delta r)^2 ] .\end{aligned}\ ] ] at ( ) , a dirichlet condition is imposed if the variable is an odd function of ( ) and a neumann condition if it is an even function of ( ) .the parities of the various variables are as follows , the value of a variable at the ghost point is set to be if obeys a dirichlet condition and if it obeys a neumann condition ; this is set according to . ] .this discretization at the boundary is second - order accurate .dirichlet conditions at the outer boundary are implemented in a similar way . in order to impose the falloff condition, we note that is a linear function of and so we linearly extrapolate in the radial direction from the interior grid points to the ghost points in order to find the values of there .the sommerfeld condition is rewritten in the form and discretized at the outer ghost points on the base grid of the amr hierarchy in order to integrate the values of there forward in time .here backward differencing is used in the direction normal to the boundary , and similarly in the direction .
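to make the discretization described in this appendix concrete , a minimal sketch of the parity - based ghost filling on the axis , the centred second - order stencils , and a sommerfeld - type update at the outer ghost points is given below ( python , illustrative only ; the outgoing - wave form assumed in the last function is our reading of the elided equation , with the field falling off like 1/r ) :

import numpy as np

def fill_axis_ghost(u, parity):
    # cell - centred grid: the ghost cell just across the axis mirrors the
    # first interior cell, with a sign fixed by the parity of the variable
    # (+1 for even / neumann - type, -1 for odd / dirichlet - type).
    u[0] = parity * u[1]
    return u

def d_dr(u, dr):
    # centred second - order first derivative at the interior cells
    return (u[2:] - u[:-2]) / (2.0 * dr)

def d2_dr2(u, dr):
    # centred second - order second derivative at the interior cells
    return (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr ** 2

def sommerfeld_rhs(u, r, du_dr_oneside):
    # time derivative of u at an outer ghost point for an assumed outgoing
    # spherical wave u ~ h(r - t)/r; the radial derivative is supplied by a
    # one - sided (backward) difference from the interior, as in the text.
    return -du_dr_oneside - u / r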
|
this paper is concerned with the einstein equations in axisymmetric vacuum spacetimes . we consider numerical evolution schemes that solve the constraint equations as well as elliptic gauge conditions at each time step . we examine two such schemes that have been proposed in the literature and show that some of their elliptic equations are indefinite , thus potentially admitting nonunique solutions and causing numerical solvers based on classical relaxation methods to fail . a new scheme is then presented that does not suffer from these problems . we use our numerical implementation to study the gravitational collapse of brill waves . a highly prolate wave is shown to form a black hole rather than a naked singularity .
|
recommender systems help to overcome information overload by providing personalized suggestions based on the user s history / user s interest .because recommender systems can increase user experience by providing more relative information or even find information user ca nt find otherwise , they are deployed on many websites , especially e - commerce websites . according to relying on the content of the item to be recommended or not , the underlying recommendation algorithms are generally classified into two main categories : content based and collaborative filtering ( cf ) recommendations . in this paper , we focus on the collaborative filtering algorithms . in the essence , they make recommendations to the current user by fusing the opinions of the users who have similar choices or tastes . it is a more natural way of personalization because people are social animals .there are always persons who share common interests with us that reliable recommendations can be made upon .the problem settings of collaborative filtering are simply described as follows .the user set is denoted as and the item set is denoted as .users give ratings $ ] to the items that they have seen indicating their preference .so the ratings form a rating matrix where the unknown ratings are left as .each row of the rating matrix represents a user and each column represents an item .the goal of collaborative filtering is therefore to choose and recommend the items that the current user would probably like most according to this rating matrix .the existing cf algorithms are divided into two categories : memory - based methods and model - based methods .memory - based cf algorithms directly use the rating matrix for recommendation .they predict the unknown preference by first finding similar users or similar items denoted as user neighbors and item neighbors , respectively ; by fusing these already known and similar ratings they can guess the unknown ratings . on the other hand , model - based cf algorithms learn an economical models representing the rating matrix .these models are refereed to as user profile or item profile .recommendation thus becomes easy and intuitive on these lower dimension attributes .whilst considered to be one of most successful recommendation methods , collaborative filtering suffers from two severe problems , namely sparsity and scalability .please note that for a single user it is impossible for her to rate all the items and it is impossible for a single item been rated by all the user either . actually , most values in the rating matrix are unknown , i.e. , .since our recommendations are solely depending on this very sparse rating matrix , how to leverage these data to generate good recommendations is challenging . on the other hand ,real - world recommender systems often have millions of users and items . for many recommendation algorithms ,the training model needs hours even days to be updated . unchanging and outdated recommendations are likely to disappoint our users .so we require the algorithms to be as fast as possible in both training and recommendation phases . in real world recommender systems ,another practical but often overlooked issue related to high quality recommendation is how to consider the evolution of user interests over time .take news recommender systems such as google personalized news for example , there are at least two reasons to consider time effects in their recommendation algorithms .first , people always want to read the latest news . 
to recommend a piece of news happened ten days agois not likely of equal interest as the news happened just now .( using a slicing time window to cut the old news off may be one trivial solution , but obviously not the ideal solution . )more importantly , people s tastes are always changing .a young man would like to see recommendations about digital cameras if he plans to buy one .but after he already owns it , he will not take interest in the recommendations on buying a new digital camera .so time factors are of vital importance for the success of recommender systems in many applications , especially e - commerce , advertisement and news services . to recommend using these sparse and evolving preference data , we propose a novel collaborative filtering algorithm named ant collaborative filtering ( acf ) which is inspired by ant colony optimization algorithms .similar to other swarm intelligence algorithms , acf could handle very sparse rating data by virtue of pheromone transmission and is a natural extension of other cf techniques in recommender systems in which preferences keep changing .we make an analogy of users to ants which carry specific pheromone initially .when the user rates a movie or simply reads a piece of news on the web , our algorithm links the user and the item by the mechanism of pheromone transmission from the user to the item and vice versa .so the types of pheromone and their amounts constitute a clear clue of historical preference and turn out to be a strong evidence for finding similar users and items .the remainder of this paper is structured as follows : we first introduce some preliminaries . in section [ section3 ]we present our ant collaborative filtering algorithm and in section [ section4 ] we improve this algorithm using dimension reduction technique . in section [ section5 ] , some related works are described .we then report the experimental results on two different datasets in section [ section6 ] .finally , we conclude the paper with some future works .the majority of collaborative filtering algorithms follows the rating prediction manner , i.e. , predicting the ratings for all the unseen items for the current user and then recommend the items with the highest prediction scores .yet an alternative view of the recommendation task is to generate a top list of items that the user is most likely interested in , where is the length of final recommendation list . in thisregard , collaborative filtering can be directly cast as a relevance ranking problem .these two types of algorithms are called rating - based and ranking - based recommendations , respectively .one of the disadvantages of rating - based cf algorithms is that they can not make good recommendations in the situation of implicit user preference data .we should notice that explicit user rating data that rating - based algorithms rely on are not always available or are far from enough . in most cases ,users express their preference by implicit activities ( such as a single click , browsing time , etc . ) instead of giving a rating . as the consequence of lack of these rating data ,rating - based cf algorithms can not work properly . in this sense ,ranking - based algorithms are as important as rating - based algorithms , if not more important . proposed an item ranking algorithm by computing their similarity with the items that the customers have already bought . 
proposed a learning to rank algorithm that can find a function that correctly ranks the items as many as possible for the users .for every item pair and , if user likes more than , we have , otherwise the ranking error increases .although similar algorithms have been applied to ranking problem for search engine , the size of both users and items in recommender systems makes this personal ranking algorithm intimidating for implementation . in general ,ranking - based cf hasnt been paid enough attention to in academia although it is already popular in commercial recommender systems compared to rating - based recommendation .we deem the reason for this is that it is relatively difficult to define a proper loss function as their rating prediction counterpart . in this paper ,both types of algorithms are concerned .recommendation activity often involves two disjoint sets of entities : the user set and the items set .a natural representation of these two groups is by means of bipartite graph , where one set of nodes are items and the other set are users .links go only between nodes of different sets , in case that the user selects the item or rates the item .the rating matrix of collaborative filtering domain therefore could be elegantly represented by this weighed undirected bipartite graph and the original matrix is actually the adjacency matrix of this graph . as for implicit preference matrix ,the generated bipartite graph is unweighted and undirected .the two scenarios are shown in fig .[ fig : bipartite ] ( a ) and ( b ) . for rating prediction problem , we are interested in finding the user neighbors and item neighbors in both modes of the graph simultaneously . for ranking - based recommendation, we are interested in finding the missing edges between the two partite indicating potential interests between user and item . hereafter , we cast the cf problem as bipartite graph mining problem without special explanation .the data in recommender systems keep changing .not only new users and new items are continuously added to the system , the preferences of existing users and the features of existing items are also changing over time . to make better recommendations , algorithms should update the learned model efficiently and appropriately . by `` efficiently ''we mean the algorithm has the ability for fast or even real - time update . on the other hand , ``appropriately '' means we must take time factor into consideration when deciding on what to recommend to the users . for example , we can not expect users are equally satisfied with the recommendations of a very old movie and a brand new movie , even they have similar ratings . in spite of its importance ,time factors gain little attention in recommendation research area until recently .the authors of proposed an incremental update method to compute the proximity of any two given nodes in the bipartite graph changing on the year base . analyzed the evolution of ratings in the netflix movie recommender system .it proposed two cf algorithms that consider time effects : item - item neighborhood method and rating matrix factorization method .both the methods are elaboratively tailored for netflix movie rating data and achieve the best results so far . as pointed out in ,preference evolution is subtle and delicate . for a single user ,what we can use are often just a few preference instances inundated by millions of non - relevant data .the above mentioned methods have been designed and tested for special recommendation scenarios . 
however , in a more general sense , a robust recommendation algorithm considering time evolution for a wider range of applications including both rating - based and ranking - based recommender systems is still absent and is the main focus of our research .before proceeding on a more detailed description , we first introduce notations and ant colony algorithm which sheds some light on our proposed methods . .notations [ cols="<,<",options="header " , ] for all the algorithms in the above experiments we use batched training data without considering time influence . since the personalize recommendations such as book and music recommendation performances are heavily depending on the time , results can be further improved by updating our iacf model using preference data in their original time order .the iacf with and without considering time sequence are referred to as time and timeless version .we choose 15 time points within the time span , denoted as from to .the results are shown in the fig .[ fig : time ] . from the results we could see a clear increase of precision as time goes on .it means when users keep using our recommender systems , the recommendation will be more and more accurate .recommendation is becoming of more importance for many internet services .we have seen plenty of researches concerning on improving the accuracy of recommendation algorithms on static , rating - based data . but as pointed out in the very recent paper , improved accuracy is not a panacea , there are also other challenges for collaborative filtering .scalability is one of most mentioned concerns and time effects are also a indispensable factor to be considered in dynamic recommender systems . in this paper, we proposed a novel cf algorithm inspired by the ant colony behavior . by pheromone transmission between users and items and evaporation of those pheromones on both users and items over time , acf could flexibly reflect the latest user preference and thus make most reliable recommendations . in a nutshell ,our major contributions are : * it introduced the concepts in ant colony optimizations ( such as pheromone , evaporation and etc . 
) into recommendation domain and proposed an incremental , scalable collaborative filtering algorithm that can nicely handle sparse and evolving rating data ; * it fused dimension reduction with the ant collaborative filtering algorithm above mentioned ; * the algorithm proposed in this paper could recommend with both strategies : rating - based and ranking - based recommendation which are used in explicit user preference and implicit user preference scenarios respectively ; * last but not least , acf algorithm is easy to be deployed on distributed computational resources , even in a peer - to - peer environment , which means a higher scalability and more importantly , user privacy protection .there are also some unexplored possibilities to improve the algorithm proposed in this paper .first , the initialization of pheromones does affect the final recommendation results as shown in the dimension reduction version of acf .there may be other initialization schemes that we can make further improvements .second , evaporation is an interesting while hard - to - tune mechanism that applications should find the most suitable rate to their own needs .last but not least , bipartite based ranking methods including ours are flexible and can often fuse user preference data other than user ratings .this is also a promising direction .we thank state key laboratory of computer science for subsidizing our research under the grant number : cszz0808 , the national science and technology major project under grand number:2010zx01036 - 001 - 002 - 2 and the grand project network algorithms and digital information of the institute of software , chinese academy of sciences under grant number : yocx285056 .10 j. wang , s. robertson , a.p .de vries , and m.j.t . reinders .probabilistic relevance ranking for collaborative filtering . in _ information retrieval _11(6 ) : 477 - 497 , 2008 . b. marlin .collaborative filtering : a machine learning perspective .master s thesis , university of toronto , 2004 .aggarwal , j.l .wolf , k .-wu , and p.s .yu . horting hatches an egg : a new graph - theoretic approach to collaborative filtering . in _ proceedings of the fifth acm sigkdd _ : 201 - 212 , 1999 .t. zhou , j. ren , m. medo , and y. zhang .bipartite network projection and personal recommendation . in _physical review e ( statistical , nonlinear , and soft matter physics ) _ , 76(4 ) , 2007 .deng , m.r .lyu , and i. king .a generalized co - hits algorithm and its application to bipartite graphs . in _ proceedings of the 15th acm sigkdd conference on knowledge discovery and data mining _ , 2009 .mirza , b.j .keller , and n. ramakrishnan .studying recommendation algorithms by graph analysis . in _ journal of intelligent information systems _20(2 ) : 131 - 160 , 2003 .p. han , b. xie , f. yang , and r.m .an adaptive spreading activation scheme for performing more effective collaborative recommendation . in _ proceeding of international conference on database and expert systems applications _ : 95 - 104 , 2005 .z. huang , h. chen , and d. zeng .applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering . in _acm transactions on information systems _22(1 ) : 116 - 142 , 2004 . n. craswell , and m. szummer .random walks on the click graph . in _ proceedings of the 30th annual international acm sigir conference _ : 239 - 246 , 2007 .marina meila , jianbo shi . learning segmentation by random walks . 
in _ proceedings of advances in neural information processing systems _ : 873 - 879 , 2000 .m. dorigo .optimization , learning and natural algorithms .ph.d.thesis , politecnico di milano , italy , 1992 .j. wu , and k. aberer . swarm intelligent surfing in the web . in _ proceedings of international conference on web engineering _ : 277 - 284 , 2003 .a. das , m. datar , a. garg , and s. rajaram .google news personalization : scalable online collaborative filtering . in _ proceedings of the 16th www conference _: 271 - 280 , 2007 . j. ben schafer , joseph a. konstan , john riedl .e - commerce recommendation applications . datamining and knowledge discovery , vol . 5 ( 1 - 2 ) , 2001:115 - 153 .
|
recommender systems require their recommendation algorithms to be accurate and scalable , and to handle very sparse training data which keep changing over time . inspired by ant colony optimization , we propose a novel collaborative filtering scheme , ant collaborative filtering , that enjoys these favorable characteristics . with the mechanism of pheromone transmission between users and items , our method can pinpoint the most relevant users and items even in the face of the sparsity problem . by virtue of the evaporation of existing pheromone , we capture the evolution of user preference over time . meanwhile , the computational complexity is comparatively small and the incremental update can be done online . we design three experiments on three typical recommender systems , namely movie recommendation , book recommendation and music recommendation , which cover both explicit and implicit rating data . the results show that the proposed algorithm is well suited for real - world recommendation scenarios which have a high throughput and are time sensitive .
|
in order to function and survive in the world , cells must make decisions about the reading out or `` expression '' of genetic information .this happens when a bacterium makes more or less of an enzyme to exploit the variations in the availability of a particular type of sugar , and when individual cells in a multicellular organism commit to particular fates during the course of embryonic development . in all such cases ,the control of gene expression involves the transmission of information from some input signal to the output levels of the proteins encoded by the regulated genes .although the notion of information transmission in these systems usually is left informal , the regulatory power that the system can achieve the number of reliably distinguishable output states that can be accessed by varying the inputs is measured , logarithmically , by the actual information transmitted , in bits . sincerelevant molecules often are present at relatively low concentrations , or even small absolute numbers , there are irreducible physical sources of noise that will limit the capacity for information transmission .cells thus face a tradeoff between regulatory power ( in bits ) and resources ( in molecule numbers ) .what can cells do to maximize their regulatory power at fixed expenditure of resources ?more precisely , what can they do to maximize information transmission with bounded concentrations of the relevant molecules ?we focus on the case of transcriptional regulation , where proteins called transcription factors ( tfs)bind to sites along the dna and modulate the rate at which nearby genes are transcribed into messenger rna . because many of the regulated genes themselves code for tf proteins , regulatory interactions form a network .the general problem of optimizing information flow through such regulatory networks is quite hard , and we have tried to break this problem into manageable pieces . given the signal and noise characteristics of the regulatory interactions , cells can try to match the distribution of input transcription factor concentrations to these features of the regulatory network ; even simple versions of this matching problem make experimentally testable predictions . assuming that this matching occurs, some regulatory networks still have more capacity to transmit information , and we can search for these optimal networks by varying both the topology of the network connections and the strengths of the interactions along each link in the network ( the `` numbers on the arrows '' ) . 
we have addressed this problem first in simple networks where a single input transcription factor regulates multiple non interacting genes , and then in interacting networks where the interactions have a feedforward structure .but real genetic regulatory networks have loops , and our goal here is to study the simplest such case , where a single input transcription factor controls a single self interacting gene .does feedback increase the capacity of this system to transmit information ?are self activating or self repressing genes more informative ?since networks with feedback can exhibit multistability or oscillation , and hence a nontrivial phase diagram as a function of the underlying parameters , where in this phase diagram do we find the optimal networks ?auto regulation , both positive and negative , is one of the simplest and most commonly observed motifs in genetic regulatory networks , and has been the focus of a number of experiments and modeling studies ( see , for example , refs ) .a number of proposals have been advanced to explain its ubiquitous presence .negative feedback ( self repression ) can speed up the response of the genetic regulatory element , and can reduce the steady state fluctuations in the output gene expression levels .positive feedback ( self activation ) , on the other hand , slows down the dynamics of gene expression and sharpens the response of a regulated gene to its external input .activating genes could thus threshold graded inputs , transforming them into discrete , almost `` digital '' outputs , allowing the cell to implement binary logical functions .if self activation is very strong , it can lead to multistability , or switch like behavior of the response , so that the genetic regulatory element can store the information for long periods of time ; such elements will also exhibit hysteretic effects .weak self activation , which does not cause multistability , has been studied less extensively , but could play a role in allowing the cell to implement a richer set of input / output relations . alternatively ,if the self activating gene product can diffuse into neighboring nuclei of a multicellular organism , the sharpening effect of self activation can compensate for the `` blurring '' of responses due to diffusion and hence open more possibilities for noise reduction through spatial averaging .many of the ideas about the functional role of auto regulation are driven by considerations of noise reduction .the physical processes by which the regulatory molecules find and bind to their regulatory sites on the dna , the operation of the transcriptional machinery , itself subject to thermal fluctuations , and the unavoidable shot noise inherent in producing a small number of output proteins all contribute towards the stochastic nature of gene expression and thus place physical limits on the reliability of biological computation . in the past decadethe advance of experimental techniques has enabled us to measure the total noise in gene expression and sometimes parse apart the contributions of various molecular processes towards this `` grand total '' . with the detailed knowledge about noise in gene expression we can revisit the original question and ask : can both forms of auto - regulation help mitigate the deleterious effects of noise on information flow through the regulatory networks and if so , how ? 
all of our previous work in information transmission in transcriptional regulation has been in the steady state limit .a similar approach was taken in ref , where the authors analyze information flow in elementary circuits including feedback , but with different model assumptions about network topology and noise .more recently , de ronde and colleagues have systematically reexamined the role of feedback regulation on the fidelity of signal transmission for time varying , gaussian signals in cases where the ( nonlinear ) behavior of the genetic regulatory element can be linearized around some operating point .they found that auto activation increases gain to noise ratios for low frequency signals , whereas auto repression yields an improvement for high frequency signals .while many of the functions of feedback involve dynamics , as far as we know all analyses of information transmission with dynamical signals resort to linear approximations . herewe return to the steady state limit , where we can treat gene regulatory elements as fully nonlinear devices . while we hope that our analysis of the self regulated gene is interesting in itself, we emphasize that our goal is to build intuition for the analysis of more general networks with feedback .figure [ f1 ] shows a schematic of the system that we will analyze in this paper , a gene that is controlled by two regulators : directly by an external transcription factor , as well as in a feedback fashion by its own gene products .we will refer to the transcription factor as the regulatory _ input _ ; its concentration in the relevant ( cellular or nuclear ) volume will be denoted by . in addition , the gene products of , whose number in the relevant volume we denote by and to which we refer to as the _ output _ , can also bind to the regulatory region of , thereby activating or repressing the gene s expression . as we attempt to make our description of this system mathematically precise , the heart of our model will be the _ regulatory function _ that maps the concentrations of the two regulators at the promotor region of to the rate at which output molecules are synthesized .is depicted by a thick black line and a promoter start signal .gene products denoted as blue circles can bind to the regulatory sites ( one in this example ) that control the expression of .direct control over the expression of is exerted by molecules of the transcription factor ( green diamonds , two binding sites ) . ]we can write the equation for the dynamics of gene expression from by assuming that synthesis and degradation of the gene products are single kinetic steps , in which case we have here , is the maximum rate for production of , is the protein degradation time , and is the concentration of the output molecules in the relevant volume . 
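the displayed rate equation is garbled in this copy ; from the definitions just given and the discussion that follows , it is presumably of the birth - death form dg/dt = r_max f(c , g) - g/tau , with the langevin term added in the next paragraph . under that assumption , a minimal sketch of relaxing the deterministic dynamics to its steady state ( python , illustrative ; the regulation function used here is a placeholder , not the mwc form adopted later ) :

def steady_state(f, c, r_max=1.0, tau=1.0, dt=0.01, t_max=200.0, g0=0.0):
    # integrate dg/dt = r_max * f(c, g) - g / tau by forward euler until it
    # settles; for this one - dimensional system a bounded trajectory relaxes
    # to a stable fixed point (the lowest branch when starting from g0 = 0).
    g = g0
    for _ in range(int(t_max / dt)):
        g += dt * (r_max * f(c, g) - g / tau)
    return g

# placeholder regulation function, activation by the input only:
f_example = lambda c, g: c ** 2 / (1.0 + c ** 2)
print(steady_state(f_example, c=1.0))   # ~ 0.5 in units where r_max * tau = 1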
to include the noise effects inherent in creating and degrading single molecules of we introduce the langevin force , andwe will discuss the nature of this and other noise sources in detail later .importantly , departures from our simplifying assumptions about the kinetics can , in part , be captured by proper treatment of the noise terms , as discussed below .we are interested in the information that the steady state output of provides about the input concentration .following our previous work , we address this problem in stages .first we relate information transmission to the response properties and noise in the regulatory element , using a small noise approximation to allow analytic progress ( section [ infsecta ] ) .then we show how the relevant noise variances can be computed from the model in eq ( [ eqd ] ) , taking advantage of our understanding of the physics underlying the essential sources of noise ( section [ insectb ] ) ; this discussion is still quite general , independent of the details of the regulation function .then we explain our choice of the regulation function , adapted from the monod changeux description of allosteric interactions ( section [ mwc ] ) .because feedback allows for bifurcations , we have to map the phase diagram of our model ( section [ phasediag ] ) , and develop approximations for the information transmission near the critical point ( section [ critinfo ] ) and in the bistable regime ( section [ infobi ] ) .our discussion reviews some earlier results , in the interest of being self contained , but the issues in sections [ phasediag][infobi ] are all new to the case of networks with feedback .we are interested in computing the mutual information between the input and the output of a regulatory element , in steady state .we have agreed that the input signal is the concentration of the transcription factor , and we will take the output to be the concentration of the gene products , which we colloquially call the expression level of the gene .an important feature of the information transmission is that its mathematical definition is independent of the units that we use in measuring these concentrations , so when we later choose some natural set of units we wo nt have to worry about substituting into the formulae we derive here . following shannon ,the mutual information between and is defined by \ , { \rm bits } , \label{isym}\ ] ] where input concentrations are drawn from the distribution , the output expression levels that we can observe are drawn from the distribution , and the joint distribution of these two quantities is .we think of the expression level as responding to the inputs , but this response will be noisy , so given the input there is a conditional distribution . then the symmetric expression for the mutual information in eq ( [ isym ] ) can be rewritten as a difference of entropies , - \int dc\ ; p_{\rm in}(c ) s[p(g|c ) ] , \label{info_ent}\ ] ] where the entropy of a distribution is defined , as usual , by = -\int dx\ , p(x ) \log_2 p(x ) .\ ] ] finally , we recall that notice that the mutual information is a functional of two probability distributions , and .the latter distribution describes the response and noise characteristics of the regulatory element , and is something we will be able to calculate from eq ( [ eqd ] ) .following refs , we may then ask : given that is determined by the biophysical properties of the genetic regulatory element , what is the optimal choice of that will maximize the mutual information ? 
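before turning to the optimization of the input distribution , the definitions above can be illustrated with a small numerical recipe : discretize c and g , build the joint distribution p(c , g) = p_in(c) p(g|c) , and evaluate eq ( [ isym ] ) directly ( python , illustrative ; the equivalent entropy - difference form of eq ( [ info_ent ] ) gives the same number ) :

import numpy as np

def mutual_information_bits(p_joint):
    # mutual information of a joint distribution given as a 2-d array
    # p_joint[i, j] ~ p(c_i, g_j); the array is normalised internally.
    p = p_joint / p_joint.sum()
    p_c = p.sum(axis=1, keepdims=True)        # marginal over g -> p_in(c)
    p_g = p.sum(axis=0, keepdims=True)        # marginal over c -> p_out(g)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (p_c * p_g)[mask])))

the same function , applied to a joint built from any model p(g|c) and a candidate p_in(c) , is what the optimization below seeks to maximize over p_in .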
to this end we have to solve the problem of extremizing = i(c;g ) - \lambda\int dc\ ; p_{\rm in } ( c ) , \label{extremal}\ ] ] where the lagrange multiplier enforces the normalization of .other `` cost '' terms are possible , such as adding a term proportional to , which would penalize the average cost of input molecules , although here we take the simpler approach of fixing the maximum possible value of , which is almost equivalent . if the noise were truly zero ,we could write the distribution of outputs as , \ ] ] where is the average output as a function of the input , i.e. the mean of the distribution .then if the function is invertible , we can write the entropy of the output distribution as & \equiv & -\int dg\ , p_{\rm out}(g)\log_2 p_{\rm out}(g)\nonumber\\ & \rightarrow & -\int dc\ , p_{\rm in}(c ) \log_2\left [ p_{\rm in}(c ) { \bigg | } { { d\bar g ( c)}\over { dc}}{\bigg |^{-1}}\right ] , \end{aligned}\ ] ] and we can think of this as the first term in an expansion in powers of the noise level . keeping only this leading term , we have = -\int dc\ , p_{\rm in}(c ) \log_2\left [ p_{\rm in}(c ) { \bigg | } { { d\bar g ( c)}\over { dc}}{\bigg |^{-1 } } \right ] - \int dc\ , p_{\rm in}(c ) s[p(g|c ) ] - \lambda\int dc\ ; p_{\rm in } ( c ) , \label{pointback}\ ] ] and one can then show that the extremum of occurs at }\left| \frac{d\bar{g}(c)}{dc}\right| , \label{gensol}\ ] ] where the entropy is measured in bits , as above , and the normalization constant } .\ ] ] the maximal value of the mutual information is then simply .in the case where is gaussian , , \ ] ] the entropy is determined only by the variance , ={1\over 2 } \log_2\left [ 2\pi e \sigma_g^2(c)\right ] .\ ] ] it is useful to think about propagating this ( output ) noise variance back through the input / output relation , to define the effective noise at the input , then we can write as before , is the normalization constant , ^{1/2 } , \label{eq2}\ ] ] where is the maximal value of the input concentration , and again we have the information . equation ( [ eq2 ] ) relates , and hence the information transmission , to the steady state response and noise in our simple regulatory element .these quantities are calculable from the dynamical model in eq ( [ eqd ] ) , if we understand the sources of noise .there are two very different kinds of noise that we need to include in our analysis .first , we are describing molecular events that synthesize and degrade individual molecules , and individual molecules behave randomly .if we say that there is synthesis of molecules per second on average , then if the synthesis is limited by a single kinetic step , and if all molecules behave independently , then the actual rate will fluctuate with a correlation function . similarly ,if on average there is degradation of molecules per second , then the actual degradation rate will fluctuate with . 
thus if we want to describe the time dependence of the number of molecules , we can write where if we are close to the steady state , , and if synthesis and degradation reactions are independent , we have if some of the reactions involve multiple kinetic steps , or if the molecules we are counting are amplified copies of some other molecules , then the noise will be proportionally larger or smaller , and we can take account of this by introducing a `` fano factor '' , so that for more about the langevin description of noise in chemical kinetics , see ref .the second irreducible source of noise is that the synthesis reactions are regulated by transcription factor binding to dna , and these molecules arrive randomly at their targets .one way to think about this is that the concentrations of tfs which govern the synthesis rate are not the bulk average concentrations over the whole cell or nucleus , but rather concentrations in some small `` sensitive volume '' determined by the linear size of the targets themselves . concretely ,if we write the synthesis rate as where is the local concentration of the input transcription factor and is the concentration of the gene product that feeds back to regulate itself , we should really think of these concentrations as and , where we separate the mean values and the local fluctuations ; note that the mean gene product concentration is the ratio of the molecule number to the relevant volume .the local concentration fluctuations are also white , and the spectral densities are given accurately by dimensional analysis , so that where is the diffusion constant of the transcription factor molecules , which we assume is the same for the input and output proteins .we can put all of these factors together if the noise is small , so that it drives fluctuations which stay in the linear regime of the dynamics .then if the steady state solution to eq ( [ eqd ] ) in the absence of noise is denoted by , we can linearize in the fluctuations : \delta g + \xi_{\rm eff}(t ) , \,{\rm where } \label{ll2}\\ \langle \xi_{\rm eff}(t ) \xi_{\rm eff}(t')\rangle & = & 2 \left [ \nu \frac{\bar g}{\tau } + \left ( r_{\rm max } { { \partial f(c,\gamma)}\over{\partial \gamma}}{\bigg |}_{\gamma = \bar g /\omega}\right)^2\frac{2\bar g}{\omega d \ell } + \left ( r_{\rm max } { { \partial f(c,\bar g /\omega)}\over{\partial c}}\right)^2 \frac{2c } { d \ell } \right ] \delta(t - t ' ) \label{ll3}.\end{aligned}\ ] ] to solve this problem and compute the variance in the output number of molecules , it is useful to recall the langevin equation for the position of an overdamped mass tethered by a spring of stiffness , subject to a drag force proportional to the velocity , : from equipartition we know that these dynamics predict the variance . 
identifying terms with our langevin description of the synthesis and degradation reactions , we find , \ ] ] where we understand that the partial derivatives of are to be evaluated at the steady state .we have defined as the maximum synthesis rate , so that the regulation function is in the range , and hence the maximum mean expression level is .thus it makes sense to work with a normalized expression level , and to think of the regulation function as depending on rather than on the absolute concentration .then we have , \ ] ] where is the maximal mean concentration of output molecules .as discussed previously , we can think of as the maximum number of independent output molecules , and this combines with the other parameters in the problem to define a natural concentration scale , . once we choose units where , we have a simpler expression , , \ ] ] where we notice that almost all the parameters have been eliminated by our choice of units .finally , we need to use the variance to compute the information capacity of the system , , where from eq ( [ eq2 ] ) we have ^{1/2 } = \left [ \frac{n_g}{2\pi e}\right]^{1/2}\tilde z,\\ \tilde z & = & \int_0^cdc\;\left [ \left(\frac{d\bar{g}}{dc}\right)^2 \frac{1 - ( \partial f/\partial g ) } { \bar{g } + ( \partial f/\partial g ) ^2 ( \bar{g}/\gamma_{\rm max } ) + ( \partial f/ \partial c ) ^2 c } \right]^{1/2 } ; \end{aligned}\ ] ] is the maximum concentration of input transcription factor molecules , in units of .notice that the parameter just scales the noise and ( in the small noise approximation ) thus adds to the information , ; the problem of optimizing information transmission thus is the problem of optimizing .further , because the total derivative can be expressed though putting all of these pieces together , we find in what follows we will start with the assumption that , since both input and output molecules are transcription factor proteins , their maximal concentrations are the same , and hence ; we will return to this assumption at the end of our discussion . if a regulatory function is chosen from some parametric family , eq ( [ cap1 ] ) allows us to compute the information transmission as a function of these parameters and search for an optimum . before embarking on this path, however , we note that the integrand of can have a divergence if .this is a condition for the existence of a _critical point _ , and in this simple system the critical point or bifurcation separates the regime of monostability from the regime of bistability .we expect that at this point the fluctuations around are no longer gaussian , and we need to compute higher order moments .thus , eq ( [ cap1 ] ) , as is , can safely be used only in the monostable regime away from the critical point ; in section [ critinfo ] we compute the expression for the mutual information near to and at the critical point for a particular choice of .there are even more problems in the bistable regime , since there are multiple solutions to eq ( [ ss ] ) , and in section [ infobi ] we discuss information in the bistable regime . to continue, we must choose a regulatory function . 
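Before specializing to a particular regulatory function, note that the total-derivative relation used above is just implicit differentiation of the steady-state condition, in which the mean output solves g = f(c, g). The sketch below checks it numerically for an assumed, weakly self-activating regulation function; the functional form is chosen only for illustration and is not the paper's model.

```python
import numpy as np

# Implicit differentiation of the steady state g_bar(c) = f(c, g_bar(c)) gives
#   d g_bar / dc = (df/dc) / (1 - df/dg),  evaluated at (c, g_bar(c)).
# Quick numerical check for an assumed, weakly self-activating regulation
# function (monostable, so plain fixed-point iteration converges).
def f(c, g):
    u = c**2 + 0.3 * g
    return u / (1.0 + u)

def steady_state(c, n_iter=200):
    g = 0.5
    for _ in range(n_iter):
        g = f(c, g)
    return g

c = np.linspace(0.05, 5.0, 400)
g_bar = np.array([steady_state(ci) for ci in c])

eps = 1e-6
df_dc = (f(c + eps, g_bar) - f(c - eps, g_bar)) / (2 * eps)
df_dg = (f(c, g_bar + eps) - f(c, g_bar - eps)) / (2 * eps)

implicit = df_dc / (1.0 - df_dg)          # chain-rule prediction
numeric = np.gradient(g_bar, c)           # direct numerical derivative

print("max |implicit - numeric| =", float(np.max(np.abs(implicit - numeric))))
```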
in ref , where we analyzed genetic networks with feedforward interactions , we studied hill type regulation and monod wyman changeaux like ( mwc ) regulation , and found that the mwc family encompasses a broader set of functions than the hill family ; for a related discussion see ref .mwc functions also allow for a natural introduction of convergent control , where a node in a network is simultaneously regulated by several types of regulatory molecules . briefly, in the mwc model one assumes that the molecule or supermolecular complex being considered has two states , which we identify here with on and off states of the promoter .the binding of each different regulatory factor is always independent , but the binding energies depend on whether the complex is in an off or on state , so that ( by detailed balance ) binding shifts the equilibrium between these two states . in our case, we have two regulatory molecules , the input transcription factor with concentration and the gene product with concentration .if there are , respectively , and identical binding sites for these molecules then the probability of being in the on state is where are the binding constants in the on state , and similarly are the binding constants in the off state ; reflects the `` bare '' free energy difference between the two states .if the binding of the regulatory molecules has a strong activating effect , then we expect , and similarly for , which means that only one binding constant is relevant for each molecule , and we will refer to these as and . then we can write where is the input concentration at which .notice that if binding of strongly represses the gene , then we have , but this can be simulated by changing the sign of .thus we should think of the parameters and as being not just the number of binding sites , but also an index of activation vs. repression .we will also treat these parameters as continuous , which is a bit counterintuitive but allows us to describe , approximately , situations in which the multiple binding sites are inequivalent , or in which and are not infinitely different . from the discussion in the previous section, we will need to evaluate the partial derivatives of with respect to it arguments .for the mwc model , these derivatives take simple forms : let us start by examining the stability properties of eq ( [ ss ] ) , which determines the steady state . viewed as a function of , is sigmoidal , and so if we try to solve graphically we are looking for the intersection of a sigmoid with the diagonal , as a function of . in doing thiswe expect that , for some values of the parameters , there will be exactly one solution , but that as we change parameters ( or the input ) , there will be a transition to multiple solutions .this transition happens when just touches the diagonal , that is , when for some it holds true that and . using eq ( [ g_derivative ] ) , these two conditions can be combined to yield an equation for : this is a quadratic in for which no real solution on $ ] exists if either or .when either of these conditions are fulfilled , the gene is in the monostable regime . at the critical point , and is illustrated in fig [ f - bistableregion ] , as a function of the effective input , where , from eq ( [ f_def ] ) , . 
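A minimal numerical version of this "sigmoid versus diagonal" construction is sketched below: it scans for sign changes of f(c, g) − g on a dense grid, refines each root by bisection (the same refinement described in the next paragraph), and classifies each fixed point as stable or unstable by the slope criterion. The two-ligand MWC form used here is the standard textbook one and is only assumed to match the paper's reduced parametrization qualitatively; the parameter values are arbitrary.

```python
import numpy as np

# Standard two-ligand MWC on-state probability (assumed form; the paper's
# reduced parametrization may differ in detail).  c is the input TF
# concentration, g the normalized gene product that feeds back; h, n are the
# effective numbers of binding sites, L = exp(F0) the bare off/on weight.
def f_mwc(c, g, L=200.0, h=2.0, n=2.0,
          Kc_on=0.1, Kc_off=10.0, Kg_on=0.05, Kg_off=5.0):
    off_over_on = (((1 + c / Kc_off) / (1 + c / Kc_on)) ** h *
                   ((1 + g / Kg_off) / (1 + g / Kg_on)) ** n)
    return 1.0 / (1.0 + L * off_over_on)

def fixed_points(c, n_grid=4001):
    """All solutions of g = f(c, g) on [0, 1], found by sign-change scanning
    followed by bisection; returns (g*, stable?) pairs."""
    g = np.linspace(0.0, 1.0, n_grid)
    phi = f_mwc(c, g) - g
    roots = []
    for i in np.flatnonzero(np.sign(phi[:-1]) * np.sign(phi[1:]) < 0):
        lo, hi = g[i], g[i + 1]
        for _ in range(60):                      # bisection refinement
            mid = 0.5 * (lo + hi)
            if (f_mwc(c, lo) - lo) * (f_mwc(c, mid) - mid) <= 0:
                hi = mid
            else:
                lo = mid
        gs = 0.5 * (lo + hi)
        eps = 1e-6
        slope = (f_mwc(c, gs + eps) - f_mwc(c, gs - eps)) / (2 * eps)
        roots.append((gs, slope < 1.0))          # stable iff df/dg < 1 at the root
    return roots

for c in [0.03, 0.1, 0.3, 1.0, 3.0]:
    pts = fixed_points(c)
    label = "bistable" if sum(s for _, s in pts) >= 2 else "monostable"
    print(f"c = {c:5.2f}: {label:10s}",
          ["%.3f%s" % (g, "*" if s else "") for g, s in pts])
```

Sweeping the input and the feedback parameters with this kind of scan maps out the monostable and bistable regions, in the spirit of fig [ f-bistableregion ].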
for the special case of is not hard to compute the analytical approximations for the boundary of the bistable domain .first , eq ( [ mwc ] ) can be expanded for large , yielding a quadratic equation for that has two solutions only when to get the lower bound , we expand eq ( [ mwc ] ) for small and retain terms up to the quadratic order in ; the resulting quadratic equation yields two solutions only if both approximations are plotted as circles and crosses , respectively , in fig [ f - bistableregion ] , and match the exact curves well . for other values of we solve eq ( [ ss ] ) exactly , using a bisection method to get all solutions for a given and we partition the range of adaptively into a denser grid where the derivative is large . for integer values of the equation can be rewritten as a polynomial in , it is technically easier to find the roots of the polynomial ; alternatively one can solve for given using a simple bisection , because is an injective function . . *a ) * the phase diagram as a function of and the input - dependent term . in the region between the black solid lines two solutions exist for every value of the input ( y - axis ) .the corresponding critical value is at ( cusp of the black solid lines ) .circles and crosses represent analytical approximations to the exact boundary of the bistable region for and large ( see text ) . for three choices of denoted by vertical dashed lines ,the input / output relations are plotted in b. * b)*. the critical solution ( red ) has an infinite total derivative at , .the bistable system ( blue ) has three solutions , two stable and one unstable , for a range of inputs that can be read out from the plot in a. , width=192 ] in this section we will generalize the computation of noise and information in the region close to the critical point , where the gaussian noise approximation breaks down .we start by rewriting eq s ( [ ll1][ll2 ] ) in our normalized units , .\end{aligned}\ ] ] this is equivalent to brownian motion of the coordinate in a potential defined by with an effective temperature that varies with position .if we simulate this langevin equation , we will draw samples out of the distribution , but we can construct this distribution directly by solving the equivalent diffusion or fokker planck equation , + \frac{\partial^2 } { \partial g^2}\left[t(g)p(g , t)\right];\ ] ] the steady state solution is then , and this is .\label{intg}\ ] ] the `` small noise approximation '' in this extended framework corresponds to expanding the integrand in eq ( [ intg ] ) around the mean , .if we write , we will find where our previous approximations correspond to keeping only .the critical point is where , and we have to keep higher order terms . in principlethe expansion coefficients have contributions from the of the effective temperature , but we have checked that these contributions are negligible near criticality .then we have , \label{eqset}\\ a_3 & = & -\frac{1}{3 ! }\frac{f''}{t } , \nonumber\\ a_4 & = & -\frac{1}{4!}\frac{f'''}{t } , \nonumber\end{aligned}\ ] ] where primes denote derivatives with respect to , and all terms should be evaluated at . 
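The role of the higher-order coefficients can be seen directly from this stationary solution. The sketch below evaluates the exact steady-state density for an assumed force F(g) whose derivative nearly vanishes at the fixed point (i.e. close to critical) and an assumed state-dependent effective temperature T(g), and compares it with its Gaussian (Laplace) approximation; neither F nor T is the model's exact form.

```python
import numpy as np

# Stationary solution of  dP/dt = -d/dg[F(g) P] + d^2/dg^2[T(g) P]  in 1D:
#   P_s(g)  proportional to  (1 / T(g)) * exp( integral_0^g F(g')/T(g') dg' ).
# Compared with its Gaussian (Laplace) approximation around the mode, for an
# assumed near-critical force F = f_steep(g) - g (sigmoid with maximal slope
# just below one) and an assumed effective temperature T(g).
def f_steep(g):
    return 1.0 / (1.0 + np.exp(-3.6 * (g - 0.5)))

def F(g):
    return f_steep(g) - g

def T(g):
    return 0.01 * (g + 0.05)

g = np.linspace(1e-4, 1.0 - 1e-4, 5001)
dg = g[1] - g[0]
log_w = np.cumsum(F(g) / T(g)) * dg - np.log(T(g))   # log of unnormalized P_s
log_w -= log_w.max()
P = np.exp(log_w)
P /= np.sum(P) * dg                                   # normalize on (0, 1)

i0 = int(np.argmax(P))                                # interior mode
curv = (np.log(P[i0 + 1]) - 2 * np.log(P[i0]) + np.log(P[i0 - 1])) / dg**2
sigma2_gauss = -1.0 / curv                            # Laplace-approximation variance

mean = np.sum(g * P) * dg
var = np.sum((g - mean) ** 2 * P) * dg
skew = np.sum((g - mean) ** 3 * P) * dg / var ** 1.5

print(f"exact:    mean = {mean:.4f}  var = {var:.3e}  skewness = {skew:.2f}")
print(f"gaussian: mode = {g[i0]:.4f}  var = {sigma2_gauss:.3e}  skewness = 0")
```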
for the monod wyman changeaux regulatory function in eq ( [ mwc ] ) , all these derivatives can be evaluated explicitly : \label{dfdg3}\end{aligned}\ ] ] from eq ( [ ccrit ] ) , the critical point occurs at when , and at this point the derivatives simplify : now we want to explore behavior in the vicinity of the critical point ; we will fix to its critical value , , and compute the derivatives in eqs ( [ dfdg1]-[dfdg3 ] ) as . consider therefore a small positive such that in a system with chosen and that yield critical behavior , the deviation from criticality above will happen at . to find the relation between and , we evaluate the derivative in eq ( [ dfdg1 ] ) at to form a function , which evaluates to 1 at .this function can be expanded in taylor series around ; the first order in vanishes and we find : therefore , the derivative deviates by from criticality at 1 when deviates by from the .we now perform similar expansions on the second- and third - order derivatives in eqs ( [ dfdg2],[dfdg3 ] ) , and evaluate the factors at the critical point : these expressions have been evaluated for , but we could have easily repeated the calculation by assuming that itself can deviate a bit from the critical value , i.e. , which would yield somewhat more complicated results that we do nt reproduce here .equations ( [ perturb1][perturb3 ] ) can be used in eq ( [ eqset ] ) to write down the probability distribution . far away from the critical pointthe gaussian approximation is assumed to hold , and can be set to 0 .close to the critical point the higher order terms and need to be included . to assess the range where this switchover occurs, we compare in eqs ( [ perturb1][perturb3 ] ) the leading to the subdominant correction : we insist that the quadratic correction in eq ( [ perturb2 ] ) is always smaller than linear , and that the linear correction in eq ( [ perturb3 ] ) is always smaller than constant ( we drop the quadratic correction there ) .we found empirically that including the higher - order corrections yields good results when the following conditions are simultaneously satisfied : in eq ( [ gensol ] ) , together with the quartic ansatz for in eq ( [ pquartic ] ) .for each , we evaluate two entropies of the conditional distribution : & = & \log_2\sqrt{2\pi e\sigma_g^2(c ) } \label{ngauss } \\s_4[p(g|c ) ] & = & -\int dg \;p(g|c ) \log_2 p(g|c ) . \label{nquartic}\end{aligned}\ ] ] is the noise entropy with higher - order terms included whenever conditions eq ( [ cond ] ) are met , and is the noise entropy in the gaussian approximation . equation ( [ gensol ] ) can be rewritten in a numerically stable fashion by realizing that , that is , that the optimal distribution of mean output levels is given by }/z .\label{cap2}\ ] ] to join the gaussian and higher - order approximations consistently in the regimes away and near the critical point , the noise entropy in eq ( [ cap2 ] ) is chosen to be the pointwise minimum of and .finally , the information is again , with }. \label{cap2}\ ] ] we now discuss the information capacity in the bistsable regime , away from the critical line . in this regime, each value of the input can give rise to multiple solutions of the steady state equation , eq ( [ ss ] ) .in the simplest case ( which includes the mwc regulatory functions ) , there will be two stable solutions , and , and a third solution , , that is unstable . 
in equilibrium, the system will be on the first branch with weight and on the second with weight .here we place upper bound on the information , again in the small noise approximation .this will be useful since , as we will see , even this upper bound is always less than the information which can be transmitted in the monostable or critical regime , and so we will be able to conclude that the optimal parameters for which we are searching are never in the bistable regime . in the bistable regime , the small noise approximation ( again , away from the critical line ) means that the conditional distributions are well approximated by a mixture of gaussians , to compute the information we need two terms , the total entropy and the conditional entropy .the conditional entropy takes a simple form if we assume the noise is small enough that the gaussians do nt overlap .then a direct calculation shows that , as one might expect intuitively , the conditional entropy is just the weighted sum of the entropies of the gaussian distributions , plus a term that reflects the uncertainty about which branch the system is on , & = & { 1\over 2}\sum_{{\rm i}=1}^2 w_{\rm i}(c ) \log_2 \left [ 2\pi e \sigma_{\rm i}^2 ( c)\right ] \nonumber\\ & & \,\,\,\,\,\,\,\,\,\ , - \sum_{{\rm i}=1}^2 w_{\rm i}(c ) \log_2 w_{\rm i}(c ) .\end{aligned}\ ] ] implementing the small noise approximation for the total entropy is a bit more subtle .we have , as usual , \nonumber\\ & & \,\,\,\,\,\,\,\,\,\,+ \int dc\ , p(c ) { { w_2(c)}\over\sqrt{2\pi\sigma_2 ^ 2(c ) } } \exp\left [ - { { ( g - \bar g_2 ( c))^2}\over{2\sigma_2 ^2(c ) } } \right ] .\nonumber\\ & & \end{aligned}\ ] ] if the noise is small , each of the two integrals is dominated by values of near the solution of the equation ; let s call these solutions .notice that these solutions might not exist over the full range of , depending on the structure of the branches .nonetheless we can write {c = \hat c_1(g)}\nonumber\\ & & \,\,\,\,\,\,\,\,\,\,+ \left [ w_2 ( c)p(c ) { \bigg | } { { d\bar g_2 ( c)}\over { dc } } { \bigg |}^{-1}\right]_{c = \hat c_2(g ) } , \end{aligned}\ ] ] with the convention that if we try to evaluate at a non existent value of , we get zero .thus , the full distribution is also a mixture , the fractional contributions of the two distributions are where denotes an integral over the regions along the axis where the function exists , and the ( normalized ) component distributions are {c = \hat c_{\rm i}(g)}\ ] ] the entropy of a mixture is always less than the average entropy of the components , so we have an upper bound & \leq & -\sum_{{\rm i}=1}^2 f_{\rm i}\int dg\ , p_{\rm i}(g ) \log_2 p_{\rm i}(g)\\ & = & -\sum_{{\rm i}=1}^2 \int dg\ , \left [ w_{\rm i } ( c)p(c ) { \bigg | } { { d\bar g_{\rm i } ( c)}\over { dc } } { \bigg |}^{-1}\right]_{c = \hat c_{\rm i}(g)}\log_2 \left [ { 1\over { f_{\rm i } } } w_{\rm i } ( c)p(c ) { \bigg | } { { d\bar g_{\rm i } ( c)}\over { dc } } { \bigg |}^{-1}\right]_{c = \hat c_{\rm i}(g)}\\ & = & -\sum_{{\rm i}=1}^2 \int_{\rm i } dc\ , p(c ) w_{\rm i}(c ) \log_2 \left [ { 1\over { f_{\rm i } } } w_{\rm i } ( c)p(c ) { \bigg | } { { d\bar g_{\rm i } ( c)}\over { dc } } { \bigg |}^{-1}\right ] .\end{aligned}\ ] ] an upper bound on the total entropy is useful because it allows us to bound the mutual information : - \int dc\ , p(c ) s[p(g|c)]\\ & \leq & -\sum_{{\rm i}=1}^2 \int_{\rm i } dc\ , p(c ) w_{\rm i}(c ) \log_2 \left [ { 1\over { f_{\rm i } } } w_{\rm i } ( c)p(c ) { \bigg | } { { d\bar g_{\rm i } ( c)}\over { dc } 
} { \bigg |}^{-1}\right ] \nonumber\\ & & \,\,\,\,\,\,\,\,\,\,- { 1\over 2}\sum_{{\rm i}=1}^2 \int dc\ , p(c ) w_{\rm i}(c ) \log_2 \left [ 2\pi e \sigma_{\rm i}^2 ( c)\right ] + \sum_{{\rm i}=1}^2 \int dc\ , p(c ) w_{\rm i}(c ) \log_2 w_{\rm i}(c)\\ & = & -\int dc\ , p(c ) \log_2 p(c ) + { 1\over 2}\int dc\ , p(c ) \sum_{{\rm i}=1}^2 w_{\rm i}(c ) \log_2 \left [ { \bigg | } { { d\bar g_{\rm i } ( c)}\over { dc } } { \bigg |}^2 { 1\over{2\pi e \sigma_{\rm i}^2 ( c)}}\right ] - \left ( - \sum_{{\rm i}=1}^2 f_{\rmi } \log_2 f_{\rm i } \right)\\ & \leq & -\int dc\ , p(c ) \log_2 p(c ) + { 1\over 2}\int dc\ , p(c ) \sum_{{\rm i}=1}^2 w_{\rm i}(c ) \log_2 \left [ { \bigg | } { { d\bar g_{\rm i } ( c)}\over { dc } } { \bigg |}^2 { 1\over{2\pi e \sigma_{\rm i}^2 ( c)}}\right ] , \label{ibound_n}\end{aligned}\ ] ] where in the last step we use the positivity of the entropy associated with the mixture weights .we can now ask for the probability distribution that maximizes the upper bound on , and in this way we can bound the capacity of the system .happily , the way in which the bound depends on , in eq ( [ ibound_n ] ) , is not so different from the dependencies that we have seen in the monostable case [ eq ( [ pointback ] ) ] , so we can follow a parallel calculation to show that \\ \phi(c ) & = & \sum_i w_i(c ) \ln \left[\sqrt{2 \pi e \sigma^2_{g_i}(c ) } \left|\frac{d \bar{g}_i(c)}{d c } \right|^{-1 } \right ] .\label{multii}\end{aligned}\ ] ] finally , to find the weights we can numerically integrate the fokker planck solution in eq ( [ intg ] ) to find to summarize , we have derived an upper bound on the information transmitted between the input and the output . the tightness of the bound is related to the applicability of the `` no overlap '' approximation , which for mwc like regulatory functions should hold very well , as we have verified numerically . if only one of the weights , our results reduce to those in the monostable case , as they should .we begin by showing that the analytical calculations presented in the previous section can be carried out numerically in a stable fashion , both away from and in the critical regime .we recall that the information transmission is determined by an integral [ eq ( [ cap1 ] ) ] , and that because we are working in the small noise approximation we have a choice of evaluating this as an integral over the input concentration or an integral over the mean output concentrations .figure [ f-3 ] shows the behavior of the integrands in these two equivalent formulations when we have chosen parameters that are close to the critical point in a self activating gene .the key result is that , once we include terms beyond the gaussian approximation to following the discussion in section [ critinfo ] , we really do have control over the calculation , and find smooth results as the parameter values approach criticality .thus , we can compute confidently , and search for parameters of the regulatory function that maximize information transmission . for , and , showing a self - activating gene with an almost critical value of . *b ) * noise in the input from eq ( [ sigmac ] ) , which is the integrand in the expression for information in eq ( [ cap1 ] ) , shows an incipient divergence between red vertical bars .inset : zoom - in of the peak shows that it can be sampled well by increasing the number of bins .different plot symbols indicate domain discretization into 500 ( black dots ) and 5000 ( red circles ) bins . 
at the critical point the divergence is hard to control numerically . * c ) * an alternative way of computing the same information , by integrating in the output domain as in eq ( [ cap2 ] ) . shown is the integrand in the gaussian noise approximation [ black , from eq ( [ ngauss ] ) ] and with quartic corrections [ red , from eq ( [ nquartic ] ) ] . at the critical point higher order corrections regularize the integrand , while away from the critical point the integrand smoothly joins with the gaussian approximation . this approach is stable numerically both away from and in the critical regime . * d ) * information as a function of for . critical is denoted by a dashed red line . integration across in the output domain with quartic corrections ( squares ) agrees well with the integration across in the input domain ( crosses ) away from , but also smoothly extends to the critical . this is a cut across the capacity plane in fig [ f-4]a ( denoted by a dashed yellow line ) for . ] we start the optimization by choosing the parameter values which describe the self interaction term , and then holding these fixed while we optimize the remaining ones , . in all these optimizations the parameter is driven to zero , and in this limit the mwc regulatory function of eq ( [ mwc ] ) simplifies to something more like the hill function , once we have optimized , we can explore the information capacity as a function of , at varying values of the remaining parameter in the problem , the maximal concentration of transcription factors . figure [ f-4]a maps out the `` capacity planes , '' at fixed . in detail , we show for three choices of our parameter , where is the information obtained with the optimal choice of and ; the best choice of parameters is depicted as a yellow circle in the capacity plane . for large values of , , the optimal solution is at or ( magenta square in the lower right corner ) , which drives the self activation term in eq ( [ mwc ] ) to zero , towards a noninteracting solution . we have checked that these solutions correspond to optimal solutions for a single noninteracting gene found in our previous work .
as is decreased , however , the optimal combination shifts towards the left in the capacity plane ( cyan square for ) , exhibiting a shallow but distinct maximum in information transmission .if we examine the mean input / output relations in fig [ f-4]b , we find nothing dramatic : the critical ( red ) solutions seem to have lower capacities ( which we carefully reexamine blow ) , while other quite distinct parameter choices for nevertheless generate very similar mean input / output relations , because of the freedom to optimally choose parameters .the behavior of effective noise in the input , , given by eq ( [ sigmac ] ) and shown in fig [ f-4]c , is more informative ; recall that is proportional to the information transmission .noninteracting ( magenta ) solutions always have the lowest amount of noise at high input concentrations ( ) .as the self interaction turns on , the noise at high input increases , but that increase can be traded off for a decrease in noise at medium and low .while for low the critical ( red ) solution is never optimal , the solution with some self activation manages to deliver an additional bits of information .we have verified that for smaller value of the capacity plane is qualitatively the same , exhibiting the peak at a nontrivial ( but still not critical ) choice of ( not shown ) .intuitively , the self - activation parameters have three direct effects on the information transmission : they change the shape of the input / output curve , the self activation feeds some of the output noise back into the input , and the time ( protein lifetime ) that averages the input noise component gets renormalized to .the changes in the mean input / output relation can be partially compensated for by the correlated changes in the , as we observed in our optimal solutions , suggesting that regardless of the underlying microscopic parameters , it is the shape of itself that must be optimal . the increase in averaging time acts to increase the information , thus favoring self activation .however , this will simultaneously increase the noise in the output that feeds back , as well as drive towards infinite steepness at criticality , restricting the dynamic range of the output . at low is a parameter regime where increasing the integration time will help decrease the ( dominant ) input noise enough to result in a net gain of information . at high , input noise is minimal and thus this averaging effect loses its advantage ; instead , feedback simply acts to increase the total noise by reinjecting the output noise at the input , so that optimizing information transmission drives the self interaction to zero .next we examine in detail the behavior of information transmission close to the critical region . close to , but not at , the critical point we perform very fine discretization of the input range to evaluate the integral in eq ( [ cap1 ] ) , as reported in fig [ f-3]b . to validate that the information indeed reaches a maximum at nontrivial values of when , we cut through the capacity plane in fig [ f-4]a along the yellow line at , and display the resulting capacity values in fig [ f-5]a ( the results are numerically stable when integrated on or points ) .unlike for and , for the maximum is clearly achieved for a nontrivial value of , but away from the critical line , confirming our previous observations .we further examine the capacity directly on the critical line , , as a function of at ( denoted in fig [ f-4]a with dashed red line ) . 
the capacity in this casecan be calculated using eq ( [ cap2 ] ) and is shown in fig [ f-5]b .the capacity that includes quartic corrections is higher by bits than in the gaussian approximation , making the effect small but noticeable .we also confirmed that the capacity at the critical line joints smoothly with the capacity near the line , i.e. that there is no jump in capacity exactly at criticality , which presumably would be a sign of numerical errors .figure [ f-5]c finally validates that across the whole range of for , small increases in above the critical value always lead to an increase of information , demonstrating that the maximum is _ not _ achieved on the critical line .are optimized as a function of the maximal input concentration .the self - interacting system ( red ) allows for an arbitrary mwc - like regulatory function [ eq ( [ mwc ] ) ] with parameters .the noninteracting system ( black ) only has the mwc parameters and the leak ( see ref ) which can be reexpressed in terms of .bright red line with circles shows self - activating solutions which are optimal for , while dark red line with crosses shows self - repressing solutions , optimal for .plotted on the secondary vertical axis in green is the ratio between the self - interacting contribution to , and the input contribution to in the expression for the mwc regulatory function [ eq ( [ mwc ] ) ] . for where the interacting and noninteracting solution join ,this term falls to 0 , as expected . ]we next turn to the joint optimization of all parameters and plot the information transmission as a function of in fig [ f-6 ] .as we have discussed , optimization drives the strength of self activation to zero for ( but see below for self repression ) , and at these high values of the result of full optimization coincides with the non interacting case . as falls below one , the gain in information due to self activationis increased , reaching a significant value of about a bit for .as we have noted in section [ mwc ] , the self activating effect of on its own expression can be changed into a self repressing effect by simply flipping the sign of the parameter . to explore the optimization of such self repressing genes , we thus optimized the parameters as before , now constraining .results in plane are shown in fig [ f-7 ] , for and . for ( left column ) and ( right column ) ; plot conventions are the same as in fig [ f-4 ] .* a ) * the capacity decrease from the maximum value ( achieved at the parameter choice indicated by a yellow circle ) as a function of .the maximum information transmission is achieved for a non - interacting case ( magenta ) , for .in contrast , for , there is a non - trivial optimum for small values of and ( red ) . *b ) * the mean input / output solutions for three example systems from a ( red , magenta , cyan ) .* c ) * the effective noise in the input , , for example solutions in a. ] we find that , for large , the optimization process drives both and toward zero , so that the effective input / output relation is given by with nonzero values of being optimal . 
why is self repression optimal at large , when self activation is not ?repression suppresses noise at high concentrations of the input ( red vs magenta curves fig [ f-7]c for ) and allows the mean input / output curve to be more linear than in the non interacting case ( fig [ f-7]b ) , extending the dynamic range of the response .both these effects serve to increase information transmission .it is remarkable that when we put together the self activating and self repressing solutions , we see that they join smoothly at ( fig [ f-6 ] ) : self activation is optimal for , and self repression is optimal for , while precisely at the system that transmits the most information is non interacting .all of this discussion has been in the limit where the maximal concentration of output molecules , , is the same as the maximal concentration of input molecules , , so there is only one parameter that governs the structure of the optimal solution .this makes sense , at least approximately , since both input and output molecules are transcription factors , presumably with similar costs , but nonetheless we would like to see what happens when we relax this assumption . intuitively ,if we let become large , the system can achieve the advantages of feedback while the impact of noise being fed back into the system should be reduced .if we look at eq ( [ cap1 ] ) for , which controls the information capacity , we can take the limit to find now the only place where feedback plays an explicit role is in the term , which comes from the lengthening of the integration time , which in turn serves to average out the noise in the system .all other things being equal ( which may be hard to arrange ) , this suggests that information transmission will be maximized if the system approaches the critical point , where .the difficulty is that the system ca nt stay at the critical point for all values of the input , so there must be a tradeoff between lengthening the integration time and using the full dynamic range . to explore more quantitatively ,we treat as a parameter . when is small , we know that self activation is important , and in this regime we see from fig [ f-8 ] that changing matters . on the other hand , for large values of we know that ( at ) optimization drives self activation to zero , so we expect that there is less or no impact of allowing .we also see that , for a fixed small , increasing drives the system closer towards the signatures of criticality nonmonotonic behavior in the noise and a steepening of the input / output relation . in more detail , we can plot the value of as a function of and ,that is , check for each of the solutions in fig [ f-8]a how close the partial derivative comes to 1 , which is a direct measure of criticality .we confirm that , for the simultaneous choice of small and large , we indeed have . in the extreme , if we choose and , we find that the optimal and are driven towards small values ( but since are small is not negligible ) ; the optimal . with this value of , the corresponding critical value for would be , and the numerically found optimal value in our system is .the critical value for would be , and indeed at this small value the optimal mean input / output relation has a strong kink , the effective noise has a sharp dip and at this point climbs to 0.9936 .numerically , therefore , we have all the expected indications of emerging criticality at very large . 
for less extreme values, we expect the optimum to result from the interplay between the input and transmitted noise contributions , which in general need not be on the critical line . .* a ) * for various choices of indicated on the plot , the information transmission with the optimal choice with respect to all parameters is shown as a function of .two special systems of interest ( blue , green ) are chosen for the lowest value of . *b ) * the mean input / output relation for the blue and green system . the green system has a higher transmission , a steeper activation curve but a smaller dynamic range .* c ) * the effective noise in the input , , for the blue and green systems .the green system is closer to critical at the point where the mean input / output curve has the highest curvature and the noise exhibits a dip . ] to complete our exploration of the optimization problem , we have to consider parameter values for which the output has two locally stable values given a single input .quantitatively , in the bistable regime we have to solve for both stable solutions , with , and for the unstable branch .we can then evaluate the equilibrium probabilities of being on either of the stable branches using eq ( [ weights ] ) , and use eq ( [ multii ] ) to compute the capacity . as shown in an example in fig [ f-9]a, we never find the optimal solutions in the bistable region the capacity starts decreasing after crossing the critical line . consistently with our argument that output and feedback noise must become negligible for the regime of small and large , we find that optimization drives the system towards achieving maximal transmission closer and closer the the critical line ( which is approached from the monostable side ) , as shown in fig [ f-9]b .to summarize , we have analyzed in detail a single , self interacting genetic regulatory element .as in previous work , we based our analysis on three assumptions : ( i ) that the readout of the information between the input and output happens in steady state , ( ii ) that noise is small ; and ( iii ) that the constraint limiting the information flow is the finite number of signaling molecules . in addressing a system with feedback , assumption ( ii )requires technical elaboration near the critical point , as discussed above . but( i ) requires a qualitatively new discussion for systems with feedback , because of the possibility of multistability . at fixed , with the optimal choice of parameters , for three values of ( dark to bright red , respectively ) .dots show capacity calculation using the bistable code that can handle multiple branches using eq ( [ multii ] ) , solid line uses the monostable integration as in eq ( [ cap1 ] ) . *b ) * optimal capacity at very high ratios for different value of ( : circles , crosses , squares , stars , respectively ) . the optimumis pushed towards the critical line from the monostable side for large and small . in all cases ,information in the bistable regime is smaller than in the monostable regime . ]our analysis , with the steady state assumption , shows that truly bistable systems do not maximize the information .intuitively , this stems from the branch ambiguity : for a given input concentration a bistable system can sit on either one of the stable branches with some probability , and this uncertainty contributes to the noise entropy , thereby reducing the transmitted information . 
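The branch-ambiguity penalty is exactly the weight-entropy term in the conditional entropy formula above, and for two branches it is at most one bit. A quick numerical check with a well-separated two-component Gaussian mixture (illustrative numbers) confirms the "no overlap" expression and isolates this penalty:

```python
import numpy as np

# Differential entropy (bits) of a two-component Gaussian mixture, compared
# with the "no overlap" expression
#   sum_i w_i * (1/2) log2(2 pi e sigma_i^2)  -  sum_i w_i log2 w_i ,
# whose second term is the branch-ambiguity penalty.  The weights, means and
# widths below are illustrative.
w = np.array([0.3, 0.7])
mu = np.array([0.15, 0.80])
sig = np.array([0.02, 0.03])

g = np.linspace(-0.5, 1.5, 400_001)
dg = g[1] - g[0]
p = sum(wi / np.sqrt(2 * np.pi * si**2) * np.exp(-(g - mi) ** 2 / (2 * si**2))
        for wi, mi, si in zip(w, mu, sig))

mask = p > 1e-300                       # avoid log(0) where the density vanishes
S_numeric = -np.sum(p[mask] * np.log2(p[mask])) * dg

S_components = np.sum(w * 0.5 * np.log2(2 * np.pi * np.e * sig**2))
S_branch = -np.sum(w * np.log2(w))
print(f"numerical mixture entropy : {S_numeric:.4f} bits")
print(f"no-overlap formula        : {S_components + S_branch:.4f} bits")
print(f"  of which branch penalty : {S_branch:.4f} bits")
```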
butreaching steady state involves waiting for two very different processes .first , the system reaches a steady distribution of fluctuations in the neighborhood of each stable point , and then the populations of the two stable states equilibrate with one another . as with brownian motion in a double well potential ( or a chemical reaction ) , these two processes can have time scales that are separated by an exponentially large factor .alternatively , the timescales of real regulatory and readout processes could be such that the system does not have the time to equilibrate between the stable branches . in that case ,the history ( initial condition ) of the system will matter , and the final value of the output will be determined jointly by the input and the past state of the output , .such regulatory elements can be very useful , because they retain memory of the past and are able to integrate it with the new input ; a much studied biological example is that of a toggle switch .the information measure we use here , , will not properly capture the abilities of such elements , unless we modify it to include the past state , e.g. into : here both the input and current state together determine the output .such computations are beyond the scope of this paper , but could make precise our intuitions about switches with memory .multistability also allows for qualitatively new effects at higher noise levels . in our previous work we found that full information flow optimization ( without assuming small noise ) leads to higher capacities than a small noise calculation for an identical system and , moreover , that as noise grows , the optimal solutions start resembling a ( noisy ) binary switch where only the minimum and maximum states of input are populated in the optimal . at high noise , positive ( even bistable )autoregulation could stabilize these two states and make them more distinguishable . in this casethe design constraint for the genetic circuit is to use the smallest number of molecules that will prevent spontaneous flipping between the two branches on the relevant biological timescales . in this limit regulatory elementscan operate at high noise , with perhaps as few as tens of signaling molecules .with these caveats in mind , our main results can be summarized as follows . except at , the possibility of self interaction always increases the capacity of genetic regulatory elements . for ,the optimal strategy is self activation , while for it is self repression , as shown in fig [ f-6 ] .repression allows the system to reduce the effective noise at high input levels and straighten the input / output relation , packing more `` distinguishable '' signaling levels into a fixed input range .activation for small lengthens the effective integration time over which the ( dominant ) input noise contribution is averaged , thereby increasing information . the optimal level of self activation is never so strong as to cause bistability , but does , for small and large , push the optimal system towards the critical state .an interesting observation about the nature of the optimal solutions is that self activation which is strong enough to enhance information transmission may nonetheless not result in a functional input / output relation that looks very different from a system without self activation , albeit with different parameters . 
in such cases ,information transmission is enhanced primarily by the longer integration time and reduced effective noise level .this means that there need be no dramatic signature of self activation , so that diagnosing this operating regime requires a detailed quantitative analysis .more generally , this result emphasizes that the same phenomenology can result from different parameter values , or even networks with different topology in this case , with and without feedback . stepping back from the detailed results , our goal in this paper was to make progress on understanding the optimization of information flow in systems with feedback by studying the simplest example .the hope is that our results provide one building block for a theory of real genetic networks , on the hypothesis that they have been selected for maximizing information transmission . as discussed in previous work , a natural target for such analysis is the well studied gap gene network in the early _ drosophila _ embryo .but we can also hope to connect with a broader range of examples .the prevailing view of self activation has been that its utility stems from the possibility of creating a toggle ( or a flip - flop ) switch .this explanation , however , can only be true if self activation is strong enough to actually push the system into the bistable regime .de ronde and colleagues have improved on this intuition and have shown , in the linear response limit , that weak self activation will increase the signal to noise ratio for dynamic signals , a function very different from the switch . herewe show that in the fully nonlinear , but steady state treatment , monostable self activation can be advantageous for information transmission .furthermore , we show that there is a single control parameter , the ratio between the output and input noise strengths , which determines whether self activation or self repression is optimal . since more and more quantitative expression data is available , especially for bacteria and yeast, one could try assessing how the use of both motifs correlates with the concentrations of input and output signaling molecules .we thank t gregor , ef wieschaus , and especially cg callan for helpful discussions .work at princeton was supported in part by nsf grants phy0957573 and ccf0939370 , by nih grant r01 gm077599 , and by the wm keck foundation . for part of this work, gt was supported in part by nsf grant ef0928048 , and by the vice provost for research at the university of pennsylvania .m ronen , r rosenberg , bi shraiman & u alon ( 2002 ) assigning numbers to the arrows : parameterizing a gene regulation network by using accurate expression kinetics ._ proc natl acad sci usa _ * 99 : * 1055510560 .mb elowitz , aj levine , ed siggia & pd swain ( 2002 ) stochastic gene expression in a single cell . _ science _ * 297 * 11831186 .e ozbudak , m thattai , i kurtser , ad grossman & a van oudenaarden ( 2002 ) regulation of noise in the expression of a single gene ._ nature gen _ * 31 : * 6973 .wj blake , m kaern , cr cantor & jj collins ( 2003 ) noise in eukaryotic gene expression. _ nature _ * 422 : * 633637 .jm raser & ek oshea ( 2004 ) control of stochasticity in eukaryotic gene expression. _ science _ * 304 : * 18111814 .n rosenfeld , jw young , u alon , ps swain & mb elowitz ( 2005 ) gene regulation at the single cell level . _ science _ * 307 : * 19621965 .jm pedraza & a van oudenaarden ( 2005 ) noise propagation in gene networks ._ science _ * 307 : * 19651969 .
|
living cells must control the reading out or `` expression '' of information encoded in their genomes , and this regulation often is mediated by transcription factors proteins that bind to dna and either enhance or repress the expression of nearby genes . but the expression of transcription factor proteins is itself regulated , and many transcription factors regulate their own expression in addition to responding to other input signals . here we analyze the simplest of such self regulatory circuits , asking how parameters can be chosen to optimize information transmission from inputs to outputs in the steady state . some nonzero level of self regulation is almost always optimal , with self activation dominant when transcription factor concentrations are low and self repression dominant when concentrations are high . in steady state the optimal self activation is never strong enough to induce bistability , although there is a limit in which the optimal parameters are very close to the critical point .
|
in its simple form , a defaultable claim pays a certain pre - defined amount at the maturity of the contract , if there has not been a prior default , and pays zero otherwise . in this work ,an hedging analysis is carried out for these derivatives when the underlying risky asset is modeled by a finite variation lvy process .it is of mathematical and practical interest to study the hedging of defaultable claims when the asset prices are affected by jumps .the extension to more complicated derivatives and underlying processes will be interesting for future work .first , we review the literature and related previous works .we start by a definition of credit risk .credit risk is the risk associated with the possible financial losses of a derivative caused by unexpected changes in the credit quality of the counterparty s issuer to meet its obligations .the first paper that introduced credit risk for a path independent claim goes back to the work of merton ( 1974 ) .when analyzing a credit derivative , normally there are two prominent issues , pricing and hedging of the derivative .the latter is a more challenging question , especially when the market is incomplete . in most financial models ,even when working with simple stochastic processes , a complete hedge still may not be feasible for credit derivatives .there are different approaches to manage the risk in an incomplete market .quadratic hedging is a well developed and applicable method to manage the risk .schweizer ( 2001 ) or pham ( 1999 ) provide a good survey of quadratic hedging methods in incomplete markets . in schweizer ( 2001 )two quadratic hedging approaches are discussed for the case where the firm s value process is a semimartingale .these are local risk - minimization and mean - variance hedging . if we prefer a self - financing portfolio in order to hedge a contingent claim , we speak of mean - variance hedging .if we rather select a portfolio with the same terminal value as the contingent claim ( but not necessarily self - financing ) , we are in the context of a ( locally ) risk - minimizing approach .schweizer , heath and platen ( 2001 ) provide a comprehensive study and comparison of both approaches . in our paper a local risk - minimization approach is used to manage the risk associated with the defaultable claims .local risk - minimization hedging emerged in the development of the concept of risk minimization .fllmer and sondermann ( 1986 ) were among the first to deal with this problem .they solved the problem identifying the risk - minimization strategy when the underlying process is a martingale .the generalization to the local martingale case is done in schweizer ( 2001 ) .the solution of the risk - minimization problem is linked to the so called galtchouk - kunita - watanabe ( gkw ) decomposition assuming that the underlying process is a local martingale . for a non - martingale process , schweizer ( 1988 )provides an example of an attainable claim that does not admit a risk - minimization strategy .the extension is possible by putting more restrictive conditions on the underlying process as well as on the hedging strategies . 
literally saying , one has to pay more attention to the local properties of the problem .as for the role of the underlying process , it has to satisfy the structure condition is a square integrable special semimartingale with the canonical decomposition .then satisfies the structure condition , if there exists a predictable process such that for all , and the mean - variance tradeoff process , defined by , is -almost surely finite for all . ] ( sc ) , see schweizer ( 1991 ) or schweizer ( 2001 ) . under certain conditions like sc , a locally risk - minimizing strategy is equivalent to a more tractable one , called pseudo - locally risk - minimizing strategy .fllmer and schweizer ( 1991 ) gives a necessary and sufficient condition for the existence of a pseudo - locally risk - minimizing strategy .it turns out that finding these strategies is equivalent to the existence of a generalized version of the gkw decomposition , known as the fllmer - schweizer ( fs ) decomposition . a sufficient condition for the existence of an fs decompositionis provided by monat and stricker ( 1995 ) .although the existence of locally risk - minimizing strategies is proved under some conditions , it completely depends on the fs decomposition . in some special casesthere are constructive ways of finding this decomposition explicitly .the case of continuous processes is more flexible , and the well known method of minimal equivalent local martingale measure ( melmm ) is applicable .biagini and cretarola ( 2009 ) study general defaultable markets under a locally risk - minimizing approach. however the continuity of the underlying process is a crucial assumption in their work .most recently , choulli , vandaele and vanmaele ( 2010 ) find an explicit form of the fs decomposition based on a representation theorem .they aimed to provide a general framework under which the fs decomposition is obtained .while this work could fit into theirs , our approach leads to a more explicit form of the fs decomposition . by using a slightly different method ,we specifically focus on the hedging of the defaultable claims , based on the theory of local risk - minimization , assuming that the underlying process is a bounded variation lvy process with positive drift .our paper studies a structural model , in the sense that a default event is defined , and we use the whole market information represented by the filtration generated by the underlying process .however , while the default event is structural ( and so economically intuitive ) , we use an analysis like that of reduced form models and especially intensity based models .these models were pioneered by the works of artzner and delbaen ( 1995 ) or jarrow and turnbull ( 1995 ) , and they do not use or determine a default model of the firm .they use an intensity process or hazard process instead .martingale techniques and the idea of intensity in reduced form models are applied to analyze the structure of the defaultable claims . 
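To fix intuition before the formal development, the sketch below checks the intensity idea numerically for the simplest member of the class of models considered later: a compound Poisson process with positive drift and downward exponential jumps. For such a path, ruin can only occur at a jump, and the default indicator is compensated by the integrated jump-driven ruin rate, so both sides of that statement must have the same expectation. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Spectrally negative compound Poisson process with positive drift,
#   X_t = x0 + b*t - (sum of exponential(mean m) jumps arriving at rate lam).
# For tau = inf{t : X_t <= 0} the compensator of 1_{tau <= t} is
#   integral_0^{t ^ tau} nu((-inf, -X_s)) ds,  with nu((-inf, -x)) = lam*exp(-x/m),
# so  E[ 1_{tau <= T} ] = E[ integral_0^{T ^ tau} lam*exp(-X_s/m) ds ].
x0, b, lam, m, T = 1.0, 0.6, 1.0, 0.8, 5.0     # illustrative parameters
n_paths = 100_000

defaults = 0.0
compensators = 0.0
for _ in range(n_paths):
    t, x, comp = 0.0, x0, 0.0
    while True:
        w = rng.exponential(1.0 / lam)          # time to the next jump
        dt = min(w, T - t)
        # closed-form integral of lam*exp(-(x + b*u)/m) over u in [0, dt]
        comp += lam * m / b * (np.exp(-x / m) - np.exp(-(x + b * dt) / m))
        if t + w >= T:                          # no further jump before maturity
            break
        x += b * w - rng.exponential(m)         # drift up, then a downward jump
        t += w
        if x <= 0.0:                            # default caused by this jump
            defaults += 1.0
            break
    compensators += comp

print(f"P(tau <= T) by counting defaults     : {defaults / n_paths:.4f}")
print(f"P(tau <= T) via expected compensator : {compensators / n_paths:.4f}")
```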
in section [ sec : mp ] , under some conditions , a compensation formula is used to find a canonical decomposition of the defaultable process , where is a hitting time ( defining the default time ) and is a real valued function .this enables us to use compensator techniques for these types of processes .the predictable finite variation part of this decomposition is absolutely continuous with respect to the lebesgue measure .hence , when is a constant function , this precisely determines the intensity of .this intensity is already obtained by theorem 1.3 of guo and zeng ( 2008 ) for a hunt process that has finitely many jumps on every bounded interval . however , a finite variation lvy process could have infinitely many jumps on bounded intervals and so some modifications of their ideas would be essential .note that in our analysis the underlying process allows for jumps , the payoff is linked to a structural default event , and the probability measure is not necessarily a martingale measure .in addition we do not use any type of girsanov s theorem , but the results are based on solutions of partial integro - differential equations ( pide ) .we also study the structure of the default indicator process and finite horizon ruin time . apart from the theoretical concerns in this paper , the main effort is devoted to obtain answers to two interesting questions .the first question is , given a defaultable claim , how a locally risk - minimizing hedging strategy can be carried out . as it is not possible to eliminate the credit risk completely ,the second question is whether it is possible to design a customized defaultable security , to make the product completely hedgeable i.e. , the claim can be written as the sum of a constant and a stochastic integral with respect to the underlying process .this will result in a risk - free defaultable claim .in our setup we find necessary and sufficient conditions for the existence of such a product .the paper is structured as follows .the model , some preliminary assumptions , and results are provided in section [ sec : mp ] . a canonical decomposition of the stochastic process discussed in section [ sec : drg ] .this is an essential tool in our analysis .locally risk - minimizing hedging strategies for defaultable claims are obtained in section [ sec : hsdc ] . in section [ sec :edtpt ] , we take a look at the structure of the default time .we study a process , modeling a firm s assets value , constructed on a probability space and we denote by its natural filtration , completed and regularized so that it satisfies the usual conditions .we study defaultable claims with actual payoffs of the form where , is a real valued function , and is the maturity or expiration time of the security , and note that the firm s assets value is assumed to be observable .therefore from a financial point of view , the definition in for default makes sense if either the modeler is the firm s management , the accounting data are publicly available or they can be well estimated in the market .the security in pays if there is no default in ] and , respectively , stand for quadratic covariation and conditional quadratic covariation , see section 6 , chapter slowromancap2@ and section 5 , chapter slowromancap3@ of protter ( 2004 ) or section 4 , chapter slowromancap1@ of jacod and shiryaev ( 1987 ) for the definitions . 
for the sake of completeness , we recall some basic definitions .the set of all uniformly integrable martingales is denoted by , and is the set of all square integrable martingales , i.e. the set of all martingales such that <\infty ] the two processes and belonging to are called orthogonal to each other if belongs to .if the processes and belong to , then it can be proved that is orthogonal to if and only if , for example see theorem 4.2 , chapter slowromancap1@ of jacod and shiryaev ( 1987 ) .if and are two local martingales and ] , one can still show that is orthogonal to if and only if . a similar result to proposition [ prop:[x , x]=0 ] is still true for the conditional quadratic variations as well , andit is in fact a result that we use later .[ corol : < x , x>=0 ] suppose that belongs to or \in\mathscr a_{loc} ] , see proposition 4.50 , chapter slowromancap1@ of jacod and shiryaev ( 1987 ) .so it is enough to prove the result for \in\mathscr a_{loc} ] .now , we explain our model assumptions .this work is motivated by the first basic question of how the riskiness of a corporate bond can be managed .such bonds represent a special defaultable claim for the firm .specifically we focus on finite variation lvy processes modeling the firm s assets value. see geman ( 2002 ) for some motivations on how these processes model the dynamic of stock prices better than diffusion or jump - diffusion models .besides , some technical reasons also motivate this choice. the following hypothesis is used throughout the paper and especially in section [ sec : drg ] to find the canonical decomposition of the process , where is a real - valued function .[ hyp : x - integrability1 ] it is assumed that the firm s assets value process , starting at , is a bounded variation lvy process with lvy triplet , where the lvy measure is concentrated on .the process has the following lvy - it decomposition where }x\;v(dx) ] . first note that given the integrability condition of hypothesis , the expression is well defined in the sense that the integrals are finite .the pide in definition [ assumption : * ] will help to obtain strategies for the case when is not a martingale .this assumption can be thought of as a substitution for the change of probability measure . in general , the existence of a classical solution for the pide in definition [ assumption : * ] is not always guaranteed .however , if ( i.e. when is a martingale ) , then under some regularity conditions a classical solution can be provided by feynman - kac s representations . for a full discussion , examples , and many useful references ,we refer the reader to chapter 12 of cont and tankov ( 2004 ) . in short, the main problem is that since we are in a pure jump model , there is no diffusion and hence the proposed feynman - kac representation is not necessarily . in cases where this smoothness holdsthen the feynman - kac representation is in fact a solution .examples 1 and 2 of cont , tankov and voltchkova ( 2004 ) show how the regularity can be easily violated .if the smoothness does not hold or when is non - zero , then some approximation techniques must be used ; in practice , viscosity solutions can be applied . extending the results to the non - smooth caseis left for future work .finally , it is supposed that the market is frictionless and made of only two assets , a risky asset modeled by a process satisfying hypothesis [ hyp : x - integrability1 ] , and a risk - free one . 
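With the model and market now fixed, a first look at the claim defined above is a plain Monte Carlo estimate of its expected payoff under the physical measure, reusing the same illustrative compound-Poisson-with-drift dynamics as in the earlier sketch. Since the measure need not be a martingale measure and the market is incomplete, this expectation is not a no-arbitrage price; quantifying and hedging the residual risk is what the following sections address.

```python
import numpy as np

rng = np.random.default_rng(11)

# Expected payoff E[ 1_{tau > T} * Z(X_T) ] of a defaultable claim under the
# physical measure, for an illustrative spectrally negative compound Poisson
# process with drift.  Here Z is constant and equal to one (a defaultable bond
# paying one unit at T if there is no default); a state-dependent Z(X_T) could
# be substituted directly.
x0, b, lam, m, T = 1.0, 0.6, 1.0, 0.8, 5.0     # illustrative parameters
n_paths = 100_000

def terminal_value_or_none(rng):
    """Simulate one path exactly via its jump times; return X_T if tau > T, else None."""
    t, x = 0.0, x0
    while True:
        w = rng.exponential(1.0 / lam)
        if t + w >= T:
            return x + b * (T - t)              # survived: drift to maturity
        x += b * w - rng.exponential(m)         # drift up, then a downward jump
        t += w
        if x <= 0.0:
            return None                         # default before maturity

payoffs = np.array([0.0 if terminal_value_or_none(rng) is None else 1.0
                    for _ in range(n_paths)])
print(f"expected payoff E[1_(tau>T)]  : {payoffs.mean():.4f}")
print(f"standard error of the estimate: {payoffs.std(ddof=1) / np.sqrt(n_paths):.4f}")
```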
for simplicity, it is supposed that the value of the risk - free asset is equal to 1 at all times , i.e. the interest rate is zero .in this section we investigate the canonical decomposition of the process , where and is a function .more precisely , under some conditions we prove that it is a special semimartingale and we find a closed form for its finite variation predictable part .this result is used in section [ sec : hsdc ] . [ theorem : g - compensator - second approach ] assume that satisfies hypothesis [ hyp : x - integrability1 ] .let be a function that satisfies the integrability condition of hypothesis [ hyp : integrability ] on .then the process , where , is a special semimartingale and the process is an - local martingale , where the stopping time is defined by and the operator is given by .because the function is a function , the process is a semimartingale and so by using the product formula of semimartingales , for we have .\ ] ] to get the canonical decomposition of , we prove that the processes defined by each of the three terms on the right - hand side of the above equation are special semimartingales and obtain their canonical decomposition .the rest of the proof is divided into four steps. * step 1 . *since is a function , by applying it s formula , we have that see theorem 4.2 of kyprianou ( 2006 ) for a proof . by the compensation formula ,see theorem 4.4 of kyprianou ( 2006 ) , we get that =\\ & & { \mathbb{e}}[\int_0^t\int_{\mathbb r}h_s\big(f(s , x_{s^-}+y)-f(s , x_{s^-})\big)\;v(dy)ds ] , \end{aligned}\ ] ] for all bounded non - negative predictable processes , with the understanding that one of the expectations is well defined if and only if the other one is well defined as well and they are equal . hence by the integrability condition of hypothesis [ hyp : integrability ] and using corollary 4.5 of kyprianou ( 2006 ), one can show that for , where is an - local martingale and is a predictable finite variation process .the process is given by , where the operator is defined by this proves that and hence are special semimartingales .therefore since the first term on the right - hand side of the above is a local martingale , the first term of is a special semimartingale and its predictable finite variation part is then given by * step 2 . *since is a special semimartingale , the second term of is also a special semimartingale . to find its canonical decomposition , we consider two cases .first , in this case , the process is a compound poisson process plus drift starting at , and there are finitely many jumps on every bounded interval .furthermore , since , one can easily check that for every , . also by remark [ remark : tist ] , the stopping time is now totally inaccessible , hence theorem 1.3 of guo and zeng ( 2008 ) is applicable , and so the process }f(s , x_{s})1_{\{\tau>{s}\}}\;v(dy)\;ds\right)_{t\geq0}\ ] ] is an - local martingale .now assume that , then by theorem 21.3 of sato ( 1999 ) there are infinitely many number of jumps on every bounded interval. 
therefore the result of guo and zeng ( 2008 ) is not directly applicable this time .however , since , every is an irregular point for ] is almost surely finite , and so by lemmas 3.10 and 3.11 of chapter slowromancap1@ of jacod and shiryaev ( 1987 ) , the process }1_{\{\tau> s\}}\;v(dy)\;ds\right)_{t\geq0} ] is equal to }(f(s , y)-f(s , x_{s}))\;v(dy - x_{s})\;ds\right],\ ] ] with the understanding that one of the expectations is well defined if and only if the other one is well defined as well and they are equal . because of the integrability condition of hypothesis [ hyp : integrability ] , the assumptions of lemma [ lem : compensator - second approach ] are in force , and therefore the process -\int_0^t\int_{(-\infty,0]}(f(s , y)-f(s , x_{s}))1_{\{\tau > s\}}\;v(dy - x_{s})\;ds\right)_{t\geq0}\ ] ] is an - local martingale .hence the predictable finite variation part of the third term in is given by }(f(s , y)-f(s , x_{s}))1_{\{\tau > s\}}\;v(dy - x_{s})\;ds\right)_{t\geq0}.\ ] ] * step 4 .* from equations , , and , we conclude that the predictable finite variation part of the process is equal to }1_{\{\tau > s\}}f(s , x_{s})\;v(dy)\;ds \\ & + \int_0^t\int_{(-\infty,0]}(f(s , y)-f(s , x_{s}))1_{\{\tau > s\}}\;v(dy - x_{s})\;ds\big)_{t\geq0}. \end{aligned}\ ] ] notice that in any of the above integrands , the strict inequality of the indicator process can be changed to include an equality , because the lebesgue measure does not charge . from the above equation and since , after some manipulations , it concludes that the process is an - local martingale .hence the predictable finite variation part of the process is equal to where is given by . 1 .regarding theorem [ theorem : g - compensator - second approach ] , a few comments are worth of mentioning . in the proof of the theorem ,the assumption of hypothesis [ hyp : x - integrability1 ] was not used .the operator given by is not the same as dynkin s or it s operators .theorem [ theorem : g - compensator - second approach ] still holds for a \times{\mathbb{r}}) ] , .finally , using lemmas 3.10 and 3.11 of chapter slowromancap1@ of jacod and shiryaev ( 1987 ) , the canonical decomposition of the theorem shows that the process belongs to 2 . note that if the derivative of is bounded , the integrability condition of definition [ assumption : * ] is satisfied . 
in particular, this shows that admits a compensator that is absolutely continuous with respect to the lebesgue measure ; in other words , the following process is an - local martingale )\;ds\right)_{t\geq0}.\ ] ] + also , for a constant function , and a compound poisson process plus drift , theorem [ theorem : g - compensator - second approach ] is a result of theorem 1.3 in guo and zeng ( 2008 ) .in this section our goal is to obtain locally risk - minimizing hedging strategies for the credit sensitive security with payoff in .if the underlying process is a ( local ) martingale , local risk - minimization reduces to risk - minimization and the existence of the hedging strategies is guaranteed by a gkw decomposition .when the process is a semimartingale then risk - minimization is no longer valid .it must be improved to local risk - minimization and the hedging strategies are solved by the fs decomposition .the fs decomposition was first introduced by fllmer and schweizer ( 1991 ) .the existence of the fs decomposition of a square - integrable claim is proved even for a -dimensional semimartingale by schweizer ( 1994 ) , assuming that the process satisfies the sc condition and the mean - variance tradeoff ( mvt ) process is uniformly bounded in ( belongs to ) , and and has jumps strictly bounded from above by 1 . monat and stricker ( 1994 ) prove the existence of the fs decomposition just by assuming that the mvt process is uniformly bounded in and . under this condition , further monat and stricker ( 1995 ) prove also the uniqueness .choulli , krawczyk and stricker ( 1998 ) find necessary and sufficient conditions for the existence and uniqueness of the fs decomposition by introducing a new notion for martingales .they prove that there is an fs decomposition for a square - integrable claim under the semimartingale , if first , the process satisfies an integrability condition and second if it is regular " ( we refer to the original paper for a definition ) . herethe process is the dolans - dade exponential process , see section 8 , chapter slowromancap2@ of protter ( 2004 ) .choulli , vandaele and vanmaele ( 2010 ) discuss the relationship between the gkw and fs decompositions assuming that is strictly positive .then in a general framework , under a weaker assumption that does not require the strict positivity of , they find a closed form of the fs decomposition based on a representation theorem , theorem 2.1 of their paper. their general framework can cover our specific model .however in contrast to theorem 2.1 of their paper , theorem [ theorem : g - compensator - second approach ] of our work leads to more explicit solutions for hedging strategies .in addition , despite current methods that normally start from a payoff and then construct a value process , we somehow turn this around and present self - contained calculations for the components of the fs decomposition .assume that processes and belong to on ] , \in\mathscr a_{loc} ] belongs to .then for all , the following decomposition holds up to an evanescent set ] and specifically for , one obtains where the function is introduced in definition [ assumption : * ] , and the process , , is a local martingale , orthogonal to the martingale part of , i.e. 
.since belongs to class ( * ) , by theorem [ theorem : g - compensator - second approach ] , there are the following - local martingales and on ] .therefore ] , proposition 4.50 , chapter slowromancap1@ of jacod and shiryaev ( 1987 ) shows that \in\mathscr a_{loc} ] and therefore the conditional quadratic variation of , as the compensator of ] , the compensator is the same for the two processes and to get , it is enough to obtain .integration by parts for semimartingales on ] becomes -(f^{(2)}_t-\int_0^t x_{s^-}\;df_s^{(1)}-\int_0^tz_{s^{-}}\;d\lambda_s ) \\ & & \qquad\qquad\qquad = m_t^{(2)}-\int_0^tx_{s^{-}}dm^{(1)}_s-\int_0^tz_{s^{-}}dm_s .\end{aligned}\ ] ] the integrals on the right - hand side of the above equality are local martingales , the process is a predictable finite variation process , and =[m^{(1)},m] ] , almost surely , where is the lebesgue measure .this implies that almost surely we have . on the other hand, satisfies the pide of definition [ assumption : * ] , therefore and the gkw decomposition becomes because functions and satisfy the integrability condition of hypothesis [ hyp : integrability ] , both integrals and are almost surely finite for all .therefore for all , and so the term are well defined and almost surely finite . hence one can move the integral on the left - hand side to the other side of the equality .this gives the decomposition in .finally the decomposition in is obtained by letting in equation and noticing that by definition [ assumption : * ] , , almost surely .[ remark : referee ] in the proof of theorem [ theorem : lhr ] , the pide of definition [ assumption : * ] was used at last to obtain equation . on the other hand , the gkw decomposition is still valid whether or not and satisfy this pide .in fact , the integrability condition of hypothesis [ hyp : integrability ] is all that is needed .note that the calculations of theorem [ theorem : lhr ] , especially equation along with corollary 3.16 of choulli , krawczyk and stricker ( 1998 ) show that is an - local martingale and is a local martingale where .then in comparison to proposition 4.2 of choulli , vandaele and vanmaele ( 2010 ) , this suggests that should be the value of the hedging portfolio .proposition [ prop : hedging strategies ] confirms this . 
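In the (local) martingale case treated in the corollary that follows, the number of shares admits a simple heuristic reading: it is the ratio d<f(.,X), X>/d<X, X>, and for a pure-jump martingale both brackets reduce to integrals against the Lévy measure. The sketch below evaluates this ratio by quadrature for an exponential Lévy measure and a toy value function. It is a standard heuristic offered only as orientation, not the strategy established in Proposition [prop: hedging strategies]; the function f, the measure and all parameter values are placeholders.

import numpy as np
from scipy.integrate import quad

# Illustrative Levy measure on (-inf, 0): nu(dy) = lam * mu * exp(mu*y) dy
# (downward jumps with mean 1/mu and total mass lam); placeholder values only.
lam, mu = 1.0, 2.0
nu_density = lambda y: lam * mu * np.exp(mu * y)

def f(t, x):
    # toy stand-in for the value function (in the paper f solves a PIDE)
    return max(x, 0.0) * np.exp(-0.1 * (1.0 - t))

def shares_heuristic(t, x):
    # d<f(.,X),X>/d<X,X> for a pure-jump martingale: both densities are
    # integrals against the Levy measure
    num, _ = quad(lambda y: (f(t, x + y) - f(t, x)) * y * nu_density(y), -40.0, 0.0)
    den, _ = quad(lambda y: y * y * nu_density(y), -40.0, 0.0)
    return num / den

print("heuristic number of shares at (t, x) = (0.5, 1.0):", shares_heuristic(0.5, 1.0))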
in the special casewhen the process is a local martingale , we have the following corollary .[ corollary : lhr ] assume that hypothesis [ hyp : x - integrability1 ] holds .let the function belongs to class ( * ) and the process ] .assume that is a square integrable special semimartingale with the canonical decomposition .then is the space of all predictable processes such that +(\int_0^t|\theta_s\;da_s|)^2\right]<\infty ] .[ prop : hedging strategies ] assume that hypothesis [ hyp : x - integrability1 ] holds and let the function belongs to class ( * ) .we further suppose that for all , belongs to and the process given by is in .then there exists a locally risk - minimizing - strategy as follows .the number of shares to be invested in the risky asset is given by .the hedging error belongs to .it is orthogonal to and given by the value process of the portfolio associated with the strategy is equal to the number of shares to be invested in the risk - free asset is and finally the cost process is given by the process satisfies the sc condition .therefore the existence of an - strategy is equivalent to the existence of the fs decomposition .notice that for all , belongs to , and so by proposition 4.50 , chapter slowromancap1@ of jacod and shiryaev ( 1987 ) , the process ] and belongs to .now the result follows from proposition 3.4 of schweizer ( 2001 ) .a similar result to proposition [ prop : hedging strategies ] can be obtained when is a local martingale , but with a simpler form for the strategy .notice that although we did not use the melmm method , we have paid the price by involving a pide . in the melmm methodwhen the underlying process is a martingale the problem of finding the hedging strategies is simpler .here , the same happens , if the underlying process is a martingale , the pide to solve for the hedging strategy has a simpler form .the next theorem investigates necessary and sufficient conditions under which the process in theorem [ theorem : lhr ] vanishes .[ theorem : l=0 ] assume that hypothesis [ hyp : x - integrability1 ] holds and the function belongs to class ( * ) .suppose that the integrability condition of hypothesis [ hyp : integrability ] is met on ] and the process ] , if and only if for all and all . in this case , for all , we have the following , up to an evanescent set , and specifically for , one obtains since ] .now the predictability of and uniqueness of conditional quadratic variation give for the second term of , since =[m^{(1)},m] ] , computing the second term follows from where was already computed in the proof of theorem [ theorem : lhr ] , see equations and .the process ] if and only if on \times\mathbb r^+ ] and this gives equations and . by combining theorem [ theorem : l=0 ] and proposition [ prop : hedging strategies ] ,we get the following result that provides a necessary and sufficient condition for the existence of a risk - free product . in the context of jump - diffusion processes , kunita ( 2010 ) answers a similar question for path independent payoffs . [ prop : risk - free product ] assume that hypothesis [ hyp : x - integrability1 ] holds and the function satisfies definition [ assumption : * ] .suppose that the integrability condition of hypothesis [ hyp : integrability ] is met on ] is given in figure [ fig : the exact graph of the function f for exponential jump size distribution ] for , , , and . with exponential jumps .] the function can also be estimated numerically by simulation . 
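As a concrete illustration of the simulation route just mentioned, the sketch below estimates, by plain Monte Carlo, a conditional expectation of the terminal payoff on the event of no default, E[h(X_T) 1_{tau > T} | X_t = x], for the compound-Poisson-with-drift dynamics. This is only an assumed stand-in for the function of the example; the payoff h, the dynamics parameters, the grid of starting points and the sample size are illustrative and are not the values used for the figures.

import numpy as np

rng = np.random.default_rng(1)

def terminal_value(x0, t0, T, b, lam, jump_mean):
    # one path of X started from (t0, x0); returns (X_T, defaulted before T?)
    t, x = t0, x0
    while True:
        dt = rng.exponential(1.0 / lam)
        if t + dt >= T:
            return x + b * (T - t), False
        t += dt
        x += b * dt - rng.exponential(jump_mean)
        if x <= 0:
            return x, True

def estimate_f(x0, t0, T=1.0, b=0.5, lam=1.0, jump_mean=0.4,
               payoff=lambda x: min(x, 1.0), n=20000):
    # Monte Carlo estimate of E[payoff(X_T) 1_{tau > T} | X_{t0} = x0]
    total = 0.0
    for _ in range(n):
        xT, defaulted = terminal_value(x0, t0, T, b, lam, jump_mean)
        if not defaulted:
            total += payoff(xT)
    return total / n

for x0 in (0.2, 0.5, 1.0, 2.0):
    print("x0 =", x0, " estimate:", round(estimate_f(x0, t0=0.0), 4))

Repeating the estimate on a grid of (t, x) values produces a surface analogous to the estimated graph discussed next.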
with the same parameters as above , figure [ fig :the estimated graph of the function f for exponential jump size distribution ] gives the graph of an estimation of on \times[0,0.4] ] into 1000 equal subintervals .it is assumed that the trading dates are given by , for , where .then the number of shares invested in the risky asset is given by }(t),\ ] ] where each is a bounded -measurable random variable that is determined right after the transaction .this is due to the fact that a realistic strategy must be left continuous or predictable .the integral also must be discretized .this is essential to obtain the observed values of the process .figure [ fig : the strategy corresponding to a sample path of the process x ] shows the simulated sample path of the process , for , together with the number of shares invested in the risky asset to be held in each trading period . as figures [ fig : the exact graph of the function f for exponential jump size distribution ] or [ fig : the estimated graph of the function f for exponential jump size distribution ] confirm , the probability of crossing the barrier is relatively high for this process , . for the sample path of the process shown in figure [ fig : the strategy corresponding to a sample path of the process x ] , the default happens at . at this time, the number drops to zero and remains in this state until the maturity of the contract . [ cols="^,^ " , ] similar graphs can be obtained for the number of shares of the risk - free asset , the value of the portfolio , the error term , and the cost process .[ sec : edtpt ] in this section , we discuss the structure and the distribution of the default time .it is assumed that satisfies hypothesis [ hyp : x - integrability1 ] .some of the results of section [ sec : hsdc ] can be helpful to understand the structure of the default time .regarding theorem [ theorem : lhr ] , one can let be the constant function so without almost any effort , we have the following decomposition .[ prop : default time - decomposition1 ] assume that hypothesis [ hyp : x - integrability1 ] holds .then for all , we have the following decomposition up to an evanescent set and specifically for , one obtains where the function is introduced in definition [ assumption : * ] , the process is given by equation , and the process , , is a local martingale orthogonal to the martingale part of , i.e. .notice that since the process is square - integrable , the process \right)_{0\leq t\leq t} ] belongs to .then for all , the following decomposition holds up to an evanescent set and especially for , one obtains where the process is given by and the process , , is a local martingale orthogonal to the process .the result basically follows from , remark [ remark : referee ] , equation , and then by simplifying equation using . as a special caselet , then by taking the expectation of both sides of , we obtain , and is the solution of the pide in proposition [ prop : default time - decomposition2 ] .finding the distribution of the default time using a pide is already known ; for example , see theorem 11.3.3 and its proof in rolski et al .( 1999 ) where this pide is obtained for a compound poisson process plus drift .[ example : finite horizon ruin time ] let be the same process as example [ ex : example1 ] and define the function by . 
apply proposition [ prop : default time - decomposition2 ] for , then , and note that the above function is a special one that makes the operator zero and hence the process is a martingale .this martingale can also be obtained from theorem [ theorem : g - compensator - second approach ] .therefore we have the following identity =f(u).\ ] ][ sec : c ] in this paper , first a canonical decomposition of the processes was studied under some conditions . then based on this result, the locally risk - minimizing approach was carried out to obtain hedging strategies for certain structural defaultable claims under finite variation lvy processes .the analysis is done simultaneously , when the underlying process has jumps , the security is linked to a default event , and the probability measure is a physical one .this approach does not use the melmm method or any type of girsanov s theorem to obtain the strategies .however , the final answer is based on the solution of a pide . besides , some theoretical results in finite horizon ruin time were obtained .the authors are grateful to the associate editor and anonymous referees for their constructive comments .the first author is also very thankful to friedrich hubalek and julia eisenberg for many useful discussions .99 artzner , p. and delbaen , f. , 1995 . default risk insurance and incomplete markets ._ mathematical finance _ , * 5 * , 187195 .biagini , f. and cretarola , a. , 2009 .local risk minimization for defaultable markets ._ mathematical finance _ , * 19 * ( 4 ) , 669689 .choulli , t. , krawczyk , l. and stricker , c. , 1998 .-martingales and their applications in mathematical finance ._ annals of probability _ , * 26 * ( 2 ) , 853876 .choulli , t. , vandaele , n. and vanmaele , m. , 2010 .the fllmer - schweizer decomposition : comparison and description ._ stochastic processes and their applications _ , * 120 * ( 6 ) , 853872 .cont , r. and tankov , p. , 2004 ._ financial modelling with jump processes_. chapman & hall / crc financial mathematics series : boca raton .cont , r. , tankov , p. and voltchkova , e. , 2004 .option pricing models with jumps : integro - differential equations and inverse problems , in _european congress on computational methods in applied sciences and engineering _ , eds .p. neittaanmki et al ., 2428 .fllmer , h. and schweizer , m. , 1991 .hedging of contingent claims under incomplete information ._ applied stochastic analysis _ , * 5 * , 389414 .fllmer , h. and sondermann , d. , 1986 .hedging of non - redundant contingent claims , in _ contributions to mathematical economics _ ,w. hilderbrand and a. mas - collel , 205223 .geman , h. , 2002 .pure jump lvy processes for asset price modelling . _ journal of banking and finance _ , * 26 * , 12971316 .guo , x. and zeng , y. 2008 .intensity process and compensator : a new filtration expansion approach and the jeulin yor formula ._ the annals of applied probability _ , * 18 * ( 1 ) , 120142 .jacod , j. and shiryaev , a.n ., 1987 . _ limit theorems for stochastic processes_. springer : berlin .jarrow , r.a . andturnbull , s.m .pricing derivatives on financial securities subject to credit risk ._ journal of finance _ , * 50 * ( 1 ) , 5386 .kunita , h. , 2010 .it s stochastic calculus : its surprising power for applications ._ stochastic processes and their applications _ , * 120 * ( 5 ) , 622652 .kunita , h. and watanabe , s. , 1967 . on square integrable martingales ._ nagoya mathematical journal _ , * 30 * , 209245 .kyprianou , a.e . 
, 2006 ._ introductory lectures on fluctuations of lvy processes with applications_. springer - verlag : berlin heidelberg .merton , r.c ., 1974 . on the pricing of corporate debt :the risk structure of interest rates ._ journal of finance _ , * 29 * , 449470 .monat , p. and stricker , c. 1994 .fermeture de et de ._ in sminaire de probabilits xxviii , lecture notes in mathematics _ , 189194 .monat , p. and stricker , c. 1995 .fllmer - schweizer decomposition and mean - variance hedging of general claims ._ annals of probability _ , * 23 * , 605628 .pham , h. , 2000 . on quadratic hedging in continuous time ._ mathematical methods of operations research _ , * 51 * ( 2 ) , 315339 .protter , p. , 2004 ._ stochastic integration and differential equations_. springer - verlag : berlin .rolski , t. , schmidli , h. , schmidt , v. and teugels , j. , 1999 . _ stochastic processes for insurance and finance_. john wiley & sons : chichester .sato , k. i. , 1999 ._ lvy processes and infinitely divisible distributions_. cambridge university press : cambridge .schweizer , m. , 1988 .hedging of options in a general semimartingale model ._ dissertation eth zrich 8615_. schweizer , m. , 1991 .option hedging for semimartingales ._ stochastic processes and their applications _ , * 37 * ( 2 ) , 339363 .schweizer , m. , 1994 .approximating random variables by stochastic integrals .annals of probability _ , * 22 * ( 3 ) , 15361575 .schweizer , m. , 2001 . a guided tour through quadratic hedging approaches . in _option pricing , interest rates and risk management , cambridge university press _ , eds .e. jouini , j. cvitanic and m. musiela , 538574 .schweizer , m. , heath , d. and platen , e. , 2001 .a comparison of two quadratic approaches to hedging in incomplete markets ._ mathematical finance _ , * 11 * ( 4 ) , 385413 .in what follows the concept of creeping and some technical results are discussed .[ def : creeping ] assume that the process is a lvy process such that ( resp . ) . let the stopping time be defined as then creeps over ( resp . creeps down ) the level ( resp . ) , when [ theorem : creeping ] suppose that is a bounded variation lvy process which is not a compound poisson process with the characteristic exponent } ] for all predictable processes that are non - negative and bounded ( in the sense that for each such predictable process , there is an upper bound free from and such that for all and ) .then is a local martingale .[ lem : finitevalue ] assume that the process satisfies hypothesis [ hyp : x - integrability1 ] , and is given by .then for every , }1_{\{\tau > s\}}\;v(dy)\;ds ] ( henceforth denoted by ) is well - defined and equal to }1_{\{\tau > s\}}\;v(dy)\;ds ] .however , from fubini - tonelli theorem , we have \cap\{s;x_s=0\})\right]}={\mathbb e\left[\int_0^t1_{\{x_s=0\}}\;ds\right]}=\int_0^t{\mathbb{p}}(x_s=0)\;ds=0,\ ] ] where the last equality is due to continuity of distribution of .therefore , \cap\{s;x_s=0\}) ] , the usual convention of measure theory is applied , i.e. . 
] .on the other hand , the process is quasi - left - continuous ( see lemma 3.2 of kyprianou ( 2006 ) ) which concludes that for every , , hence by a similar argument as above , the following equality holds almost surely \times(-\infty ,-x_s]}1_{\{\tau > s\}}1_{\{x_s>0\}}1_{\{x_{s^-}>0\}}\;m\times v(ds\times dy).\ ] ] * step 2 .* here we show that .note that since is not predictable , quasi - left - continuity is not applicable .first , we have that , and , by remark [ remark : tist ] .hence }\\ & = { \mathbb e\left[\int_0^\infty\int_{\mathbb{r}}\phi(s , x)\;j_x(ds\times dx)\right]}\\ & = { \mathbb e\left[\int_0^\infty\int_{\mathbb{r}}\phi(s , x)\;v(dx)\times ds\right]}\\ & = { \mathbb e\left[m(s\in[0,\infty);x_{s^-}=0)\times v({\mathbb{r}}-{0})\right]},\end{aligned}\ ] ] where is predictable , and so the compensation formula is applicable . by a similar argument to step 1, we deduce that , almost surely .therefore * step 3 . * from step 1 , we almost surely have \times(-\infty,-\inf_{0\leq s < t\wedge\tau}x_s]\}}1_{\{\tau >s\}}1_{\{x_s>0\}}1_{\{x_{s^-}>0\}}\;m\times v(ds\times dy).\ ] ] using step 2 and the fact that the process is cdlg , one can show that almost surely , hence \}}1_{\{y\in(-\infty,-\alpha(t,\tau)]\}}1_{\{\alpha(t,\tau)>0\}}\;m\times v(ds\times dv)\\ & = tv\left(-\infty,-\alpha(t,\tau)\right)1_{\{\alpha(t,\tau)>0\}}.\end{aligned}\ ] ] because is a radon measure this shows that , almost surely .
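A small numerical complement to the creeping discussion: for the bounded variation, spectrally negative specification used throughout (positive drift, only downward jumps), the level 0 can only be crossed by a jump, so on {tau < infinity} the overshoot X_tau should be strictly negative and no path should land exactly on the barrier. The sketch below checks this on simulated paths; the parameters are illustrative and the check is a plausibility test rather than part of the proofs.

import numpy as np

rng = np.random.default_rng(2)

def overshoot_at_default(u, b, lam, jump_mean, T):
    # returns X_tau if default occurs before T, otherwise None
    t, x = 0.0, u
    while True:
        dt = rng.exponential(1.0 / lam)
        if t + dt >= T:
            return None
        t += dt
        x += b * dt - rng.exponential(jump_mean)
        if x <= 0:
            return x

samples = [overshoot_at_default(1.0, 0.5, 1.0, 0.8, 20.0) for _ in range(5000)]
samples = [s for s in samples if s is not None]
print("defaults observed:", len(samples))
print("fraction creeping (landing exactly on 0):", float(np.mean([s == 0.0 for s in samples])))
print("mean overshoot below the barrier:", float(np.mean(samples)))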
|
In the context of a locally risk-minimizing approach, the problem of hedging defaultable claims and their Föllmer-Schweizer decompositions is discussed in a structural model. This is done when the underlying process is a finite variation Lévy process and the claims pay a predetermined payout at maturity, contingent on no prior default. More precisely, in this particular framework, the locally risk-minimizing approach is carried out when the underlying process has jumps, the derivative is linked to a default event, and the probability measure is not necessarily risk-neutral. * keywords * defaultable claims, hedging strategy, locally risk-minimizing, Föllmer-Schweizer decomposition, Galtchouk-Kunita-Watanabe decomposition
|
A metapopulation model refers to populations that are spatially structured into assemblages of local populations connected via migrations. Each local population evolves without spatial structure; it can increase or decrease, survive, become extinct or migrate in different ways. Many biological phenomena may influence the dynamics of a metapopulation; species adopt different strategies to increase their survival probability. See Hanski for more about metapopulations.

Some metapopulations (such as ants) live in colonies that thrive for a while and then collapse. Upon collapse very few individuals survive. The survivors start new colonies at other vertices that thrive until they collapse, and so on. In this paper we introduce a spatial stochastic process to model this population dynamics.

This paper is divided into five sections. In section 2 we define a spatial stochastic process for colonization and collapse and present some of its properties. In section 3 the main results are established. In section 4 we introduce a non-spatial version of our model and compare it to other models known in the literature. Finally, in section 5 we prove the results stated in section 3.

We denote by a connected non-oriented graph of locally bounded degree, where is the set of vertices of , and is the set of edges of . Vertices are considered neighbors if they belong to a common edge. The _ degree _ of a vertex is the number of edges that have it as an endpoint. A graph is _ locally bounded _ if all its vertices have finite degree. A graph is _ regular _ if all its vertices have the same degree. The distance between vertices and is the minimal number of edges that one must cross in order to go from to . By we denote the graph whose set of vertices is and whose set of edges is . Besides, by , , we denote the degree homogeneous tree.

At any time each vertex of may be either occupied by a colony or empty. Each colony is started by a single individual. The number of individuals in each colony behaves as a Yule process (i.e. pure birth) with birth rate . To each vertex is associated a Poisson process with rate 1, in such a way that when the exponential time occurs at a vertex occupied by a colony, that colony collapses and its vertex becomes empty. At the time of collapse each individual in the colony survives with a (presumably small) probability or dies with probability . Each individual that survives tries to found a new colony on one of the nearest neighbor vertices by first picking a vertex at random. If the chosen vertex is occupied, that individual dies; otherwise the individual founds a new colony there. We denote by the colonization and collapse model.

The is a continuous time Markov process whose state space is and whose evolution (status at time ) is denoted by . For a vertex , means that at time there are individuals at the vertex. We consider . Let be a , starting with a finite number of colonies. If we say that _ survives (globally)_. Otherwise, we say that _ dies out (globally)_. If the process starts from an infinite number of colonies, then , which means that survives with probability 1. Still, we can see local death according to the following definition. Let be a . We say that _ dies locally _ if for any vertex there is a finite random time such that for all . Otherwise we say that _ survives locally_. Local death corresponds to a finite number of colonizations at every vertex. It is clear that global death implies local death, but the converse does not always hold.
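Before the example below, a short event-driven simulation may help make the dynamics concrete. The sketch runs the colonization and collapse process on a cycle of n vertices (degree 2): collapse clocks have rate 1, a colony of age a has Yule size distributed as a geometric random variable with parameter exp(-lambda a), each surviving individual picks one of the two neighbours uniformly, and an attempt succeeds only on an empty vertex. The graph, the parameter values and the time horizon are illustrative choices, not those of the paper.

import numpy as np

rng = np.random.default_rng(3)

def simulate_cc_on_cycle(n_vertices=200, lam=3.0, p=0.2, t_max=50.0):
    # Gillespie-type simulation of the colonization and collapse process on Z_n
    founded = {0: 0.0}                       # vertex -> founding time of its colony
    t, times, counts = 0.0, [0.0], [1]
    while founded and t < t_max:
        t += rng.exponential(1.0 / len(founded))        # next collapse (total rate = #colonies)
        v = list(founded)[rng.integers(len(founded))]   # uniformly chosen colony collapses
        age = t - founded.pop(v)                        # its vertex becomes empty
        size = rng.geometric(np.exp(-lam * age))        # Yule(lam) size at that age
        survivors = rng.binomial(size, p)
        for _ in range(survivors):
            w = (v + rng.choice([-1, 1])) % n_vertices  # uniformly chosen neighbour
            if w not in founded:                        # success only on an empty vertex
                founded[w] = t                          # new colony started by one individual
        times.append(t)
        counts.append(len(founded))
    return times, counts

times, counts = simulate_cc_on_cycle()
print("number of colonies at the end of the run:", counts[-1])

Tracking whether a fixed vertex keeps being recolonized, over longer runs and larger graphs, gives a crude empirical picture of local survival versus local death.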
as an example consider a . in this case is a symmetric random walk on , therefore transient , which implies that dies locally but survives globally . by coupling argumentsone can see that is a non - decreasing function of and also of .so we define where is a fixed vertex , and is the law of the process starting with one colony at . the function is non - increasing on .moreover , and .let be a with .we say that exhibits _ phase transition _( on ) if using coupling arguments , we can construct and be two copies of such that for all times , provided that .this monotonic property implies that if survives , also does .moreover , if dies out ( or dies locally ) then does too . observe that , as the number of individuals per vertex is not bounded , it is conceivable that the process survives on a finite graph .next we show that it does not happen .[ p : extinctionfinitevolume ] for any finite graph and starting from any initial configuratiion , the colonization and collapse process dies out .let be the probability at a colony collapse time , zero individuals attempts to found new colonies at neighboring vertices .from the fact that the probability that a yule process , starting from one individual , has individuals at time is we have that let and be the number of vertices of and its maximum degree , respectively . in order to show sufficient conditions for extinction we couple the number of colonies in the original model to , the following continuous time branching process .each individual is independently associated to an exponential random variable of rate 1 in such a way that , when its exponential time occurs , it dies with probability or is replaced by individuals with probability .we also consider a restriction that makes the total number of individuals always smaller or equal than by suppressing the births that would make larger than .let be the number of colonies at time in the colonization and collapse process . at time . moreover , we couple each individual in to a colony in the colonization and collapse process by using the same exponential random variable of rate 1 . when an exponential occurs there are two possibilities . with probability both and decrease by 1 .with probability the process grows by individuals and grows by at most colonies .this is so because in the colonization process we have spatial constraints and attempted colonizations only occur at vertices that are empty .hence , new colonies correspond to births for .we couple each new colony to a new individual in by using the same mean 1 exponential random variable .this coupling yields for all note now that and that is a finite markov process with an absorbing state .hence , dies out with probability 1 .so does and therefore the colonization and collapse process .next , we show sufficient conditions for global extinction and local extinction for the colonization and collapse process on infinite graphs .[ extinction ] let be a -regular graph , a and where is the beta function . *if and , then dies out locally and globally .* let if and for every in then dies locally .* let if and for every in then dies locally .observe that for all and fixed , there exists such that .furthermore , can be expressed in terms of the gauss hypergeometric function ( see luke ) , next , we show sufficient conditions for survival for colonization and collapse process on some infinite graphs .[ t : survival ] for and large enough , the with or survives globally and locally . 
from theorems [ extinction ] and [ t : survival ]it follows that for or and , there exists phase transition ( on ) for , there exists a function such that the survival and extinction regime for can be represented as in figure [ f : transicaoant ] . with , width=226 ]so called catastrophe models have been studied extensively and are quite close to our model , see kapodistria et al . for references on the subject . particularly relevant is the birth and death process with binomial catastrophes , see example 2 in brockwell .we now describe this model .it is a single colony model .each individual gives birth at rate and dies at rate .moreover , catastrophes ( i.e. collapses ) happen at rate .when a catastrophe happens every individual in the colony has a probability of surviving and of dying , independently of each other .brockwell has shown that survival ( i.e. at all times there is at least one individual in the colony ) has positive probability if and only if and hence , there is a critical value for , the single colony model survives if and only if . + next we introduce a non spatial version of our model and compare it to the catastrophe model above .consider a model for which every individual gives birth at rate and dies at rate .we start with a single individual and hence with a single colony .when a colony collapses individuals in the colony survive with probability and die with probability independently of each other .every surviving individual founds a new colony which eventually collapses .colonies collapse independently of each other at rate .the proof in schinazi may be adapted to show that survival has positive probability if and only if >1,\ ] ] where has a rate exponential distribution .it is easy to see that if then the expected value on the l.h.s .is and the inequality holds for any .it is also easy to see that the inequality can not hold if .hence , from now on we assume that after computing the expected value and solving for we get that survival is possible if and only if that is , when the model with multiple colonies has a critical value the multiple colonies model survives if and only if . since for all we have that for any .hence , it is easier for the model with multiple colonies to survive than it is for the model with a single colony . that is , living in multiple smaller colonies is a better survival strategy than living in a single big colony .note that this conclusion was not obvious .the one colony model has a catastrophe rate of while the multiple colonies model has a catastrophe rate of if there are colonies .moreover , a catastrophe is more likely to wipe out a smaller colony than a larger one . on the other hand multiple coloniesgive multiple chances for survival and this turns out to be a critical advantage of the multiple colonies model over the single colony model .for being a process , we know that some colonization attempts will not succeed because the vertex on which the attempted colonization takes place is already occupied .this creates dependence between the number of new colonies created upon the collapse of different colonies . because of this lack of independence, explicit probability computation seems impossible . 
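Although explicit probability computations are out of reach for the spatial model, the non-spatial comparison above is easy to check numerically in a special case. Assume, purely for illustration, that individuals do not die between collapses, so that each colony grows as a pure-birth Yule(lambda) process exactly as in the spatial model. Then the mean number of daughter colonies produced at a collapse is p E[Y_T] with T an Exp(1) collapse time, which equals p/(1-lambda) for lambda < 1, and the colony-level branching process is supercritical precisely when lambda > 1 - p. The sketch below compares this value with a Monte Carlo estimate; it is a plausibility check under the stated assumption, not the computation carried out in the paper.

import numpy as np

rng = np.random.default_rng(4)

def mean_offspring_mc(lam, p, n=200_000):
    # colony lifetime T ~ Exp(1); size at collapse ~ Geometric(exp(-lam*T));
    # each individual survives (and founds its own colony) with probability p
    T = rng.exponential(1.0, size=n)
    sizes = rng.geometric(np.exp(-lam * T))
    return p * sizes.mean()

p = 0.6
for lam in (0.30, 0.40, 0.45):
    analytic = p / (1.0 - lam)        # p * E[exp(lam*T)] for T ~ Exp(1), lam < 1
    print("lam =", lam, " analytic:", round(analytic, 3),
          " monte carlo:", round(mean_offspring_mc(lam, p), 3))
# survival of the multi-colony model is possible exactly when this mean exceeds 1,
# i.e. lam > 1 - p, under the pure-birth assumption made here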
in order to prove theorem [ extinction ] , we introduce a branching - like process which dominates , in a certain sense , and for which explicit computations are possible .this process is denoted by and defined as follows .+ * auxiliary process :* + each vertex of might be empty or occupied by a number of colonies .each colony starts from a single individual .the number of individuals in a colony at time is determined by a pure birth process of rate .each colony is associated to a mean 1 exponential random variable .when the exponential clock rings for a colony it collapses and each individual , independently from everything else , survives with probability or dies with probability .each individual who survives tries to create a new colony at one of the nearest neighbor vertices picked at random . at every neighboring vertex we allow at most one new colony to be created .hence , in the process when a colony placed at vertex collapses , it is replaced by 0,1 , .. , or degree new colonies , each new colony on a distinct neighboring site of . + observe that birth and collapse rates are the same for colonies in and .to each colony created in process corresponds a colony created in the process . but not every colony created in the process has its correspondent in the process .techniques such as in liggett ( * ? ? ?* theorem 1.5 in chapter iii ) can be used to construct the processes and in the same probability space in such a way that , if they start with the same initial configuration , if there is a colony of size on a vertex for then there is at least on colony of size at least for on the same vertex . where the last equality is obtained by the substitution and the definition of the beta function .+ enumerate each neighbor of vertex from 1 to , then where is the indicator function of the event \{a new colony is created in the neighbor of .}. hence , =\sum_{i=1}^m \mathbb{p}[i_i=1]=m\mathbb{p}[i_1=1].\end{aligned}\ ] ] therefore , &=&\sum_{k=1}^\infty\left[1-\left(1-\frac{p}{m}\right)^k\right]\mathbb{p}[y = k ] \nonumber \\ & = & 1-\frac{1}{\lambda } \sum_{k=1}^\infty \left(1-\frac{p}{m}\right)^k b\left(1+\frac{1}{\lambda},k\right ) \end{aligned}\ ] ] where the last equality is obtained by ( [ e1 : lemaaux1 ] ) . substituting ( [ e3 : lemaaux1 ] ) in ( [ e2 : lemaaux1 ] ) we obtain the desired result .+ observe that &=&\mathbb{p}[i_1=1,\ldots , i_m=1]\nonumber\\ & = & 1-\mathbb{p}[i_i=0 \text { for some } i \in { 1,\ldots m } ] \nonumber\\ & \geq & 1-\sum_{i=1}^m \mathbb{p}[i_i=0 ] \nonumber\\ & = & 1-m \mathbb{p}[i_1=0 ] \nonumber\\ & = & 1-\frac{m}{\lambda } \sum_{k=1}^\infty \left(1-\frac{p}{m}\right)^k b\left(1+\frac{1}{\lambda},k\right ) \nonumber\\ & \geq & 1-\frac{m}{\lambda } \sum_{k=1}^\infty \left(1-\frac{p}{m}\right)^k b\left(1,k\right ) \end{aligned}\ ] ] letting in ( [ e4 : lemaaux1 ] ) we obtain the result .consider starting with one colony at the origin and let this colony we call the -th generation .upon collapse of that colony a random number of new colonies are created .denote this random number by .these are the first generation colonies .every first generation colony gives birth ( at different random times ) to a random number of new colonies .these new colonies are the second generation colonies and their total number is denoted by .more generally , let , if then , if then is the total number of colonies created by the previous generation colonies . 
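Before turning to the proofs, the colonization probability appearing in the lemma can be checked numerically. The size of a Yule(lambda) colony observed at an independent Exp(1) collapse time satisfies P[Y = k] = (1/lambda) B(1 + 1/lambda, k), and a fixed neighbour is missed by every individual with probability (1 - p/m)^Y, so P[I_1 = 1] = 1 - (1/lambda) sum_k (1 - p/m)^k B(1 + 1/lambda, k). The sketch below evaluates the series and compares it with a direct Monte Carlo estimate; the parameter values are illustrative.

import numpy as np
from scipy.special import beta

rng = np.random.default_rng(5)

def p_colonize_series(lam, p, m, kmax=100_000):
    # 1 - (1/lam) * sum_{k>=1} (1 - p/m)^k * B(1 + 1/lam, k)
    k = np.arange(1, kmax + 1)
    return 1.0 - (1.0 / lam) * np.sum((1.0 - p / m) ** k * beta(1.0 + 1.0 / lam, k))

def p_colonize_mc(lam, p, m, n=500_000):
    # colony age at collapse T ~ Exp(1); size ~ Geometric(exp(-lam*T));
    # the fixed neighbour is missed by every individual w.p. (1 - p/m)^size
    T = rng.exponential(1.0, size=n)
    sizes = rng.geometric(np.exp(-lam * T))
    return float(np.mean(1.0 - (1.0 - p / m) ** sizes))

lam, p, m = 2.0, 0.1, 4          # illustrative values only
print("series:", round(p_colonize_series(lam, p, m), 4),
      " monte carlo:", round(p_colonize_mc(lam, p, m), 4))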
from the monotonic property of it suffices to show local extinction for the process starting with one individual at each vertex of .we will actually prove local extinction for the process starting with one individual at each vertex of fix a vertex .if at time there exists a colony at vertex ( for the process ) then it must descend from a colony present at time .assume that the colony at descends from a colony at some site .let be the number of colonies at the -th generation of the colony that started at .the process has the same distribution as the process defined above . in order for a descendent of to eventually reach process must have survived for at least generations .this is so because each generation gives birth only on nearest neighbors vertices .the process is a galton - watson process with and mean offspring the borel - cantelli lemma shows that almost surely there are only finitely many s such that descendents from eventually reach .from we know that a process starting from a finite number of individuals dies out almost surely . hence ,after a finite random time there will be no colony at vertex .we start by giving an informal construction of the process .we put a poisson process with rate 1 at every site of .all the poisson processes are independent . at the poisson process jump timesthe colony at the site collapses if there is a colony .if not , nothing happens .we start the process with finitely many colonies .each colony starts with a single individual and is associated to a yule process with birth rate . at collapse time , given that the colony at site has individuals we have a binomial random variable with parameters .the binomial gives the number of potential survivors . if then each survivor attempts to found a new colony on with probability or on also with probability .the attempt succeeds on an empty site .we associate a new yule process to a new colony , independently of everything else .we declare wet if starting with half - full at time then and are also half - full at time .moreover , we want the last event to happen using only the poisson and yule processes inside the box . that is, we consider the process _ restricted _ to . let be the event that every collapse in the finite space - time box is followed by at least one attempted colonization on the left and one on the right of the collapsed site .we claim that for every , we can pick , and large enough so that we now give the outline of why this is true . since collapse times are given by rate one poisson processes on each site of the total number of collapses inside is bounded above by a poisson distribution with rate . hence , with high probabilitythere are less than collapses inside for large enough .we also take large enough so that at every collapsing time the colony will have so many individuals that attempted colonizations to the left and right will be almost certain ( see lemma [ l : lemaaux1] ) . since the number of collapses can be bounded with high probability the probability of the event can be made arbitrarily close to 1 . at time 0 we start the process with the interval half - full .let and be respectively the leftmost and rightmost occupied sites at time . 
conditioned on the event it is easy to see that the interval ] contains and , both of these intervals are half - full .hence , for any we can pick and large enough so that the preceding construction gives a coupling between our colonization and collapse model and an oriented percolation model on .the oriented percolation model is 1-dependent and it is well known that for small enough will be in an infinite wet cluster which contains infinitely many vertices like , see durrett .that fact corresponds , by the coupling , to local survival in the colonization and collapse model . note that the proof was done for the process restricted to the boxes .however , if this model survives then so does the unrestricted model .this is so because the model is attractive and more births can only help survival .+ consider now with .first observe that from the case we have a sufficient condition for local survival for the process restricted to a fixed line of .since the model is attractive this is enough to show local survival on .this is so because the unrestricted model will bring more births ( but not more deaths ) to the sites on the fixed line .linear birth / immigration - death process with binomial catastrophes ._ www.eurandom.tue.nl/reports/2015/006-report.pdf _ and _ to appear in probability in the engineering and informational sciences _
|
Many species live in colonies that thrive for a while and then collapse. Upon collapse very few individuals survive. The survivors start new colonies at other sites that thrive until they collapse, and so on. We introduce a spatial stochastic process on graphs for modeling such population dynamics. We obtain conditions for the population to become extinct or to survive.
|
systems of many coupled dynamical units are of great interest in a wide variety of scientific fields including physics , chemistry and biology . in this paper, we are interested in the case of _ global coupling _ in which each element is coupled to all others . beginning with the work of kuramoto and winfree ,there has been much research on synchrony in systems of globally coupled limit cycle oscillators ( e.g. , ) .possible applications include groups of chirping crickets , flashing fireflies , josephson junction arrays , semiconductor laser arrays , and cardiac pacemaker cells .recently , pikovsky , et al . and sakaguchi have studied the onset of synchronization in systems of globally coupled _ chaotic _ systems . in this paperwe present and apply a formal analysis of the stability of the unsynchronized state ( or `` incoherent state '' ) of a general system of globally coupled heterogeneous , continuous - time dynamical systems . in our treatment ,no _ a priori _ assumption about the dynamics of the individual coupled elements is made .thus the systems can consist of elements whose natural uncoupled dynamics is chaotic or periodic , including the case where both types of elements are present .our treatment is related to the marginal stability investigation of ref . ; see also .the main difference between our work and these previous works is that we treat an ensemble of nonidentical systems , considering both chaotic and limit cycle dynamics , and that our work yields growth rates as well as instability conditions . in addition, our treatment addresses some basic issues of the linear theory ( e.g. , analytic continuation of the dispersion function ) .the organization of the rest of this paper is as follows .the problem is formulated in sec .[ formulation ] , and a formal solution for the dispersion relation is given in sec .[ stabilityanalysis ] . herethe quantity governs the stability of the system ( implies instability ) .the interpretation , analytic properties , and numerical calculation of the dispersion relation are discussed in sec .[ discussion ] along with other issues related to the treatment given in sec .[ stabilityanalysis ] . in sec .[ kuramoto ] , we obtain for the kuramoto model of coupled limit cycle oscillators as an example .section [ numericalexperiments ] presents illustrative numerical examples using three different ensembles of globally coupled lorenz equations .in particular , these ensembles are formed of systems with a parameter that is uniformly distributed in an interval ] includes a pitchfork bifurcation . the second example ( sec . [numexpts : chaotic ] ) is for an apparently chaotic ensemble , while the third example ( sec .[ numexpts : mixed ] ) involves an ensemble that includes both chaotic and periodic elements .finally , sec .[ further ] concludes the paper with further discussion and a summary of the results .we first treat the simplest case , giving generalizations later in the paper ( sec . 
[discussion : generalizations ] ) .we consider dynamical systems of the form where is a -dimensional vector ; is a -dimensional vector function ; is a constant coupling matrix ; is an index labeling components in the ensemble of coupled systems ( in our analytical work we take the limit , while in our numerical work is finite ) ; is the instantaneous average component state ( also referred to as the _ order parameter _ ) , and , for each , is the average of over an infinite number of initial conditions , distributed according to some chosen initial distribution on the attractor of the uncoupled system is a parameter vector specifying the uncoupled dynamics , and is the _ natural measure _ and average of the state of the uncoupled system . that is , to compute , we set , compute the solutions to eq .( [ ithuncoupled ] ) , and obtain from [ xstar ] .% \label{xstara}\ ] ] in what follows we assume that the are randomly chosen from a smooth probability density function . thus , an alternate means of expressing ( [ xstar]a ) is where is the natural invariant measure for the system . by construction, is a solution of the globally coupled system ( [ first ] ) .we call this solution the `` incoherent state '' because the coupling term cancels and the individual oscillators do not affect each other .the question we address is whether the incoherent state is stable . in particular , as a system parameter such as the coupling strength varies , the onset of instability of the incoherent state signals the start of coherent , synchronous behavior of the ensemble .to perform the stability analysis , we assume that the system is in the incoherent state , so that at any fixed time , and for each , is distributed according to the natural measure .we then perturb the orbits , where is an infinitesimal perturbation : where introducing the fundamental matrix for system ( [ perturb ] ) , where , we can write the solution of eq .( [ perturb ] ) as where we use the notation to signify that is evaluated at time .note that , through eq .( [ variationaleq ] ) , depends on the unperturbed orbits of the uncoupled nonlinear system ( [ ithuncoupled ] ) , which are determined by their initial conditions ( distributed according to the natural measure ) . assuming that the perturbed order parameter evolves exponentially in time ( i.e. , ) , eq .( [ intmm ] ) yields where is complex , and thus the dispersion function determining is in order for eqs .( [ dispeq_nodet ] ) and ( [ dispersioneq ] ) to make sense , the right side of eq .( [ mtilde ] ) must be independent of time .as written , it may not be clear that this is so .we now demonstrate this , and express in a more convenient form . to do this, we make the dependence of in eq .( [ mtilde ] ) on the initial condition explicit : . from the definition of , we have where we have introduced . using eq .( [ mminv ] ) in eq .( [ mtilde ] ) we have note that our solution requires that the integral in the above converge . since the growth of with increasing is dominated by , the largest lyapunov exponent for the orbit , we require in contrast with the chaotic case where , an ensemble of periodic attractors has ( for an attracting periodic orbit corresponds to orbit perturbations along the flow ) . 
withthe condition , the integral converges exponentially and uniformly in the quantities over which we average .thus we can interchange the integration and the average , in eq .( [ mtilde_avginside ] ) the only dependence on is through the initial condition .however , since the quantity within angle brackets includes not only an average over , but also an average over initial conditions with respect to the natural measure of each uncoupled attractor , the time invariance of the natural measure ensures that eq .( [ mtilde_avginside ] ) is independent of .in particular , invariance of a measure means that if an infinite cloud of initial conditions is distributed on uncoupled attractor at according to its natural invariant measure , then the distribution of the orbits , as they evolve to any time via the uncoupled dynamics ( eq . ( [ ithuncoupled ] ) ) , continues to give the same distribution as at time .hence , although depends on , when we average over initial conditions , the result is independent of for each .thus we drop the dependence of on the initial values of the and write where , for convenience we have also dropped the subscript .thus is the laplace transform of . this result for be analytically continued into , as explained in sec .[ discussion : analytic ] .note that depends only on the solution of the linearized _ uncoupled _ system ( eq .( [ variationaleq ] ) ) .hence the utility of the dispersion function given by eq .( [ dispersioneq ] ) is that it determines the linearized dynamics of the globally coupled system in terms of those of the individual uncoupled systems .consider the column of , which we denote ] as follows .assume that for each of the uncoupled systems in eq .( [ ithuncoupled ] ) , we have a cloud of an infinite number of initial conditions sprinkled randomly according to the natural measure on the uncoupled attractor .then , at , we apply an equal infinitesimal displacement in the direction to each orbit in the cloud .that is , we replace by , where is a unit vector in -space in the direction . since the particle cloud is displaced from the attractor , it relaxes back to the attractor as time evolves .the quantity \delta _ k ] , the consistency condition then gives . 
setting the real and imaginary parts of this equation equal to zero determines the value of the frequency at instability onset and the critical value of the coupling constant .much previous work has treated the kuramoto problem and its various generalizations using a kinetic equation approach .we have also obtained our main result , eq .( [ dispersioneq ] ) for , by this more traditional method .we briefly outline the procedure below .let be the distribution function ( actually a generalized function ) such that is the fraction of oscillators at time whose state and parameter vectors lie in the infinitesimal volume centered at .note that is time independent , since it is equal to the distribution function of the oscillator parameter vector .the time evolution of is simply obtained from the conservation of probability following the system evolution , =0 , \label{distfunappr}\ ] ] where and , in which is the density corresponding to the natural invariant measure of the uncoupled attractor whose parameter vector is .thus , which is a generalized function , formally satisfies =0.\ ] ] hence , is a time - independent solution of eq .( [ distfunappr ] ) ( the `` incoherent solution '' ) .we examine the stability of the incoherent solution by linearly perturbing , , to obtain =0 \label{distfunappr_2}\ ] ] we can then introduce the laplace transform , solve the transformed version of eq .( [ distfunappr_2 ] ) , and substitute into eq .( [ avgdxasints ] ) to obtain the same dispersion function as in sec .[ stabilityanalysis ] .the calculation is somewhat lengthy , involving the formal solution of eq .( [ distfunappr_2 ] ) by integration along the orbits of the uncoupled system .we will not present the detailed steps here , since the result is the same as that derived in sec .[ stabilityanalysis ] , where it is obtained in what we believe is a more direct manner .the distribution function approach outlined above is similar to the marginal stability treatment of ref . for identical globally chaotic maps . in that case , the frobenius - perron equation plays the role of eq .( [ distfunappr ] ) , and the average over parameters is not present .we note that the computation outlined above is formal in that we treat the distribution functions as if they were ordinary , as opposed to generalized , functions . in this regard, we note that is often extremely singular both in its dependence on ( because the measure on a chaotic attractor is typically a multifractal ) and on ( because chaotic attractors are often structurally unstable ) .we believe that both these sources of singularity are sufficiently mitigated by the regularizing effect of the averaging process over , and that our stability results of sec .[ stabilityanalysis ] are still valid .this remains a problem for future study .we note , however , that for structurally unstable attractors , a smooth distribution of system parameters is likely to be much less problematic than the case of identical ensemble components , . in the case of identical structurally unstable chaotic components ,an arbitrarily small change of can change the character of the base state whose stability is being examined .in contrast , a small change of a smooth distribution results in a small change in the weighting of the ensemble members , but would seem not to cause any qualitative change .we remark that the numerical test cases we study in sec .[ numericalexperiments ] are all structurally unstable . nevertheless , they all agree well with the theory . 
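A brief numerical illustration of how the averaged quantities above can be estimated in practice may be useful before moving on. The sketch below integrates each uncoupled system together with its variational (tangent) equation, so that the tangent vector at time t is exactly the fundamental matrix applied to the initial displacement, averages that vector over an ensemble of initial conditions and parameter values, and finally takes a numerical Laplace transform of the averaged response. The Lorenz ensemble, the uniform parameter spread, the time window, the ensemble size and the sample point s are all illustrative choices, and, as the numerical experiments section notes, fluctuations that grow roughly like exp(h t)/sqrt(N) (with h the largest Lyapunov exponent) limit how far in time the decay of the averaged response can be followed.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(6)
sigma, b = 10.0, 8.0 / 3.0

def lorenz_and_tangent(t, w, r):
    # Lorenz flow together with its linearization (variational equation)
    x, y, z, dx, dy, dz = w
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z,
            sigma * (dy - dx), (r - z) * dx - dy - x * dz, y * dx + x * dy - b * dz]

def averaged_response(r_values, ts, direction=(1.0, 0.0, 0.0), t_transient=40.0):
    # Monte Carlo estimate of the measure- and parameter-averaged tangent vector,
    # i.e. the column of the averaged fundamental matrix picked out by `direction`
    acc = np.zeros((3, len(ts)))
    for r in r_values:
        w0 = list(rng.normal(size=3) + np.array([0.0, 0.0, 25.0])) + [0.0, 0.0, 0.0]
        v0 = solve_ivp(lorenz_and_tangent, (0.0, t_transient), w0,
                       args=(r,), rtol=1e-8).y[:3, -1]        # relax onto the attractor
        sol = solve_ivp(lorenz_and_tangent, (0.0, ts[-1]), list(v0) + list(direction),
                        args=(r,), t_eval=ts, rtol=1e-8)
        acc += sol.y[3:]
    return acc / len(r_values)

ts = np.linspace(0.0, 8.0, 800)
r_values = rng.uniform(28.0, 52.0, size=300)       # illustrative parameter spread
A = averaged_response(r_values, ts)                # A[0] approximates the (1,1) response
s = 0.05 + 8.0j                                    # sample point with Re(s) > 0
M11 = np.trapz(A[0] * np.exp(-s * ts), ts)         # numerical Laplace transform
print("Laplace-transform estimate at s =", s, ":", M11)

Scanning s along a line just to the right of the imaginary axis and looking for zeros of the resulting dispersion function is then a direct, if computationally heavy, way to locate the critical coupling.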
it is natural to ask what happens as a parameter of the system passes from values corresponding to stability to values corresponding to instability .noting that the incoherent state represents a time independent solution of eq .( [ first ] ) , we can seek intuition from standard results on the generic bifurcations of a fixed point of a system of ordinary differential equations ( ; see also ) .there are two linear means by which such a fixed point can become unstable : ( i ) a real solution of can pass from negative to positive values , and ( ii ) two complex conjugate solutions , and , can cross the imaginary -axis , moving from to . in reference to case ( i ) , we note that the incoherent steady state always exists for our formulation in sec .[ formulation ] .in this situation , in the absence of a system symmetry , the generic bifurcation of the system is a transcritical bifurcation ( fig .[ bif_figure](a ) ) . in the presence of symmetry, the existence of a fixed point solution with nonzero may imply the simultaneous existence of a second fixed point solution with nonzero , where these solutions map to each other under the symmetry transformation of the system . in this casethe transcritical bifurcation is ruled out , and the generic bifurcation is the pitchfork bifurcation , which can be either subcritical ( fig . [ bif_figure](b ) ) or supercritical ( fig . [ bif_figure](c ) ) . in case ( ii ) ,where two complex conjugate solutions cross the axis , the generic bifurcations are the subcritical and supercritical hopf bifurcations .( in this case we note that although the individual oscillators may be behaving chaotically , their average coherent behavior is periodic . ) in our numerical experiments in sec .[ numericalexperiments ] we find cases of apparent subcritical and supercritical hopf bifurcations , as well as a case of what we believe is a subcritical pitchfork bifurcation .the reason we believe it is a pitchfork rather than a transcritical bifurcation is that our globally coupled system is a collection of coupled lorenz equations . since the lorenz equations have the symmetry , and since the form of the coupling used in sec .[ numericalexperiments ] respects this symmetry , the transcritical bifurcation is ruled out .one generalization is to consider a general nonlinear form of the coupling such that we replace system ( [ first ] ) by and the role of the uncoupled system ( analogous to eq .( [ ithuncoupled ] ) ) is played by the equation in this more general setting , following the steps of sec .[ stabilityanalysis ] yields where a still more general form of the coupling is for eqs .( [ firstgen ] ) and ( [ first ] ) , a unique incoherent solution always exists and follows from eq .( [ xstar ] ) by solving the nonlinear equations for each with set equal to zero . in the case of eq .( [ gentwo ] ) , the existence of a unique incoherent state is not assured . by definition , is time independent in an incoherent state . thus replacing in eq .( [ gentwo ] ) by a constant vector ,imagine that we solve eq .( [ gentwo ] ) for an infinite number of initial conditions distributed for each on the natural invariant measure of the system , , and then use eq .( [ xstar ] ) to obtain the average .this average depends on , so that .we then define an incoherent solution for eq .( [ gentwo ] ) by setting , so that is the solution of the nonlinear equation generically , such a nonlinear equation may have multiple solutions or no solution . 
in thissetting , if a stable solution of this equation exists for some paramter , then the solution of the nonlinear system ( [ gentwo ] ) ( with appropriate initial conditions ) will approach it for large .if now , as approaches from below , a real eigenvalue approaches zero , then generically corresponds to a saddle - node bifurcation .that is , an unstable incoherent solution merges with the stable incoherent solution , and , for , neither exist . in this case , loss of stability by the hopf bifurcation is , of course , still generic , and the incoherent solution continues to exist before and after the hopf bifurcation . for eq .( [ gentwo ] ) is given by eq .( [ gendisp ] ) with replaced by evaluated at the incoherent state whose stability is being investigated .another interesting case is when the coupling is delayed by some linear deterministic process .that is , the ith oscillator does not sense immediately , but rather responds to the time history of .thus , using eq .( [ firstgen ] ) as an example , the coupling term is replaced by a convolution , in this case a simple analysis shows that eq .( [ gendisp ] ) is replaced by where the simplest form of this would be a discrete delay in which case .( the case of time delayed interaction has been studied for coupled limit cycle oscillators in refs . . )in addition to these generalizations , others are also of interest .for example , the inclusion of noise and coupling `` inertia '' is studied in the limit cycle case in ref .as an example , we now consider a case that reduces to the well - studied kuramoto problem .we consider the ensemble members to be two dimensional , , and characterized by a scalar parameter .for the coupling matrix we choose .thus eq .( [ first ] ) becomes , .we assume that in polar coordinates , the uncoupled dynamical system is given by where . that is , the attractor is the circle , and it attracts orbits on a time scale that is very short compared to the limit cycle period . for it will suffice to calculate for . to do this , as shown in fig .[ kuramoto_figure ] , we consider an initial infinitesimal orbit displacement where are unit vectors . in a short timethis displacement relaxes back to the circle , so that for we have , , , where is the initial value , is evaluated at , and . for later time , we have , and , with evaluated at . in rectangular coordinates this is = \bigg [ \begin{array}{ll } \sin ( \theta _ { oi}+\omega _ it)\sin \theta _ { io } & -\sin ( \theta_{oi}+\omega _ it)\cos \theta _ { io } \nonumber \\[1ex]-\cos ( \theta _ { oi}+\omega _ it)\sin \theta _ { oi } & \cos ( \theta _ { oi}+\omega _ it)\cos \theta _ { oi}\end{array}\bigg ] \bigg [ \begin{array}{c } dx_{oi } \\dy_{oi}\end{array}\bigg ] .\label{bigkura}\ ] ] by definition , the above matrix is appearing in sec .[ stabilityanalysis ] .averaging eq .( [ bigkura ] ) over the invariant measure on the attractor of eqs .( [ anglekura ] ) and ( [ radiuskura ] ) implies averaging over .this yields .\ ] ] averaging the rotation frequencies over the distribution function and taking the laplace transform gives , , \label{mtildekura}\ ] ] where and , in doing the laplace transform , we have neglected the contribution to the laplace integral from the short time interval ( this contribution approaches zero as ) . using eqs .( [ mtildekura ] ) and ( [ qpm ] ) in eq .( [ dispersioneq ] ) then gives , where is the well - known result for the kuramoto model ( e.g. 
, ) , and for is obtained by analytic continuation .note that the property , where denotes complex conjugation , insures that complex roots of come in conjugate pairs .in this section , we illustrate and test our theoretical results using three different ensembles of globally coupled heterogeneous lorenz oscillators .the lorenz equations are given in eq .( [ lorenz ] ) . for our numerical experiments ,we set and , and the ensembles are formed of systems with the parameter uniformly distributed in an interval ] , can be solved for ( note that there may be multiple roots ) .the real part then yields the corresponding critical coupling ^{-1} ] crosses zero more than once , and each root corresponds to a possible solution for .note that the maxima of ] for which =0 ] , where the peaks were not well resolved .attempts to improve the frequency resolution of the laplace transform requires a calculation of for longer time .however , fluctuations due to the finite number of ensemble elements prevent the accurate calculation of the decay of to zero for large times .thus , must be increased , and practical considerations limit the usefulness of this method ( although we note that for this example , the method does yield good values for and ) .similarly , an accurate measurement of the growth rate of the mean field requires very large ensembles and extremely long transients due to the weak phase mixing , and again we found this calculation to be numerically impractical .thus , we demonstrate our growth rate calculations only in the computationally more feasible cases considered below , i.e. the chaotic and mixed ensembles .we now consider an ensemble of lorenz equations with and . from the bifurcation diagram ( see fig .[ fig_chaosbif ] ) , the ensemble seems to consist of predominantly chaotic systems . within this range of parameters, the positive lyapunov exponent varies between approximately and .once again , we examined the destabilization of the ensemble s incoherent state by plotting as a function of .one can see in fig .[ fig_chaosorder ] that this chaotic ensemble has a hysteretic transition at . on the positive side ,the incoherent state is stable up to the largest value tested ( ) .examining the temporal dependence of the instantaneous order parameter near the transition at , we find that the order parameter jumps to one of two stable fixed points on opposite lobes of the lorenz attractor ( see fig .[ lorenzlobes ] ) . as we have discussed previously ( sec .[ discussion : bifurcations ] ) , this subcritical transition is expected to be a pitchfork bifurcation rather than a transcritical bifurcation due to the intrinsic symmetry of the lorenz equations .we calculated as a function of by examining the uncoupled ensemble under periodic perturbation .for this case , we chose the forcing strength to be and .( we varied the value of by an order of magnitude from 0.5 to 5 and the result does not seem to change significantly ; this indicates that the perturbation is sufficiently linear . )figure [ fig_chaosm11 ] shows the real and the imaginary parts of versus for this case . as compared with the previous example , the frequency response curve is simpler . ] has a prominent peak . 
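the periodic-perturbation measurement just described can be sketched compactly: drive every member of the uncoupled ensemble with a weak sinusoid and project the ensemble-averaged response onto the driving frequency. in the sketch below the forcing enters the x equation, sigma = 10, b = 8/3, and the spread of r are all assumptions made for illustration; the forcing amplitude is of order one, within the 0.5 to 5 range mentioned above. the resulting complex number corresponds to the plotted response curves only up to normalization and phase convention.

```python
import numpy as np

def lorenz_rhs(v, r, forcing):
    # sigma = 10 and b = 8/3 assumed; the sinusoidal forcing is added to the
    # x component, an assumption about the direction in which the coupling acts
    x, y, z = v[:, 0], v[:, 1], v[:, 2]
    return np.column_stack([10.0 * (y - x) + forcing,
                            r * x - y - x * z,
                            x * y - (8.0 / 3.0) * z])

def frequency_response(nu, r_vals, eps=1.0, dt=0.002, periods=60, rng=None):
    """euler-integrate the uncoupled, periodically driven ensemble and return the
    complex amplitude of the ensemble-averaged x response per unit forcing."""
    rng = rng or np.random.default_rng(3)
    v = rng.normal(size=(len(r_vals), 3)) * 5.0
    steps = int(periods * 2.0 * np.pi / nu / dt)
    acc_s = acc_c = window = 0.0
    for k in range(steps):
        t = k * dt
        v += dt * lorenz_rhs(v, r_vals, eps * np.sin(nu * t))
        if k > steps // 4:                      # discard an initial transient
            mx = v[:, 0].mean()
            acc_s += mx * np.sin(nu * t) * dt   # in-phase part
            acc_c += mx * np.cos(nu * t) * dt   # quadrature part
            window += dt
    return (2.0 / window) * (acc_s - 1j * acc_c) / eps   # phase convention arbitrary

r_vals = np.random.default_rng(4).uniform(28.0, 34.0, 200)   # assumed parameter spread
for nu in [5.0, 8.0, 11.0]:
    print(nu, frequency_response(nu, r_vals))
```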
using eq .( [ kstar ] ) , this gives a critical coupling value of .this result agrees well with the threshold of instability for the incoherent state observed in the globally coupled ensemble .we have also attempted to calculate for this ( chaotic ) ensemble using the other two methods described in sec .[ discussion : numerically ] .these are : the linear method , in which the linearized equation [ eq . ( [ variationaleq ] ) ] is solved for and the result is averaged , and the impulse - response method , in which the orbits on the attractor are displaced by a small amount in the direction and the rate of decay back to zero is measured .the results from these methods are included in fig .[ fig_chaosm11 ] with filled and open circles , respectively .while all methods agree reasonably well for , the important narrow peak at is missing from the results of both the linear and the impulse - response methods .this discrepancy can be understood by observing that the peak at represents long - time dynamics .in particular , the half - width of this peak has , corresponding to a decay time scale of .in contrast , the spectrum , with this peak deleted , has a half - width of , corresponding to a much shorter time scale of approximately .the linear and impulse - response methods apparently resolve the short time scale well , but fail to adequately resolve the longer time scale .this is due to the problem that we have discussed in sec .[ discussion : numerically ] . for the linear method ,the individual grow exponentially in time , and hence the ensemble average requires a delicate canceling in order to remain valid for large time .figure [ fig_chaosm11 t ] shows a graph of for the linear method in grey . initially decays exponentially , but for , it begins to grow as the balanced canceling breaks down due to the fact that only a finite number of elements is used in the calculation .thus , when obtaining the laplace transform , we only integrated over the reliable range , i.e. 
.this had the effect of leaving out the slower decay , which is vital for determining the critical coupling strength for the onset of instability in this case .in contrast , when is measured using the impulse response method , it does not ultimately diverge exponentially .however , its exponential decay is masked by fluctuations for , again due to finite ; see the black curve in fig .[ fig_chaosm11 t ] .we found that the frequency response method is more reliable because the temporal averaging effectively reduces statistical noise .therefore , we were able to obtain a good estimate of with only a moderate number of oscillators .the cost for these improved statistics is that each calculation is for only one particular value of .this is in contrast to the impulse response method , in which the laplace transform of gives the entire dependence of on at once .some of the comparative advantages and drawbacks among the three numerical methods in calculating can be clearly seen in this example .the growth rate of the incoherent state , when it first becomes unstable , can be estimated from / \partial \omega ] occur very near , but not exactly at the roots of =0 ] near the two biggest peaks give and .the predicted transition frequency associated with the supercritical transition at is approximately .these predictions agree well with the observed quantities obtained using the fully nonlinear , globally coupled ensemble .we have also compared the actual growth rate obtained from the globally coupled ensemble with its predicted value calculated from using the same procedure described above .figure [ fig_mixgamma ] is a graph of vs. for the transition at . the predicted slope , calculated using the frequency response method using eq .( [ freqresponse ] ) , is ; this agrees well with the measured growth rates .we have presented a general formulation for the determination of the stability of the incoherent state of a globally coupled system of continuous time dynamical systems .this formulation gives the dispersion function in terms of a matrix which specifies the laplace transform of the time evolution of the centroid of the _ uncoupled _( ) ensemble to an infinitesimal displacement .thus the stability of the coupled system is determined by properties of the collection of individual uncoupled elements .the formulation is valid for any type of dynamical behavior of the uncoupled elements .thus ensembles whose members are periodic , chaotic , or a mixture of both can be treated .we discuss the analytic properties of and its numerical determination .we find that these are connected : analytic continuation of to the axis is necessary for the application of the analysis , but , in the chaotic case ( as discussed in secs .[ discussion : mtilde ] and [ numericalexperiments ] ) leads to numerical difficulties in obtaining .we illustrate our theory by application to the kuramoto problem and by application to three different ensembles of globally coupled lorenz systems .in particular , our lorenz ensembles include a case where all of the uncoupled ensemble members are periodic with a pitchfork bifurcation of the uncoupled lorenz equations within the parameter range of the ensemble , a case where all the ensemble members appear to be chaotic , and a case where the parameter range of the ensemble yields chaos with a window of periodic behavior .these numerical experiments illustrate the validity of our approach , as well as the practical limitations to numerical application .y. 
kuramoto , in _ international symposium on mathematical problems in theoretical physics _ , edited by h. araki , lecture notes in physics , vol .39 ( springer , berlin , 1975 ) ; _ chemical oscillators , waves and turbulence _( springer , berlin , 1984 ) . e. ott , _ chaos in dynamical systems _ ( cambridge univ .press , 1993 ) , chapter 3 . for a given , the natural measure for an attractor of the uncoupled system the fraction of time that a _ typical _ infinitely long orbit originating in ( the basin of attraction of ) spends in a subset of state space . by the word typical we refer to the supposition that there is a set of initial conditions in where this set has lebesgue measure ( roughly volume ) equal to the lebesgue measure of and such that each initial condition in this set gives the same value ( i.e. , the natural measure ) for the fraction of time spent in by the resulting orbit .another possible technique that appears to be attractive for ensembles of units whose uncoupled dynamics is chaotic is based on the unstable periodic orbits embedded in the chaotic attractors in conjunction with cycle expansions .this approach is currently under investigation .
|
a general stability analysis is presented for the determination of the transition from incoherent to coherent behavior in an ensemble of globally coupled , heterogeneous , continuous - time dynamical systems . the formalism allows for the simultaneous presence of ensemble members exhibiting chaotic and periodic behavior , and , in a special case , yields the kuramoto model for globally coupled periodic oscillators described by a phase . numerical experiments using different types of ensembles of lorenz equations with a distribution of parameters are presented .
|
the study of networks ( i.e. graph theory ) is the study of the relationship between vertices ( i.e. nodes ) as defined by the edges ( i.e. arcs ) connecting them . in path analysis ,a path metric function maps an ordered vertex pair into a real number , where that real number is the length of the path connecting to the two vertices .metrics that utilize the shortest path between two vertices in their calculation are called geodesic metrics .the geodesic metrics that will be reviewed in this article are shortest path , eccentricity , radius , diameter , betweenness centrality , and closeness centrality . if is a single - relational network , then , where is the set of vertices and is a subset of the product of . in a single - relational networkall the edges have a single , homogenous meaning .because an edge in a single - relational network is an element of the product of , it does not have the ability to represent the type of relationships that exist between the two vertices it connects .an edge can only denote that there is a relationship . without a distinguishing label ,all edges in such networks have a single meaning .thus , they are called single - relational networks .is the union of two disjoint vertex sets .thus , edges from set to set ( such that ) can have a different meaning than the edges from to .also , theoretically , it is possible to represent edge labels as a topological feature of the graph structure . in other words, there exists an injective function ( though not surjective ) from the set of semantic networks to the set of single - relational networks that preserves the meaning of the edge labels . ]while a single - relational network supports the representation of a homogeneous set of relationships , a semantic network supports the representation of a heterogeneous set of relationships .for instance , in a single - relational network it is possible to represent humans connected to one another by friendship edges ; in a semantic network , it is possible to represent humans connected to one another by friendship , kinship , collaboration , communication , etc . relationships . a semantic network denoted can be defined as a set of single - relational networks such that , where and for any , .the meaning of a relationship in is determined by its set . perhaps a more convenient semantic network representation andthe one to be used throughout the remainder of this article is that of the triple list where and is a set of edge labels . a single edge in this representationis denoted by a triple , where vertex is connected to vertex by the edge label . in some cases , it is possible to isolate sub - networks of a semantic network and represent the isolated network in an unlabeled form .unlabeled geodesic metrics can be used to compute on the isolated component . however , in many cases , the complexity of the path description does not support an unlabeled representation .these scenarios require semantically aware " geodesic metrics that respect a semantic network s ontology ( i.e. the vertex classes and edge types ) .a semantic network is not simply a directed labeled network .it is a high - level representation of complex objects and their relationship to one another according to ontological constraints .there exist various algorithms to study semantically typed paths in a network . 
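the triple-list definition can be made concrete in a few lines of python; the resource names follow the lanl examples used later in the article, and the projection function shows how a single-relational network is recovered from a semantic one by fixing an edge label.

```python
# a semantic network as a set of (subject, predicate, object) triples
semantic_net = {
    ("lanl:marko", "lanl:hasfriend", "lanl:jhw"),
    ("lanl:jhw", "lanl:hasfriend", "lanl:norman"),
    ("lanl:marko", "lanl:worksfor", "lanl:lanl"),
    ("lanl:jhw", "lanl:worksfor", "lanl:lanl"),
}

def single_relational(triples, label):
    """project the labeled network onto one edge type, discarding the labels."""
    return {(s, o) for s, p, o in triples if p == label}

print(single_relational(semantic_net, "lanl:hasfriend"))
```

the path algorithms for semantically typed networks discussed next operate over exactly this kind of labeled structure.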
such algorithms assume only a path between two vertices and do not investigate other features of the intervening vertices .the benefit of the grammar - based geodesic model presented in this article is that complex paths can be represented to make use of path bookkeeping . "such bookkeeping investigates intervening vertices even though they may not be included in the final path solution .for example , it may be important to determine a set of friendship " paths between two human vertices , where every intervening human works for a particular organization and has a particular position in that organization .while a set of friendship paths is the result of the function , the path detours to determine employer and position are not . the technique for doingthis is the primary contribution of this article .a secondary contribution is the unification of the grammar - based model proposed here with the grammar - based model proposed in for calculating stationary probability distributions in a subset of the full semantic network ( e.g. eigenvector centrality and pagerank ) . with the grammar - based model ,a single framework exists that ports many of the popular single - relational network analysis algorithms to the semantic network domain .moreover , an algebra for mapping semantic networks to single - relational networks has been presented in and can be used to meaningfully execute standard single - relational network analysis algorithms on distortions of the original semantic network .the semantic web community does not often employee the standard suite of network analysis algorithms .this is perhaps due to the fact that the semantic web is generally seen as a knowledge - base grounded in description logics rather than graph- or network - theory .when the semantic web community adopts a network interpretation , it can benefit from the extensive body of work found in the network analysis literature .for example , recommendation , ranking , and decision making are a few of the types of semantic web applications that can benefit from a network perspective . in other words, graph / network theoretic techniques can be used to yield innovative solutions on the semantic web .the first half of this article will define a popular set of geodesic metrics for single - relational networks .it will become apparent from these definitions , that the more advanced geodesics rely on the shortest path metric .the second half of the article will present the grammar - based model for calculating a meaningful shortest path in a semantic network .the other geodesics follow from this definition .this section will review a collection of popular geodesic metrics used to characterize a path , a vertex , and a network .the following list enumerates these metrics and identifies whether they are path , vertex , or network metrics : * in- and out - degree : vertex metric * shortest path : path metric * eccentricity : vertex metric * radius : network metric * diameter : network metric * closeness : vertex metric * betweenness : vertex metric .it is worth noting that besides in- and out - degree , all the metrics mentioned utilize a path function to determine the set of paths between any two vertices in , where is a set of paths .the premise of this article is that once a path function is defined for a semantic network , then all of the other metrics are directly derived from it . in the semantic network path function , returns the number of paths between two vertices according to a user - defined grammar . 
before discussing the grammar - based geodesic model for semantic networks , this section will review the geodesic metrics in the domain of single - relational networks .the simplest structural metric for a vertex is the vertex s degree .while this is not a geodesic metric , it is presented as the concept will become necessary in the later section regarding semantic networks . for directed networks , any vertex has both an in - degree and an out - degree .the set of edges in that have as either its in- or out - edge is denoted and , respectively . if and then , is the subset of edges in incoming to and is the subset of edges outgoing from .the cardinality of the sets is the in- and out - degree of the vertex , denoted and , respectively . the shortest path metric is the foundation for all other geodesic metrics .this metric is defined for any two vertices such that the sink vertex is reachable from the source vertex in .if is unreachable from , the shortest path between and is undefined .the shortest path between any two vertices and in an unweighted network is the smallest of the set of all paths between and . if is a function that takes two vertices and returns a set of paths where for any , , then the shortest path between and is the , where returns the smallest value of its domain .the shortest path function is denoted with the function rule it is important to subtract from the path length since a path is defined as the set of edges traversed , not the set of vertices traversed .thus , for the path , the is , but the path length is .note that returns the set of all paths between and .of course , with the potential for loops , this function could return a .therefore , in many cases , it is important to not consider all paths , but just those paths that have the same cardinality as the shortest path currently found and thus are shortest paths themselves .it is noted that all the remaining geodesic metrics require only the shortest path between and .the radius and diameter of a network require the determination of the eccentricity of every vertex in .the eccentricity metric requires the calculation of shortest path calculations of a particular vertex .the eccentricity of a vertex is the largest shortest path between and all other vertices in such that the eccentricity function has the rule where returns the largest value of its domain .the radius of the network is the minimum eccentricity of all vertices in .the function has the rule finally , the diameter of a network is the maximum eccentricity of the vertices in .the function has the rule closeness and betweenness centrality are popular network metrics for determining the centralness " of a vertex .closeness centrality is defined as the mean shortest path between some vertex and all the other vertices in .the function denotes the closeness function and has the rule betweenness centrality is defined for a vertex in .the betweenness of is the number of shortest paths that exist between all vertices and that have in their path divided by the total number of shortest paths between and , where . 
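the geodesic definitions given so far (shortest path, eccentricity, radius, diameter and closeness; the betweenness count is completed in the next paragraph) translate directly into a breadth-first sketch for an unweighted single-relational network:

```python
from collections import deque

def geodesic_distances(adj, source):
    """unweighted shortest-path (geodesic) lengths from source via breadth-first search."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def eccentricity(adj, v):
    return max(geodesic_distances(adj, v).values())     # largest geodesic from v

def radius(adj):
    return min(eccentricity(adj, v) for v in adj)

def diameter(adj):
    return max(eccentricity(adj, v) for v in adj)

def closeness(adj, v):
    d = geodesic_distances(adj, v)
    return sum(l for u, l in d.items() if u != v) / (len(d) - 1)   # mean geodesic from v

# toy directed, single-relational network (strongly connected so all metrics are defined)
adj = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": ["a"]}
print(eccentricity(adj, "a"), radius(adj), diameter(adj), closeness(adj, "a"))
```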
if is a function that returns the set of shortest paths between any two vertices and such that and is the set of shortest paths between two vertices and that have in the path , where then the betweenness function has the rule it is worth noting that in , the author articulates the point that the shortest paths between two vertices is not necessarily the only mechanism of interaction between two vertices .thus , the author develops a variation of the betweenness metric that favors shortest paths , but does not utilize only shortest paths in its betweenness calculation .a semantic network is a directed labeled graph . however , a semantic network is perhaps best interpreted in an object - oriented fashion where complex objects ( i.e. multi - vertex elements ) are connected to one another according to various relationship types . while a particular human is represented by a vertex , metadata associated with that individualis represented in the vertices adjacent to the human vertex ( e.g. the human s name , address , age , etc . ) . in many instances , particular metadata vertices are sinks ( i.e. no outgoing edges ) . in other cases ,the metadata of an individual is another complex object such as the friend of that human or the human s employer .the topological features of a semantic network are represented by a data type abstraction called an ontology ( i.e. a semantic network schema ) .a popular semantic network representation is the resource description framework ( rdf ) .rdf schema ( rdfs ) is a schema language for developing rdf ontologies in rdf .this article will present all of its concepts from the perspective of rdf and rdfs primarily due to the fact that these are standard data models with a large application - base .however , these ideas can be generalized to any semantic network representation .this is due to the fact that one can remove the constraint of using uris , literals , and blank nodes when labeling vertices and edges .when such a constraint is lifted , then a directed , vertex / edge - labeled , multi - graph results . in the semantic network literature ,such an abstract graph type is named a semantic network .the first subsection will briefly introduce the concept of rdf and rdfs before describing an ontology for designing geodesic grammars .the rdf data model represents a semantic network as a triple list where the vertices and edges ( both called resources ) are uniform resource identifiers ( uri ) , blank nodes , or literals .if the set of all uris is denoted , the set of all blank nodes is denoted , and the set of all literals is denoted , then an rdf network is the triple list such that the first resource of a triple is called the subject , the second is called the predicate , and the third is called the object .a single triple is denoted as .all uris are namespaced such that the uri http://www.lanl.gov#marko has a namespace of http://www.lanl.gov # and a fragment of marko . in many cases , for document and diagram clarity, a namespace is prefixed in such a way that the previous uri is represented as lanl : marko . in this article , the namespaces for rdf and rdfswill be prefixed as rdf and rdfs , respectively .blank nodes are anonymous " vertices and are not discussed in this article as they will not directly pertain to any of the concepts presented .literals are any resource that denotes a string , integer , floating point , date , etc . the full taxonomy of literal types is presented in . 
in rdfs, every vertex is tied to some platonic category representing its rdfs : class using the rdf : type property .moreover , every edge label has domain / range restrictions that determine the vertex types that the edge labels can be used in conjunction with .because the instance of an ontology obeys the defined constraints of the ontology , the modeler has an abstract representation of the topological features of the semantic network instance in terms of classes ( vertices ) and properties ( edge labels ) .for example , states that any resource of type lanl : human can have a friend that is only of type lanl : human .therefore , the following three triples are legal according to the simple ontology above : however , the three statements are not legal according to the ontology because lanl : fluffy is a lanl : dog and a lanl : human can not befriend anything that is not a lanl : human .the ontology and legal instance of the previous example are diagrammed in figure [ fig : friend - full ] .however , for the sake of brevity and clarity of the diagram , the domain and range properties of a class can be abbreviated as in figure [ fig : friend - abbrev ] .the abbreviated ontological diagram will be used throughout the remainder of this article .it is important to note that both the rdfs ontology and rdf instance network are represented in rdf and thus , both instances and ontology are contained within a single semantic network .the full representation of all triples in the ontology and instance layers of the semantic network example.,scaledwidth=50.0% ] the abbreviated representation of the ontology and instance layers of the semantic network example.,scaledwidth=50.0% ] finally , an important concept in rdfs is rdfs : class and rdf : property subsumption as denoted by the rdfs : subclassof and rdfs : subpropertyof predicates , respectively . with the rdfs : subclassof and rdfs : subpropertyof predicates , it is possible to generate concept hierarchies . for the purposes of this article , it is only necessary to understand that subsumption is transitive such that if then it can be inferred that because lanl : fluffy is a lanl : dog , lanl : fluffy is also both a lanl : mammal and a lanl : animal .transitivity exists for the rdfs : subpropertyof predicate as well. this subsection will define the rdfs ontology for creating a grammar .any user - defined grammar must obey this ontology .the grammar constructed from this ontology determines the meaning of the value returned by a semantically aware " geodesic function .any grammar instance is denoted .the instance of a grammar is represented in rdf and the ontology of the grammar is represented in rdfs .figure [ fig : ont - rwr ] diagrams the ontology of the geodesic grammar , where edges represent properties whose tail is the domain of the property and whose head is the range of the property .furthermore , the dashed edges denote the rdfs property rdfs : subclassof .the ontology for a geodesic path grammar.,scaledwidth=80.0% ] the remainder of this section will present an informal review of the major components of the grammar ontology . the next section will formalize all aspects of the resources diagrammed in figure [ fig : ont - rwr ] .grammar - based geodesics rely on a discrete walker .the walker utilizes a grammar to constrain its path through .the combination of a walker and a is a breadth - first search through a particular sub - network of . 
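the subsumption inference used in the lanl:fluffy example is a transitive closure over rdfs:subclassof edges; a minimal sketch:

```python
def super_classes(sub_class_of, cls):
    """follow rdfs:subclassof edges transitively upward from cls."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        for parent in sub_class_of.get(c, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

sub_class_of = {"lanl:dog": ["lanl:mammal"], "lanl:mammal": ["lanl:animal"]}
rdf_type = {"lanl:fluffy": ["lanl:dog"]}

def inferred_types(resource):
    direct = set(rdf_type.get(resource, ()))
    return direct | {c for d in direct for c in super_classes(sub_class_of, d)}

print(inferred_types("lanl:fluffy"))   # dog, mammal and animal
```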
that sub - network is abstractly represented by , but not fully realized until after the execution of on .any is a collection of rwr : context resources connected to one another by rwr : traverse resources .each rwr : context is an abstract representation of a legal step along a path that a walker can traverse on its way from source vertex to sink vertex .an rwr : context has an associated rwr : forresource property .the object of that property determines the set of legal vertices that that the rwr : context can resolve to .only when a walker utilizes a grammar do the rwr : contexts have a resolution to a particular vertex in .rwr : context resolution is further constrained by the rwr : rules and rwr : attributes of the rwr : context in .two important data structures that are used in a grammar are the rdf : bag and rdf : seq .an rdf : bag is an unordered set of elements where each element of the rdf : bag is the object of a triple with predicate rdf : li .an rdf : seq is an ordered set of elements where each element of the rdf : seq is the object of a triple with a predicate that is an rdfs : subpropertyof rdfs : containermembershipproperty ( i.e. rdf:_1 , rdf:_2 , rdf:_3 , etc . ) . there exist two rwr : rules ( an rdfs : subclassof rdf : seq ) : rwr : pathcount and rwr : traverse .the rwr : pathcount rule instructs the walker to record the vertex , edge , and directionality in the ordered path set that is ultimately returned by the grammar - based geodesic algorithm .the rwr : traverse rule instructs the walker to select some outgoing or incoming edge of its current vertex as defined by the set of rwr : edges associated with the rwr : traverse rule . if more than one choice should exist for the walker , the walker chooses both by cloning itself and having each clone take a unique branch of the path .there exist three rwr : attributes ( an rdfs : subclassof rdf : bag ) : rwr : notever , rwr : is , and rwr : not . in some instances , when traversing to a new vertex , the walker must respect the fact that it has already seen a particular vertex .the rwr : notever attribute ensures that the resolution of the rwr : context is not a previously seen vertex , thus preventing infinite loops .the rwr : is attribute allows the walker to explore an area around a particular vertex ( i.e. other paths not directly associated with the return path ) while still ensuring that the walker returns to the original vertex .finally , the rwr : not attribute ensures that the walker does not return to a _previously seen vertex .if vertex is the head of the path ( i.e. source ) , then it is defined in an rwr : entrycontext .if vertex is the tail of the path ( i.e. sink ) , then it is defined in an rwr : exitcontext .the purpose of the walker is to move from source to sink in by respecting the rwr : rules and rwr : attributes of the rwr : contexts that it traverses in .figure [ fig : system ] diagrams the relationship between a walker , its grammar , and its network instance .the grammar acts as a user - defined program " that the walker executes , where the language of that program is defined by the grammar ontology .a walker walks both and .,scaledwidth=60.0% ] the next section will formalize the grammar .once a grammar has been defined according to the constraints of the ontology diagrammed in figure [ fig : ont - rwr ] , the path function can be executed . 
the function returns the set of all paths between any two vertices .this section will define the rules by which interprets its domain parameters and ultimately derives a path set .the grammar - based model requires the walker to query such that it can determine the set of legal vertices and edges that it can traverse .moreover , the walker must be able to query in order to know which rwr : rules and rwr : attributes to respect .the mechanism by which the walker queries and is called the symbol binding model .for example , the following query would fill the unordered set with all people that have lanl : jhw as their friend and who work for lanl : lanl .a more advanced query example is in the above query , the set is an unordered set of ordered pairs of friends where one of the friends works at lanl : lanl and the other works at lanl : pnnl .the path function is supplied with a start vertex , an end vertex , and a grammar . upon the execution of , a single walker , denoted , is created and added to the set of walkers , where at , , and is in discrete time .the set may increase in size over the course of the algorithm as clone particles are created where multiple legal options exist for traversal .every walker has two ordered multi - sets associated with it : and .the multi - set is an ordered set of vertices , edges , and edge directions traversed by , where is the vertex location of at time step .the element denotes the predicate ( i.e. edge label ) used by to traverse to and the element denotes the directionality of the predicate used in that traversal .for example , suppose : marko , lanl : hasfriend , + , lanl : jhw , lanl : hasfriend , + , lanl : norman .in the presented path , , , , , , , and . note that and .the example path is diagrammed in figure [ fig : path - example ] .an example of a path.,scaledwidth=85.0% ] the multi - set is an ordered set of vertices , edges , and directionalities that are recorded by along its path through .the set maintains the same indexing schema of and as .the main distinction between and is that is the returned path , _ not _ the actual path of .if reaches its destination rwr : exitcontext in and thus vertex , then the set is one of the elements in the return set of the path function .thus , for the grammar - based geodesic model , the is necessary to transform the length of into an index in time ( due to the and notation convention ) because the set includes edge labels and edge directionality as well as vertices .the initial walker starts its journey at the rwr : entrycontext in and the vertex in .thus , . as in figure [ fig : ont - rwr ] , the rwr : entrycontext must be the domain of the predicate rwr : forresource whose range is .an rwr : entrycontext must have no rwr : attributes and must have the rule rwr : pathcount such that . from and the rwr :entrycontext in , will move to some new and some new rwr : context in . before discussing the rwr :traverse rule , it is necessary to discuss the attributes that determine the set of legal edges that can be traversed by .the rwr : notever attribute is useful for ensuring that path loops do not occur and thus cause the path algorithm to run indefinitely . 
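the symbol-binding queries described above are easy to emulate over a plain triple set; the toy instance below is assembled only for illustration and reuses the lanl resource names.

```python
triples = {
    ("lanl:marko", "lanl:hasfriend", "lanl:jhw"),
    ("lanl:marko", "lanl:worksfor", "lanl:lanl"),
    ("lanl:jbollen", "lanl:hasfriend", "lanl:jhw"),
    ("lanl:jbollen", "lanl:worksfor", "lanl:lanl"),
    ("lanl:jhw", "lanl:worksfor", "lanl:lanl"),
}

def subjects(triples, p, o):
    """all subjects s such that the triple (s, p, o) is asserted."""
    return {s for s, p2, o2 in triples if p2 == p and o2 == o}

# all people that have lanl:jhw as their friend and who work for lanl:lanl
answer = subjects(triples, "lanl:hasfriend", "lanl:jhw") \
         & subjects(triples, "lanl:worksfor", "lanl:lanl")
print(answer)
```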
if is trying to traverse to a new rwr : context at and that rwr : context has the rwr : notever attribute , then the set is the set of vertices in for which can not legally resolve the rwr : context to .note that the definition of does not include edge labels or edge directionality , only vertices .this is due to the fact that the time index ( ) of are not superscripted with or .the rwr : is attribute guarantees that the vertex resolved to by a particular rwr : context is a vertex seen on a previous step of the walker s . for instance , suppose that a walker must check that a particular individual works for the los alamos national laboratory before traversing a different edge label of lanl : jhw .this problem is diagrammed in figure [ fig : is - example ] .rwr : is can be used to ensure that a walker backtracks.,scaledwidth=40.0% ] in figure [ fig : is - example ] , the walker is at lanl : jhw at time step . at timestep , the walker must check to see if lanl : jhw lanl : worksfor lanl : lanl . to do so, the walker will traverse lanl : worksfor edge .upon validating the lanl : lanl , the walker must return back to lanl : jhw .therefore , the walker will take the inverse of the lanl : worksfor edge ( i.e. oppose the directionality of the edge ) .however , despite the existence of an inverse lanl : worksfor edge to lanl : marko , the walker should not clone itself .therefore , in order to specify that the walker must return to lanl : jhw , it is important to use the rwr : is attribute such that only a single walker returns to lanl : jhw at and is unchanged .the set of all legal vertices that an rwr : context can resolve to is defined by the set , where if is the rwr : context at that maintains an rwr : is attribute , then and the set is the set of legal vertex resources that the rwr : context can resolve to and is used in the calculation of an rwr : traverse at .the rwr : not attribute determines the set of vertices that the rwr : context can not resolve to .this is similar to the set , except that it is for some , not for all in the past .for example , suppose that the walker must only consider an article co - authorship network .this problem is diagrammed in figure [ fig : not - example ] .rwr : not can be used to ensure that a walker does not backtrack.,scaledwidth=47.5% ] in figure [ fig : not - example ] , the walker must determine if the article doi:10.1007/s11192 - 006 - 0176-z has at least 2 co - authors . in order to do so, the walker must not return to lanl : jbollen at .if and then is the set of vertices that the rwr : context must not resolve to and is used in the calculation of an rwr : traverse at .the rwr : traverse rule is perhaps the most important aspect of the grammar .an rwr : traverse rule of an rwr : context determines the next rwr : context that should traverse to in as well as the next .it utilizes the previously defined attribute sets , , and in its calculation .an rwr : traverse rule is composed of a set of rwr : edges that can be either incoming or outgoing .thus , unlike in directed networks , the path of a is not constrained by the directionality of the edges .the functions are defined as and is the rwr : traverse rule of the current rwr : context .therefore , if and then where is the set of legal edges that can traverse given its current location of and location . note that the set has a unique set of elements .if , then halts . unlike the grammar - based eigenvector model of , the geodesic requires the searching of all legal paths . 
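the traverse rule and the three attribute sets can be condensed into a single candidate-edge routine. the sketch below treats the sets as ordinary python sets and is not a full implementation of the rwr ontology; the '+' and '-' markers follow the out-edge / in-edge convention of the figures.

```python
def legal_edges(triples, current, out_labels, in_labels,
                not_ever=frozenset(), is_set=frozenset(), not_set=frozenset()):
    """candidate traversals from `current`, as (neighbor, label, direction) tuples."""
    cand = [(o, p, "+") for s, p, o in triples if s == current and p in out_labels]
    cand += [(s, p, "-") for s, p, o in triples if o == current and p in in_labels]
    legal = []
    for v, p, d in cand:
        if v in not_ever or v in not_set:    # rwr:notever / rwr:not exclusions
            continue
        if is_set and v not in is_set:       # rwr:is forces a particular resolution
            continue
        legal.append((v, p, d))
    return legal   # empty -> the walker halts; several -> the walker clones
```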
in line with a breadth - first search ,all network branches are checked .thus , for every triple , a clone walker is created and added to .this idea will be made more salient in the example to follow .the rwr : pathcount rule is the mechanism by which values in get appended to , where is the path returned by at the end of the algorithm s execution . the rule instructs to append a path segment in to the ordered multi - set . if a particular rwr : context has the rwr : pathcount rule with the rwr : step such that , then will append , , and to such that none of the elements copied from and they are added in their respective order . the next section will present the aforementioned rules and attributes within the framework of a particular social network ontology in order to demonstrate a practical application .this section will present two examples of the previously presented ideas to the problem of calculating semantically meaningful geodesic functions within a semantic social network .figure [ fig : social - ontology ] presents an rdfs network ontology that will be used throughout the remainder of this section .note that the domain and range of the properties are denoted by the tail and head of the edge , respectively .an example semantic social network ontology.,scaledwidth=50.0% ] figure [ fig : social - instance ] diagrams an example instance that respects the ontological constraints diagrammed in figure [ fig : social - ontology ] .an example semantic social network instance.,scaledwidth=95.0% ] the first example will demonstrate how to determine all the non - recurrent paths between the vertex lanl : johan and lanl : norman such that only friendship paths are taken , but those intervening friend vertices must have a lanl : researcher position .the second example will present a grammar that simulates an unlabeled network path calculation by ignoring vertex types and edge labels .note that the two examples presented are for locating all paths between a source and a sink vertex .this is for demonstration purposes only .if one required only the shortest path , once a path between the source and sink has been found , the algorithm can halt . in unweighted networks , using a breadth - first search algorithm , the first path discovered is always the shortest path .figure [ fig : path1-example ] presents a geodesic grammar that determines the set of all non - recurrent paths between lanl : johan and lanl : norman according to lanl : hasfriend relationships where every friend along the walker s path must be a lanl : researcher . a grammar to determine all non - recurrent lanl : hasfriend paths from lanl : johan to lanl : norman.,scaledwidth=80.0% ]note the diagrammatic conventions used to represent a grammar .every rwr : context , rwr : rule , and rwr : attribute has a _ # after its type .this is to denote that each representation of the same rwr : context , rwr : rule , or rwr : attribute is , in fact , a distinct vertex in .the label of the rwr : context is the object of the rwr : forresource property minus the _ # .furthermore , the dashed contexts are rwr : entrycontexts and the dotted contexts are rwr : exitcontexts .thus , lanl : johan_0 is the source context and lanl : norman_4 is the sink context in , and where lanl : johan is the source vertex and lanl : norman is the sink vertex in .the rwr : rules of an rwr : context are represented in their order of execution from bottom to top .the rwr : attributes are associated , in no particular order , with their respective rwr : context . 
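stripped of the rwr bookkeeping, the grammar of figure [fig:path1-example] amounts to a constrained path search: follow lanl:hasfriend edges, never revisit a vertex, and detour to confirm that every intervening friend holds the lanl:researcher position. a plain-python restatement is sketched below; the instance triples are a partial reconstruction taken from the worked trace that follows, not the complete instance of the figure.

```python
def constrained_friend_paths(triples, source, sink):
    """all non-recurrent lanl:hasfriend paths from source to sink whose
    intervening vertices hold the lanl:researcher position."""
    def friends(v):
        return [o for s, p, o in triples if s == v and p == "lanl:hasfriend"]
    def is_researcher(v):
        return (v, "lanl:hasposition", "lanl:researcher") in triples
    paths, stack = [], [[source]]
    while stack:
        path = stack.pop()
        for nxt in friends(path[-1]):
            if nxt in path:              # rwr:notever -> no recurrent paths
                continue
            if nxt == sink:
                paths.append(path + [nxt])
            elif is_researcher(nxt):     # the position detour on the new friend
                stack.append(path + [nxt])
    return paths

instance = {
    ("lanl:johan", "lanl:hasfriend", "lanl:marko"),
    ("lanl:marko", "lanl:hasposition", "lanl:researcher"),
    ("lanl:marko", "lanl:hasfriend", "lanl:jhw"),
    ("lanl:marko", "lanl:hasfriend", "lanl:norman"),
    ("lanl:jhw", "lanl:hasposition", "lanl:researcher"),
    ("lanl:jhw", "lanl:hasfriend", "lanl:norman"),
}
for p in constrained_friend_paths(instance, "lanl:johan", "lanl:norman"):
    print(len(p) - 1, p)   # path length counts edges, hence the minus one
```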
if a rule or attribute requires a literal rwr : step specification , that literal is appended to its respective rule or attribute .the + or - symbol on the head of an edge denotes whether the rwr : traverse edge is an rwr : outedge or rwr : inedge , respectively . at , and .the first rule to be executed is the rwr : pathcount_0 rule in which will register in such that . after adding lanl : johan to , the walker will execute the rwr : traverse_0 rule .the rwr : traverse_0 rule yields a .if lanl : norman was a friend of lanl : johan , then that edge would have been represented in as well . because , the rwr : notever_1 attribute of the human_1 context has an . at ,the current path of is and the current return path .there exists only one rule at rwr : human_1 .the rwr : traverse_1 rule dictates that take an outgoing edge from lanl : marko to a lanl : researcher position .given that there is only one edge that can be traversed , . at ,the current path of is : johan , lanl : hasfriend , + , lanl : marko , lanl : hasposition , + , lanl : researcher and the current return path .the only rule of the lanl : researcher_2 context is to return the human that was last encountered as specified by the rwr : is_3 attribute of the next lanl : human_3 context .thus , . at ,the current path of is : johan , lanl : hasfriend , + , lanl : marko , lanl : hasposition , + , lanl : researcher , lanl : hasposition , ,lanl : marko .given the rwr : pathcount_3 rule with a rwr : step of , : johan , lanl : hasfriend , + , lanl : marko .the rwr : traverse_3 rule provides a with two edges such that : marko , lanl : hasfriend , lanl : jhw : marko , lanl : hasfriend , lanl : norman . note that the edge : marko , lanl : hasfriend , lanl : johan does not exist in because of the rwr : notever_1 attribute at the lanl : human_1 context ( i.e. ) . because two edges exist in , is cloned such that , , and .the walker will take one edge and will take the other edge . at , will be at lanl : norman in and thus at an rwr : exitcontext in .however , before halts , rwr : pathcount_4 is executed such that : johan , lanl : hasfriend , + , lanl : marko , hasfriend , + , lanl : norman . at the completion of rwr : pathcount_4 there are no other rules to execute and thus halts .the walker , on the other hand , will be at lanl : jhw at .it is not until that arrives at lanl : norman . at , : johan , lanl : hasfriend , + , lanl : marko , lanl : hasfriend , + , lanl : jwh , lanl : hasfriend , + , lanl : norman . at ,the grammar is complete and .the shortest path of is defined as the function , where the must be subtracted from in order to not include source vertex as a step and then must be divided by so as to avoid the inclusion of the edge label and directionality of the edge in the path length calculation . in the example presented , the shortest researcher - constrained friendship " path is . from ,it is possible to generate all other geodesic functions as defined in section [ sec : geos ] . in the presented example , the source vertex is lanl : johan and the sink vertex is lanl : norman .it is noted that the rwr : entrycontext and rwr : exitcontext of can be reconfigured to support new and source and sink vertices .in other words , can be configured to support different / path calculations .this section presents another example of the grammar - based geodesic algorithm . 
in this example, the grammar presented is equivalent to removing the edge labels and directionality from the semantic network and calculating a traditional geodesic metric on it .figure [ fig : path2-example ] presents the grammar where , in rdfs , rdfs : resource is the base type of all resources ( vertices and edge labels ) .thus , all rwr : contexts and rwr : edges can legally resolve to any vertex and edge label , respectively . an unconstrained grammar to determine all non - recurrent paths from lanl : jbollen to lanl : norman.,scaledwidth=70.0% ] the grammar in figure [ fig : path2-example ] will determine the set of all non - recurrent paths between lanl : johan and lanl : norman such that any edge type can be traversed to any vertex type .the central rwr : context is the rdfs : resource_1 context .a walker will loop over rwr : resource_1 until it can find an edge to make the final traversal to lanl : norman . note the use of both rwr : outedges ( + ) and rwr : inedges ( - ) .with both edges accessible , the walker can walk in any direction on the network .thus , this grammar is equivalent to executing a geodesic on an undirected and unlabeled version of the semantic network .finally , the grammar will produce no recurrent paths because of the rwr : notever_1 rule . given this and the original social network instance diagrammed in figure [ fig : social - instance ] , the shortest path between lanl : johan and lanl : norman is : johan , lanl : contacted , , lanl : norman with a path length of .to contrast , in the first example when the walker s path was constrained to researcher friendship relationships , the shortest path between lanl : johan and lanl : norman was .the semantic network is an unweighted network .thus , determining the shortest path between any two vertices is best solved by a breadth - first algorithm .the grammar - based walker , through cloning , is analogous to a breadth - first search through the network .however , not all edges are considered by the walker and thus , the running time of the algorithm is less than or equal to .the determination of the running time of the algorithm is grammar dependent . in order to calculate the running time of a particular grammar , it is important to calculate the number of vertices and edges of the grammar - specified types in . in the worst case situation ,the walker population will have traversed all vertices and edges from the source to ultimately locate the sink . however , because the network is unweighted , once the sink has been found by a single , the shortest path has been determined so the algorithm is complete .once a computation has been performed , its results can be reused as a sub - solution to a larger problem . as stated previously ,the path calculations between two vertices in a network are the kernel calculations for more complex path metrics such as shortest path , eccentricity , radius , diameter , closeness centrality , and betweenness centrality .this section will demonstrate how to encode the data structure into a semantic network such that the results of these calculations can be reused for each of the higher - order metrics .for instance , suppose the function , where .furthermore , suppose that there exist the resources `` 1'' : int and `` 2'' : int such that there also exists the triple the triple states , in human language , that the number is related to the number by the functional relationship . 
if that triple is in , then never again would it be necessary to compute because the result has already been computed and has been represented in .thus , can be queried for the result of the computation . for example , would return the result of .however , this is a trivial example because it is faster to compute on the local hardware processor then it is to query for the solution . in other situations ,this is not necessarily the case . for more complex computations , such as the set of paths between two vertices in according to some ,it is possible to represent and its associated data structure as a semantic network .figure [ fig : georw ] is a diagram of the rdfs ontology representing and , where the noted components are considered either named graphs , separate semantic network instances , or reified sub - networks . from instances of this ontology , it is possible to reuse the path calculations to determine various geodesics without recalculating the -correct paths between any two vertices and .encoding and its associated data structure in a semantic network.,scaledwidth=60.0% ] for example , given the path calculated in section [ sec : first - example ] , the semantic network representation would be represented as diagrammed in figure [ fig : georw - example ] . the number of rwr : segments is the largest rdfs : containermembershipproperty ( i.e. rdf:_3 ) for the rwr : path .the path length of is thus , ( i.e. ) . to make the mapping to the convention used in section [ sec : first - example ] more salient , note the rwr : segment component labels at the bottom of the diagram .an instance of the rdfs ontology in figure [ fig : georw].,scaledwidth=95.0% ] if the grammar - based path algorithm halts when it reaches an rwr : exitcontext , then every instance is a shortest path .while only the shortest path between two vertices is required for geodesic metrics , the next subsections present the generalized algorithm for searching all paths between source vertex and sink vertex . to compute the shortest path between two vertices and , where the complete set is searched , the grammar - based shortest path algorithm is represented as where and the function returns the smallest value of the second component of its domain minus the rdf : _ head .for example , if , then .the first rwr : path element is used later when calculating the betweenness centrality of a vertex .the query simply returns the path identifier and the number of segments of each path between the rwr : entrycontext and the rwr : exitcontext .more specifically , the query that generates can be understood , in human language , as saying : given the set of all rwr : geodesicwalkers ( ) that use as their grammar and who have a -path ( ) that has as the vertex of the first ( i.e. rdf:_1 ) rwr : segment ( ) , where is the rwr : entrycontext vertex of ( ) and who have in a -path rwr : segment ( ) , where is the rwr : exitcontext vertex of ( ) , return the rwr : path ( ) and the rwr : segment count ( ) of the rwr : segment . 
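a dictionary can stand in for the rwr:path / rwr:segment encoding just described; once walker output is stored this way, the shortest-path query reduces to a minimum over segment counts (segment count minus one, since the first segment carries only the source vertex).

```python
# stored return paths of two walkers between the same source and sink;
# each segment is (vertex, edge label, direction), with no edge on the first
path_store = {
    ("lanl:johan", "lanl:norman"): [
        [("lanl:johan", None, None),
         ("lanl:marko", "lanl:hasfriend", "+"),
         ("lanl:norman", "lanl:hasfriend", "+")],
        [("lanl:johan", None, None),
         ("lanl:marko", "lanl:hasfriend", "+"),
         ("lanl:jhw", "lanl:hasfriend", "+"),
         ("lanl:norman", "lanl:hasfriend", "+")],
    ],
}

def stored_shortest_path(store, source, sink):
    """reuse previously computed grammar-constrained paths instead of walking again."""
    paths = store.get((source, sink), [])
    return min(len(p) - 1 for p in paths) if paths else None

print(stored_shortest_path(path_store, "lanl:johan", "lanl:norman"))   # 2
```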
"given the shortest path query , it is possible to generate other grammar - based geodesics .for instance , for eccentricity , for radius , finally , for diameter , for closeness centrality , finally , for betweenness centrality , if , where returns the set of shortest paths in its domain and represents the set of shortest paths from to such that there exists some rwr : segment in the rwr : path that has as its vertex , then to calculate the betweenness centrality of vertex , it is important to know the number of shortest paths that go from to as well as the number of shortest paths that go from to through .the function is used to determine which of those elements in are shortest paths .the set is then the set of all paths between and that go through and are elements of .this article has presented a technique to port some of the most fundamental geodesic network analysis algorithms into the semantic network domain . therecurrently exist many technologies to support large - scale semantic network models represented according to rdf .high - end , modern - day triple - stores support on the order of triples .while many centrality algorithms are costly on large networks , by restricting the search to meaningful subsets of the full semantic network , as defined by a grammar , geodesic metrics can be reasonably executed on even the most immense and complex of data sets .marko a. rodriguez is funded by the mesur project ( http://www.mesur.org ) which is supported by a grant from the andrew w. mellon foundation .marko is also funded by a director s fellowship granted by the los alamos national laboratory .m. a. rodriguez , social decision making with multi - relational networks and grammar - based particle swarms , in : proceedings of the hawaii international conference on systems science , ieee computer society , waikoloa , hawaii , 2007 , pp .http://dx.doi.org/10.1109/hicss.2007.487 [ ] .k. anyanwu , a. sheth , -queries : enabling querying for semantic associations on the semantic web , in : proceedings of the twelfth international world - wide web conference , acm , new york , ny , 2003 , pp .http://dx.doi.org/10.1145/775152.775249 [ ] .s. lin , interesting instance discovery in multi - relational data , in : d. l. mcguinness , g. ferguson ( eds . ) , proceedings of the conference on innovative applications of artificial intelligence , mit press , 2004 , pp . 991992 .b. aleman - meza , c. halaschek - wiener , i. b. arpinar , c. ramakrishnan , a. p. sheth , ranking complex relationships on the semantic web , ieee internet computing 9 ( 3 ) ( 2005 ) 3744 .http://dx.doi.org/10.1109/mic.2005.63 [ ] .a. p. sheth , i. b. arpinar , c. halaschek , c. ramakrishnan , c. bertram , y. warke , d. avant , f. s. arpinar , k. anyanwu , k. kochut , semantic association identification and knowledge discovery for national security applications , journal of database management 16 ( 1 ) ( 2005 ) 3353 .m. a. rodriguez , j. shinavier , exposing multi - relational networks to single - relational network analysis algorithms , journal of informetrics 4 ( 1 ) ( 2009 ) 2941 .http://dx.doi.org/10.1016/j.joi.2009.06.004 [ ] . y. blanco - fernndez , j. j. pazos - arias , a. gil - solla , m. ramos - cabrer , m. lpez - nores , j. garca - duque , a. fernndez - vilas , r. p. daz - redondo , j. 
bermejo - mu noz , a flexible semantic inference methodology to reason about user preferences in knowledge - based recommender systems , knowledge - based systems 21 ( 4 ) ( 2008 ) 305320 .http://dx.doi.org/10.1016/j.knosys.2007.07.004 [ ] .k. p. chitrapura , s. r. kashyap , node ranking in labeled directed graphs , in : proceedings of the conference on information and knowledge management ( cikm04 ) , acm , new york , ny , 2004 , pp . 597606 .http://dx.doi.org/10.1145/1031171.1031281 [ ] .t. berners - lee , r. t. fielding , d. software , l. masinter , a. systems , http://www.ietf.org/rfc/rfc2396.txt[uniform resource identifier ( uri ) : generic syntax ] ( january 2005 ) .http://www.ietf.org/rfc/rfc2396.txt c. weiss , p. karras , a. bernstein , hexastore : sextuple indexing for semantic web data management , proceedings of the very large database endowment 1 ( 1 ) ( 2008 ) 10081019 .http://dx.doi.org/10.1145/1453856.1453965 [ ] .j. bollen , m. a. rodriguez , h. van de sompel , l. l. balakireva , a. hagberg , the largest scholarly semantic network ... ever . , in : proceedings of the world wide web conference , acm press , new york , ny , 2007 , pp .http://dx.doi.org/10.1145/1242572.1242789 [ ] .j. bollen , h. van de sompel , m. a. rodriguez , towards usage - based impact metrics : first results from the mesur project . , in : proceedings of the joint conference on digital libraries , ieee / acm , new york , ny , 2008 , pp .http://dx.doi.org/10.1145/1378889.1378928 [ ] .
|
a geodesic is the shortest path between two vertices in a connected network . the geodesic is the kernel of various network metrics including radius , diameter , eccentricity , closeness , and betweenness . these metrics are the foundation of much network research and thus have been studied extensively in the domain of single - relational networks ( both in their directed and undirected forms ) . however , geodesics for single - relational networks do not translate directly to multi - relational , or semantic , networks , where vertices are connected to one another by any number of edge labels . here , a more sophisticated method for calculating a geodesic is necessary . this article presents a technique for calculating geodesics in semantic networks , with a focus on semantic networks represented according to the resource description framework ( rdf ) . in this framework , a discrete `` walker '' utilizes an abstract path description called a grammar to determine which paths to include in its geodesic calculation . the grammar - based model forms a general framework for studying geodesic metrics in semantic networks .
|
in this paper , we develop a new framework for * compressed sensing * ( sparse signal recovery ) .we focus on nonnegative sparse signals , i.e. , and .note that real - world signals are often nonnegative .we consider the scenario in which neither the magnitudes nor the locations of the nonzero entries of are unknown ( e.g. , data streams ) .the task of compressed sensing is to recover the locations and magnitudes of the nonzero entries .our framework differs from mainstream work in that we use maximally - skewed -stable distributions for generating our design matrix , while classical compressed sensing algorithms typically adopt gaussian or gaussian - like distributions ( e.g. , distributions with finite variances ) .the use of skewed stable random projections was originally developed in , named * compressed counting ( cc ) * , in the context of data stream computations .note that in this paper we focus on dense design matrix and leave the potential use of `` very sparse stable random projections '' for sparse recovery as future work , which will connect this line of work with the well - known `` sparse matrix '' algorithm .+ in compressed sensing , the standard procedure first collects non - adaptive linear measurements and then reconstructs the signal from the measurements , , and the design matrix , . in this context , the design matrix is indeed `` designed '' in that one can manually generate the entries to facilitate signal recovery .in fact , the design matrix can be integrated in the sensing hardware ( e.g. , cameras , scanners , or other sensors ) . in classical settings , entries of the design matrix , ,are typically sampled from gaussian or gaussian - like distributions .the recovery algorithms are often based on linear programming ( _ basis pursuit _ ) or greedy pursuit algorithms such as _ orthogonal matching pursuit _ . in general, lp is computationally expensive .omp might be faster although it still requires scanning the coordinates times .+ it would be desirable to develop a new framework for sparse recovery which is much faster than linear programming decoding ( and other algorithms ) without requiring more measurements. it would be also desirable if the method is robust against measurement noises and is applicable to * data streams*. in this paper , our method meets these requirements by sampling from maximally - skewed -stable distributions . in our proposal , we sample entries of the design matrix from an -stable maximally - skewed distribution , denoted by , where the first `` 1 '' denotes maximal skewness and the second `` 1 '' denotes unit scale . if a random variable , then its characteristic function is suppose i.i.d .for any constants , we have . more generally , if i.i.d .there is a standard procedure to sample from .we first generate an exponential random variable with mean 1 , , and a uniform random variable , and then compute ^{\frac{1}{\alpha } } } \left[\frac{\sin\left ( u - \alpha u\right)}{w } \right]^{\frac{1-\alpha}{\alpha } } \sim s(\alpha,1,1)\end{aligned}\ ] ] in practice , we can replace the stable distribution with a heavy - tailed distribution in the domain of attractions , for example , ^{1/\alpha}} ] and leave the study for in future work . at the decoding stage, we estimate the signal coordinate - wise : the number of measurements is chosen so that ( e.g. , ) . +* main result * : when ] , , we have * proof : * see appendix [ app_lem_f ] .figure [ fig_f ] plots for selected values . 
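the sampling formula and the coordinate - wise minimum ( ratio ) estimator described above can be illustrated with a short python sketch . this is only a toy illustration under assumptions : the signal dimensions , sparsity , number of measurements and the value of alpha are arbitrary choices ; the sampler implements the exponential / uniform construction quoted above for 0 < alpha < 1 , taking u uniform on ( 0 , pi ) as in the standard ( kanter / chambers - mallows - stuck type ) construction ; and the decoding simply takes , for every coordinate , the minimum of the ratio statistics . it is not a tuned implementation of the exact procedure analyzed in this paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_skewed_stable(alpha, size):
    """Draw from the maximally-skewed stable law S(alpha, 1, 1), 0 < alpha < 1,
    via the exponential/uniform construction quoted above."""
    u = rng.uniform(0.0, np.pi, size)      # uniform on (0, pi)  (assumed range)
    w = rng.exponential(1.0, size)         # exponential with mean 1
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)) * \
           (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha)

# toy nonnegative sparse signal: N coordinates, K of them nonzero
N, K, M, alpha = 1000, 10, 150, 0.05
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.uniform(1.0, 5.0, K)

S = sample_skewed_stable(alpha, (N, M))    # design matrix of skewed-stable entries
y = x @ S                                  # M non-adaptive linear measurements

# coordinate-wise decoding with the ratio statistics: each ratio y_j / s_ij equals
# x_i plus a nonnegative term, so the minimum over j upper-bounds x_i and shrinks
# toward it as M grows (the "minimum estimator").
x_hat = np.min(y[None, :] / S, axis=1)
print("max recovery error:", np.max(np.abs(x_hat - x)))
```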
+ for ] .later we will show that our method only requires very small . *the constant can be numerically evaluated as shown in figure [ fig_calpha ] .* when , we have . hence . * when , we have . hence . as in lemma [ lem_f_order ] .numerically , it varies between 1 and .,width=336 ] to conclude this section , the next lemma shows that the maximum likelihood estimator using the ratio statistics is actually the `` minimum estimator '' .[ lem_ratio_mle ] use the ratio statistics , , to .when ] and , the ( sharp ) bound can be written as , where the constant is the same constant in lemma [ lem_f_order ] . when and , a precise bound exists : * proof : * the result ( [ eqn_mf ] ) follows from lemma [ lem_err ] , ( [ eqn_m0 ] ) from lemma [ lem_f ] , ( [ eqn_mc ] ) from lemma [ lem_f_order ] .we provide more details for the proof of the more precise bound ( [ eqn_m05 ] ) .when , }\end{aligned}\ ] ] which can be simplified to be , using the fact that ] and our theoretical results show that smaller values lead to better performance .the natural question is : why can we simply use a very small ?there are numerical issues which prevents us from using a too small . for convenience ,consider the approximate mechanism for generating by using , where ( based on the theory of domain of attractions and generalized central limit theorem ) . if , then we have to compute , which may potentially create numerical problems . in our matlab simulations , we use ] , we first note that the equality holds when and . to see the latter case , we write , where and are i.i.d . when , by symmetry .it remains to show is monotonically increasing in for fixed ] . we consider three terms ( in curly brackets ) separately and show they are all when ] , when ] at least for ] .this completes the proof .the goal is to show that . by our definition , ^{\alpha/(1-\alpha)}\left[\frac{1}{\sin u } \right]^{\frac{1}{1-\alpha } } { \sin\left ( u - \alpha u\right)}\end{aligned}\ ] ] we can write the integral as where ^{\alpha/(1-\alpha)}\left[\frac{1}{\sin(\pi- u ) } \right]^{\frac{1}{1-\alpha } } { \sin\left ( \pi - u - \alpha ( \pi - u)\right)}\\\notag = & \left[{\sin\left(\alpha ( \pi- u)\right)}\right]^{\alpha/(1-\alpha)}\left[\frac{1}{\sin u } \right]^{\frac{1}{1-\alpha } } { \sin\left ( u + \alpha ( \pi - u)\right)}\end{aligned}\ ] ] first , using the fact that , we obtain ^{\alpha/(1-\alpha)}\left[\frac{1}{\sin u } \right]^{\frac{1}{1-\alpha } } { ( 1-\alpha)\sin\left ( u\right ) } = \alpha^{\alpha/(1-\alpha)}(1-\alpha)\end{aligned}\ ] ] we have proved in the proof of lemma [ lem_f ] that is a monotonically increasing function of ] , we have ^{\alpha/(1-\alpha ) } { \cos\left ( \alpha \pi/2\right)}\leq 1,\hspace{0.2 in } u\in [ 0,\ \pi/2]\end{aligned}\ ] ] in other words , we can view as a constant ( i.e. , ) when ] , we have and .thus , dominates .therefore , the order of is determined by one term : since we have , for ] , we have combining the results , we obtain this completes the proof .define and . to find the mle of , we need to maximize . 
using the result in lemma [ lem_f ] , for , we have where is defined in lemma [ lem_f ] and if .this means , when and is nondecreasing in if .therefore , given observations , , the mle is the sample minimum .this completes the proof .^mdt\\\notag = & x_i + \theta_i\int_{0}^\infty \left[1-f_\alpha\left(\left(t\right)^{\alpha/(1-\alpha)}\right)\right]^mdt\\\notag = & x_i+\theta_id_{m,\alpha}\end{aligned}\ ] ] we have proved in lemma [ lem_f ] that thus , ^mdt\\\notag \leq & \int_{0}^\infty \left[\frac{1}{1+\left(t\right)^{\alpha/(1-\alpha)}}\right]^mdt\\\notag = & \frac{1-\alpha}{\alpha}\int_{0}^1 t^m\left(1/t-1\right)^{(1-\alpha)/\alpha-1}\frac{1}{t^2}dt\\\notag = & \frac{1-\alpha}{\alpha}\int_{0}^1 t^{m-(1-\alpha)/\alpha-1}\left(1-t\right)^{(1-\alpha)/\alpha-1}dt\\\notag = & \frac{1-\alpha}{\alpha}beta\left(m-(1-\alpha)/\alpha,\ ( 1-\alpha)/\alpha\right)\end{aligned}\ ] ] when , then , and ^mdt = \int_{0}^\infty \left[1-\frac{2}{\pi}\tan^{-1}t\right]^mdt\\\notag = & \int_0^{\pi/2}\left[1-\frac{2u}{\pi}\right]^m d \tan^2{u } = \int_0^{\pi/2}\left[1-\frac{2u}{\pi}\right]^m d \frac{1}{\cos^2u } \\\notag = & \int_1^{0}u^m d \frac{1}{\sin^2\left(u\pi/2\right ) } = m\int_0 ^ 1 \frac{u^{m-1}}{\sin^2\left(u\pi/2\right ) } du -1\\\notag = & m\left(\frac{2}{\pi}\right)^{m}\int_0^{\pi/2 } \frac{u^{m-1}}{\sin^2u}du -1\end{aligned}\ ] ] from the integral table , we have therefore , to facilitate numerical calculations , we resort to ( let ) where is the bernoulli number satisfying and , , , , , , , ...
|
compressed sensing ( sparse signal recovery ) has been a popular and important research topic in recent years . by observing that natural signals are often nonnegative , we propose a new framework for nonnegative signal recovery using _ compressed counting ( cc)_. cc is a technique built on _ maximally - skewed -stable random projections _ originally developed for data stream computations . our recovery procedure is computationally very efficient in that it requires only one linear scan of the coordinates . + in our settings , the signal is assumed to be nonnegative , i.e. , . our analysis demonstrates that , when $ ] , it suffices to use measurements so that , with probability , all coordinates will be recovered within additive precision , in one scan of the coordinates . the constant when and when . in particular , when , the required number of measurements is essentially , where is the number of nonzero coordinates of the signal .
|
the beowulf project began in 1994 at the nasa goddard space flight center .more about the history of the project and the current status can be found at the web site http://www.beowulf.org .there are about 100 clusters listed on the beowulf home page ( which is clearly not a complete list ) . within the milc collaboration, we have access to at least five clusters at our universities .we also have done production work on six other clusters at national supercomputer centers .there are several advantageous characteristics often cited for clusters .chief among these is the use of commodity hardware to produce a very cost - effective computer .processors such as the intel pentia and celeron and the amd athlon or k6 , fastethernet network cards and switches have been used to build quite cost - effective machines . however ,other choices such as the compaq alpha processor and higher speed networks such as myrinet , giganet and quadrics qsnet have also been used to build very powerful clusters .another characteristic of clusters is the use of commodity software such as linux , gnu , mpich and pbs to keep software costs close to zero .a third advantage of the cluster approach is their programmability and flexibility .message passing interface or mpi , has become a standard in commercial parallel computers .the milc code had been compiled under mpi well before being run on a beowulf cluster .the port to beowulf required minimal effort .all of our pc cluster benchmarks have been done without any assembly code .practical calculations can be done on current clusters with a granularity that is well suited to fft routines .clusters have a community of users and developers .new system administration tools frequently become available , as do advances in parallel file systems , schedulers and other useful software .thus , one can take advantage of the vigor of the community and avoid spending a large amount of time developing software unrelated to the physics . finally ,because of the short design time of clusters , one can take quick advantage of the many developments in pc hardware .it is not necessary to lock oneself into a technology either well in advance of it s actually being available , or a well developed technology that will be outdated by the time a large system can be constructed and commissioned .one can often avoid the problem of having a single source for key items .if you can no longer get a particular motherboard , there will be another vendor with a similar ( or superior ) offering .there are also potential disadvantages of clusters . with a standard supercomputer, one can get a maintenance contract , and there is somebody to yell at when things go wrong .( however , as a long - time user of supercomputers , i know that having a vendor does nt assure that the problem will be fixed . )recently , a number of vendors have been selling clusters .so the problem of not having anyone to yell at may be avoided .of course , there is still no assurance that yelling ( or even asking politely ) will result in the problem being solved .another disadvantage of the cluster approach is that of having to rely on the design effort of others . if vendors are not producing hardware with the specifications that you need , you may not be able to build a well optimized system . 
on the other hand , most physicists are neither skilled at nor interested in vlsi design or pcb layout and would rather spend their time thinking about physics , so why not take advantage of the labor of computer engineers ?the indiana university physics department received 693 for a pentium ii 350 , with a 4.3 gb hard drive and 64 mb of ecc ram .each node has a floppy drive and a fastethernet card .however , the compute nodes have no keyboard , video card or cdrom .there are a few video cards and an extra keyboard that can be used if a node does not reboot on its own .the 40-port hp procurve switch cost about 25,000 . currently , ( october , 2000 ) it would be possible to build this system for 11,500 .an even more attractive alternative would be a diskless athlon 600 mhz system for which the per node cost is about 10,000 .this works out to a cost / mf of between 7.80 . in sec . 2, we describe the key issues for good performance and in sec . 3 , we present benchmarks for the milc code on various supercomputers and clusters .section 4 gives rough cost - performance ratios for a number of platforms .additional information about emerging technologies for clusters and user experiences can be found in the on - line presentations from a session that i organized at the march , 2000 aps meeting .a very simple approach to achieving good performance for domain decomposition codes like lattice qcd codes is to optimize single node performance and to try to avoid degrading performance too much when one has to communicate boundary values to neighboring nodes .the single node performance is likely to depend upon such issues as the quality of the cpu , the performance and size of cache(s ) , the bandwidth to main memory and the quality of the compiler . for message passing performance ,key issues are the latency , peak bandwidth , processor overhead and the message passing software .focusing first on single node performance , we note that it is easy to waste a lot of money on a poor system design . to illustrate this, we consider the various speed amd athlon processors available and their prices on a particular day .although we focus on athlon here , the same considerations apply to intel or other processors .figure [ fig : price_vs_speed ] shows that processor price is a rapidly increasing function of speed . in fig[ fig : value ] , we divide the price by the speed of the chip and see that the relative expense rises rapidly for the faster chips . at the time this graph was produced , there was an apparent sweet spot at 600 mhz .the faster chips have a higher price - performance ratio . depending upon the costs of the other components of the system, the entire system may have a higher or lower price - performance ratio .for our qcd codes , access to memory is quite important . with the benchmarks below wedemonstrate that performance does not increase in proportion to the speed of the chip .this is because memory speed is fixed by the 100 mhz front side bus for both 500 mhz and 600 mhz athlons .this work was supported by the u.s .doe under grant de - fg02 - 91er 40661 .special thanks to the milc collaboration , and especially r. sugar for reading the manuscript .we thank the albuquerque high performance computer center , indiana university , llnl , national center for supercomputing applications , pittsburgh supercomputer center and san diego supercomputer center .
|
since the development of the beowulf project to build a parallel computer from commodity pc components , there have been many such clusters built . the milc qcd code has been run on a variety of clusters and supercomputers . key design features are identified , and the cost effectiveness of clusters and supercomputers is compared .
|
a wireless acoustic sensor network ( wasn ) is a set of wireless microphones equipped with some communication , signal processing , and possibly memory units , which are randomly scattered in the environment .the communication unit allows a sensor node to communicate with a base station and also with other nodes , and the signal processing unit enables a node to perform local processing .the random structure of the network removes the array - size limitations imposed by classical microphone arrays , thereby providing better performance in sense of high - snr spatio - temporal sampling of the sound field and spatial diversity .the interested reader is referred to for a review of wasns .due to strict power and bandwidth constraints on wireless microphones , it might not be possible for nodes which are far from the base station to directly deliver their messages . a multi - hop scenario where the message goes along adjacent nodes until reachingthe base - station is a commonplace alternative .we assume that a node is supposed to receive messages from multiple neighboring nodes and then combine them with its own measurement of the sound field into a new message to be forwarded through the network .this is illustrated in fig.[net ] . in , this problem was solved for scalar gaussian sources in a non - distributed setting i.e. , without making use of the availability of the nodes measurements as side information .it was shown that coding at intermediate nodes can result in significant gains in terms of sum - rate or distortion . in this paper, we consider vector gaussian sources and also take into account the fact that messages received by a node are correlated with the node s measurement of the sound field .thus , we consider a distributed scenario , and make use of the destination node s measurement as side information to decrease the required rate for transmission to each node , , .we derive the rate - distortion ( rd ) function for an arbitrary node in the network with a distortion constraint defined in form of a covariance matrix , cf . , . in sectionii , we introduce the notation and formulate the problem , which turns out to involve joint coding of multiple sources . in section iii ,we derive a conditional sufficient statistic for multiple gaussian sources and show that for the above - mentioned distributed source coding ( dsc ) problem , one can encode a conditional sufficient statistic instead of joint encoding of multiple sources .the rd function for the resulting problem will be derived in section iv . the paper is concluded in section v.we denote by and $ ] the information theoretic operations of mutual information , differential entropy , and expectation , respectively .probability density functions are denoted by and covariance and cross - covariance matrices are denoted by .we assume that all covariance matrices are of full rank .we denote markov chains by two - headed arrows ; e.g. .we assume that the source generates independent gaussian vectors of length denoted by . 
while the vectors are independent , the components of each vector are correlated .node makes a noisy measurement of the source given by : where is the additive gaussian noise at node and the matrix models the mixing effect of the acoustic channel .the noise is assumed to be independent of .node also receives messages from nodes denoted by , from which it can make estimations of the source by decoding the messages with as the decoder side information .the problem is to find the minimum rate to jointly encode into a message to be sent to node , while satisfying the given distortion constraint and considering as side information .this is illustrated in figs.[net ] and [ 1node ] .note that one could further decrease the network sum - rate by taking into account the correlation between the messages which are sent to a common node .however , we leave out this possibility in this paper . throughout this work , we assume that the acoustic mixing matrices are fixed and known .we also assume that joint statistics of the source and the noise variances are available. finally , although the model in ( [ measure ] ) is appropriate for acoustic networks , we do not consider real acoustic signals in this work .instead , we consider gaussian sources for simplicity of mathematical analysis .the case of real audio measurements is the focus of our future work .assume that is a random vector or a collection of random vectors with probability density function .[ define ] a sufficient statistic for the estimation of from is a function of for which is not a function of . in other words , .[ suff ] is a sufficient statistic of for estimating if and only if can be factorized as : where ( depending on only ) and ( depending on and but not on directly ) are nonnegative functions . of the network ] the factorization theorem enables us to find a sufficient statistic for e.g. random vectors in gaussian noise as shown in the following lemma .[ gauss ] if are measurements of a random vector in mutually independent gaussian noises as in ( [ measure ] ) , then is a sufficient statistic of for estimating , where and is the covariance matrix of the noise .we prove the lemma for .the proof for the general case is similar .since and are independent , the joint conditional density function of and can be written as : where is independent of , and , and is defined as : expanding , rearranging the terms , and substituting in ( [ joint ] ) , leads to which shows that is a sufficient statistic according to the factorization theorem . to make the notion of sufficient statistics applicable to our dsc problem, we need to introduce a conditional version : [ define1 ] a conditional sufficient statistic for the estimation of from given is a function of for which is not a function of .[ gauss1 ] if , then any sufficient statistic of for estimating is also a conditional sufficient statistic of for estimating given . which is independent of .[ lem3 ] given side information at the decoder , there is no loss in terms of rate and distortion by encoding a conditional sufficient statistic of given instead of joint encoding of these sources .the proof follows along the lines of the proof for the unconditional case presented in , and is therefore omitted . 
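as a small numerical illustration of lemma [ gauss ] , the following python sketch builds a two - node toy example , forms the statistic of the lemma ( the sum over nodes of the noise - whitened , back - projected measurements ) , and checks that the conditional - mean estimate of the source computed from this statistic alone coincides with the one computed from the raw measurements . the dimensions , the gaussian prior on the source ( used only so that the conditional expectations have a closed form ) and all matrices are arbitrary toy choices .

```python
import numpy as np

rng = np.random.default_rng(1)
d, m1, m2 = 4, 6, 5                                   # source length and two measurement lengths

Sx = np.cov(rng.standard_normal((d, 50)))             # toy source covariance (full rank)
H1, H2 = rng.standard_normal((m1, d)), rng.standard_normal((m2, d))
S1 = np.diag(rng.uniform(0.5, 1.5, m1))               # independent noise covariances
S2 = np.diag(rng.uniform(0.5, 1.5, m2))

A  = np.vstack([H1, H2])                              # stacked mixing matrix
Sn = np.block([[S1, np.zeros((m1, m2))],
               [np.zeros((m2, m1)), S2]])             # block-diagonal noise covariance

x = rng.multivariate_normal(np.zeros(d), Sx)          # one realization of the source
y = A @ x + rng.multivariate_normal(np.zeros(m1 + m2), Sn)

B = np.hstack([H1.T @ np.linalg.inv(S1), H2.T @ np.linalg.inv(S2)])
T = B @ y                                             # sufficient statistic of the lemma

Psi = A @ Sx @ A.T + Sn                               # covariance of the stacked measurements
x_from_y = Sx @ A.T @ np.linalg.solve(Psi, y)                    # E[x | all measurements]
x_from_T = Sx @ A.T @ B.T @ np.linalg.solve(B @ Psi @ B.T, T)    # E[x | T]
print(np.allclose(x_from_y, x_from_T))                # True: T loses no information about x
```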
combining the results of this section, we have the following theorem for the problem formulated in section ii : [ th1 ] given side information at the decoder , the rd function for the problem of joint encoding of multiple sources coincides with the rd function for the dsc problem of encoding the single source : where is the distortion matrix for node . using the backward channel model we have where the covariance matrix of is .this means that can be written as : where substituting ( [ xhat1 ] ) and ( [ xhat2 ] ) in ( [ xhat ] ) and using ( [ gauss_suff ] ) yields ( [ t ] ) .since is a sufficient statistic of and , it follows from lemma [ gauss1 ] that it is also a conditional sufficient statistic given . from lemma [ lem3 ] ,one can then replace by in ( [ t ] ) and get the same rd function . at this point, we have shown that the above problem with multiple sources can be converted into a single source dsc problem for gaussian sources with covariance distortion constraint .this problem is illustrated in fig.[prob ] .for the case of mean - squared error distortion constraint , the rd function was found in , while the case of covariance distortion was not treated in that work . in the next section ,we derive the rd function for the covariance distortion constraint under some mild technical assumptions .for the sake of simplicity of derivations , we write and in terms of their linear estimations based on the known gaussian vectors in fig.[prob ] . in particular , we have that where are estimation errors with covariance matrices , respectively , and and depend only on the covariance and cross - covariance matrices of and .( see appendix for the mathematical statement for as an example . )we will show that if the mixing matrices in ( [ measure ] ) are invertible , then for the rd function for node is given by : first we need the following lemma : [ inv ] if the mixing matrices , are invertible , the matrix in ( [ xty ] ) is also invertible .see the appendix . from the first equality in ( [ xty ] ) we have : from the second equaltiy in ( [ xty ] ) we can write : assume now that the sequence of independent vectors generated by the source is encoded in block vectors each containing vectors . from the distortion constraint and ( [ new ] ) we can write : } = { \bf{c}}{{\bf{\sigma } } _ { { { \bf{\upsilon } } _ 1}}}{{\bf{c}}^t } + { { \bf{\sigma } } _ { { { \bf{\upsilon } } _ 2}}}\ ] ] or equivalently : denoting the blocks of vectors by , respectively , we have : where the block vector contains vectors , ( [ lb5 ] ) is because conditioning reduces the entropy , ( [ lb3 ] ) is because is invertible , and ( [ lb4 ] ) is the result of applying ( [ lb1 ] ) and ( [ lb2 ] ) to ( [ lb3 ] ) .the lower bound derived in the previous part can be used as a guideline for the best possible performance for any coding scheme .the question is then , given the source , how to encode it into a discrete source , so that for the given distortion constraint , the required rate for transmission of achieves the lower bound ?let us assume that is quantized to , in a way that satisfies the distortion constraint . 
from the results of dsc , due to the availability of the side information at the decoder , it is possible to noiselessly encode in blocks of sufficiently large length with a rate arbitrarily close to , , .the remaining task is to design a scheme for which achieves ( [ rd ] ) .this is possible by using the following scheme : where we have denoted the eigenvalue decomposition of by , and the covariance of the coding noise is defined as : to verify this , one can write as , substitute ( [ scheme])([scheme1 ] ) , and utilize ( [ ty ] ) and ( [ lb1 ] ) .it is worth noting that the rd function ( [ rd ] ) generalizes the rd function of , which treated the scalar case .we showed that the rate - distortion function for a distributed source coding problem with multiple sources at the encoder is identical to the rate - distortion function for the distributed encoding of a so - called conditional sufficient statistic of the sources .we derived a conditional sufficient statistic for the case that additive noises are gaussian and mutually independent , and calculated the rate - distortion function in case that the sources are vector gaussian and the distortion constraint is defined as a covariance matrix . sincevector sources were considered in order to take the memory into account , and the covariance constraint on the distortion is a more flexible fidelity criterion compared to mean - squared error , these results can be applied to the problem of source coding for audio signals in presence of reverberation , which will be the scope of our future work .where the covariance matrix of the estimation error is , and and are related to via and , and and are invertible .therefore we have : { { \bf{\sigma } } _ { yz}}{{\bf{\delta } } ^ { - 1}}{\bf{\sigma } } _ { yz}^t \nonumber \\ \label{ap3 } & & = { { \bf{\sigma } } _ x}{{\bf{a}}^t}\left ( { { \bf{i } } + { { \bf{h}}^ { - 1}}{{\bf{\sigma } } _ { yz}}{{\bf{\delta } } ^ { - 1}}{\bf{\sigma } } _ { yz}^t } \right),\end{aligned}\ ] ] {{\bf{b}}^t } + { { \bf{\sigma } } _ { { n_2 } } } \nonumber \\ & & = { \bf{b}}{{\bf{\sigma } } _ x}{{\bf{a}}^t}\left [ { { { \bf{a}}^ { - t}}{\bf{\sigma } } _ x^ { - 1}{{\bf{a}}^ { - 1 } } - { { \left ( { { \bf{a}}{{\bf{\sigma } } _ x}{{\bf{a}}^t } + { { \bf{\sigma } } _ { { n_1 } } } } \right)}^ { - 1 } } } \right ] { \bf{a}}{{\bf{\sigma } } _ x}{{\bf{b}}^t } + { { \bf{\sigma } } _ { { n_2 } } } \nonumber \\ \label{ap4 } & & = { \bf{\sigma } } _ { yz}^t\left ( { - { { \bf{h}}^ { - 1 } } } \right){{\bf{\sigma } } _ { yz } } + { { \bf{\sigma } } _ { { n_2}}}.\end{aligned}\ ] ] c. tian and j. chen , _ remote vector gaussian source coding with decoder side information under mutual information and distortion constraints _ , ieee transactions on information theory , vol .10 , pp.4676 - 4680 , oct . 2009 .s. c. draper and g. w. wornell , _ side information aware coding strategies for estimation under communication constraints _ , ieee journal on selected areas in communications , vol .22 , no . 6 , pp . 1 - 11 , aug .
|
in this paper , we consider the problem of remote vector gaussian source coding for a wireless acoustic sensor network . each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information . the node s measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network . we show that for this distributed source coding scenario , one can encode a so - called conditional sufficient statistic of the sources instead of jointly encoding multiple sources . we focus on the case where node measurements are in form of noisy linearly mixed combinations of the sources and the acoustic channel mixing matrices are invertible . for this problem , we derive the rate - distortion function for vector gaussian sources and under covariance distortion constraints .
|
modern humans presumably originated about 200,000 years ago in east africa . they emigrated to a region 150 km north of gaza about 100,000 years ago ( skhul near haifa ) but died out there again . then , about 50,000 years ago , they emigrated from africa once more , this time more successfully , but presumably again via gaza and haifa . + according to [ 1 ] , demographic factors can explain geographic variation in the timing of the first appearance of modern behaviour without invoking increased cognitive capacity . + now we try to reproduce this demographic effect by applying standard percolation theory [ 2 ] . only for densities p above the percolation threshold of the infinite square lattice can information travel from one side of the lattice , via random jumps to occupied nearest - neighbour sites , to the opposite side . this type of random walk on random square lattices was discussed extensively about three decades ago , but mostly for large systems and long times close to the percolation threshold . we will show that for times below 10,000 jump attempts , even for quite small lattices , one needs occupation probabilities far above the percolation threshold in order to transfer information from one side of the lattice to the opposite side . first we occupy a square lattice of a given size randomly , each site being occupied with probability p and empty with probability 1-p ; clusters are groups of occupied neighbouring sites . then we put diffusing particles randomly onto occupied sites , and at each time step every particle tries to move into one of the four possible directions ; if that neighbour site is occupied , the particle ( which may represent a teacher of new techniques ) moves there , and if it is empty , the particle stays at its old place . + in this way , particles diffuse on a randomly occupied lattice , using only the occupied lattice sites . the occupied sites can be settlements of humans , and the diffusing particles can be visitors spreading information on new techniques . + we start our simulations with this lattice size for different occupation probabilities p , and we check after which time , i.e. after how many jump attempts , a particle diffuses across the lattice in the x - direction . + for each probability we have nine samples ( diffusing particles on the same lattice ) , from which the median is defined such that four times are larger and four times are smaller than the median time , as shown in fig . 1 . we then complete our simulations for three probabilities and take the median time , with random seed iseed=1 , which gives figure 2 . for figure 3 we change the random seed as iseed=1,2,3 for one sample at probability p = 0.70 , and then take the average median for each iseed value . if we identify one time unit with one day , times above 10,000 are unrealistic for a single messenger of new techniques . thus only rather large occupation probabilities , closer to unity than to the percolation threshold , allow the random spread of information over dozens of distances between human bands . future simulations might facilitate information spread by allowing these bands to move randomly on the lattice ( annealed instead of quenched disorder ) , as was appropriate before the neolithicum with agriculture . a minimal python sketch of this simulation is given after the references below . the authors would like to thank prof . stauffer for many valuable suggestions , fruitful discussions and constructive advice during the development of this work . [1] adam powell , stephen shennan , and mark g.
thomas , science 324 , 1298 ( 2009 ) . [2] dietrich stauffer and amnon aharony , introduction to percolation theory , taylor and francis , london , 1994 ( 2nd printing of 2nd edition ) .
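the simulation described above can be sketched in a few lines of python . this is only an illustrative toy version : the lattice size , the occupation probability , the cap on the number of jump attempts and the precise crossing criterion ( here the walker starts on a random occupied site and we record when it first reaches the right edge ) are assumptions made for this example and need not match the runs behind figures 1 - 3 .

```python
import numpy as np

def crossing_time(L, p, t_max, rng):
    """Random walk ('ant in the labyrinth') on the occupied sites of an L x L
    site-percolation lattice; return the number of jump attempts needed to reach
    the right edge, or None if t_max attempts are not enough."""
    lattice = rng.random((L, L)) < p                  # each site occupied with probability p
    occupied = np.argwhere(lattice)
    x, y = occupied[rng.integers(len(occupied))]      # start on a random occupied site
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for t in range(1, t_max + 1):
        dx, dy = steps[rng.integers(4)]
        nx, ny = x + dx, y + dy
        if 0 <= nx < L and 0 <= ny < L and lattice[nx, ny]:
            x, y = nx, ny                             # jumps succeed only onto occupied sites
        if x == L - 1:                                # information has crossed in the x-direction
            return t
    return None

rng = np.random.default_rng(1)                        # analogue of iseed
times = sorted(crossing_time(L=50, p=0.70, t_max=10_000, rng=rng) or 10_001
               for _ in range(9))                     # 10_001 marks "not crossed within t_max"
print("nine crossing times:", times)
print("median time (5th of the 9):", times[4])
```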
|
about 45,000 years ago , symbolic and technological complexity of human artefacts increased drastically . computer simulations of powell , shennan and thomas ( 2009 ) explained it through an increase of the population density , facilitating the spread of information about useful innovations . we simplify this demographic model and make it more similar to standard physics models . for this purpose , we assume that bands ( extended families ) of stone - age humans were distributed randomly on a square lattice such that each lattice site is randomly occupied with probability p and empty with probability 1-p . information spreads randomly from an occupied site to one of its occupied neighbours . + if we wait long enough , information spreads from one side of the lattice to the opposite side if and only if p is larger than the percolation threshold ; this process was called `` ant in the labyrinth '' by de gennes in 1976 . we modify it by giving the diffusing information a finite lifetime , which shifts the threshold upwards .
|
functional data analysis has grown into a comprehensive and useful field of statistics which provides a convenient framework to handle some high dimensional data structures , including curves and images .the monograph of has done a lot to introduce its ideas to the statistics community and beyond .several other monographs and thousands of papers followed .this paper focuses on a specific aspect of the mathematical foundations of functional data analysis , which is however of fairly central importance .we first describe the contribution of this paper in broad terms , and provide some more detailed background and discussion in the latter part of this section .perhaps the most important , and definitely the most commonly used , tool for dimension reduction of functional data is the principal component analysis .suppose we observe a sample of functions , , and denote by the scores of the with respect to the estimated functional principal components .the scores depend on two variables and , and to reflect the infinite dimensional nature of the data , it may be desirable to consider asymptotics in which both and increase . this paper establishes results that allow us to study the two dimensional partial sum process more specifically , we derive a uniform normal approximation and apply it to two problems related to testing the null hypothesis that all observed curves have the same mean function .we obtain new test statistics in which the number of the functional principal components , , increases slowly with the sample size .we hope that our general approach will be used to derive similar results in other settings .statistical procedures for functional data which use functional principal components ( fpc s ) often depend on the number of the components used to compute various statistics .the selection of an optimal has received a fair deal of attention .commonly used approaches include the cumulative variance method , the scree plot , and several forms of cross validation and pseudo information criteria . by now , most of these approaches are implemented in several r packages and in the matlab package pace . a related direction of research has focused on the identification of the dimension assuming that the functional data actually live in a finite dimensional space of this dimension , see and .the research presented in this paper is concerned with functional data which can not be reduced to finite dimensional data in an obvious and easy way .such data are typically characterized by a slow decay of the eigenvalues of the empirical covariance operator .figure [ fig:3b ] shows the eigenvalues of the empirical covariance operator of the annual temperature curves obtained over the period 18562011 in melbourne , australia , while figure [ fig:5 ] shows the cumulative variance plot for the same data set .it is seen that the eigenfunctions decay at a slow rate , and neither their visual inspection nor the analysis of cumulative variance provide a clear guidance on how to select .this data set is analyzed in greater detail in section [ s : stefan ] . in situationswhen the choice of is difficult , two approaches seem reasonable . in the first approach, one can apply a test using several values of in a reasonable range . if the conclusion does not depend on , we can be confident that it is correct .this approach has been used in applied research , see for a recent analysis of this type .the second approach , would be to let increase with the sample size , and derive a test statistic based on the limit . 
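before moving on , we note that the eigenvalue and cumulative - variance diagnostics discussed in connection with figures [ fig:3b ] and [ fig:5 ] are easy to reproduce numerically . the python sketch below computes the eigenvalues of the discretized empirical covariance operator and the cumulative proportion of variance they explain ; the melbourne temperature curves are not bundled here , so brownian - motion - like random curves serve as a stand - in , and the grid size and sample size are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_grid = 156, 365                       # e.g. 156 annual curves observed on a daily grid
dt = 1.0 / n_grid

# stand-in functional sample with slowly decaying eigenvalues (Brownian-motion-like curves)
X = np.cumsum(rng.standard_normal((n, n_grid)) * np.sqrt(dt), axis=1)

Xc = X - X.mean(axis=0)                    # center the curves
C_hat = (Xc.T @ Xc) / n                    # pointwise estimate of c(t, s) on the grid
eigval = np.linalg.eigvalsh(C_hat * dt)[::-1]   # eigenvalues of the discretized operator

cum_var = np.cumsum(eigval) / np.sum(eigval)
for d in (1, 2, 3, 5, 10, 20):
    print(d, round(float(cum_var[d - 1]), 3))     # cumulative variance explained by d FPCs
```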
in a sense ,the second approach is a formalization of the first one because if a limit as exists , then the conclusions should not depend on the choice of , if it is reasonably large . in the fda communitythere is a well grounded intuition that should increase much slower than , so asymptotically large need not be very large in practice .it is also known that the rate at which increases should depend on the manner in which the eigenvalues decay .we obtain specific conditions that formalize this intuition in the framework we consider . in more specific settings , contributions in this directionswere made by and .the work of is more closely related to our research : as part of the justification of their testing procedure , they establish conditions under which a limiting chi square distribution with degrees of freedom can be approximated by a normal distribution as . are concerned with a test of the equality of the covariance operators in two samples of gaussian curves . in the supplemental material, they derive asymptotics in which is allowed to increase with the sample size .our theory is geared toward testing the equality of mean functions , but we do not assume the normality of the functional observations , so we can not use arguments that use the equivalence of independence and zero covariances .we develop a new technique based on the estimation of the prokhorov lvy distance between the underlying processes and the corresponding normal partial sums .the paper is organized as follows . in section [ s :approx ] , we set the framework and state a general normal approximation result in theorem [ approx ] .this result is then used in sections [ s : cp ] and [ s : two - s ] to derive , respectively , change point and two sample tests based on an increasing number of fpc s .section [ s : stefan ] contains a small simulation study and an application to the annual melbourne temperature curves .all proofs are collected in the appendices .we consider functional observations defined over a compact interval .we can and shall assume without loss of generality that ] . in the testing problems that motivate this research , under the null hypothesis ,the observations follow the model where and is the common mean .we impose the following standard assumptions .[ as-1 ] are independent and identically distributed .[ as-2 ] and under these assumptions , the covariance function is square integrable on the unit square and therefore it has the representation where are the eigenvalues and are the orthonormal eigenfunctions of the covariance operator , i.e. they satisfy the integral equation [ eq - eig ] _j v_j(t)=(t , s)v_j(s)ds .one of the most important dimension reduction techniques of functional data analysis is to project the observations onto the space spanned by , the eigenfunctions associated with the largest eigenvalues. since the covariance function , and therefore , are unknown , we use the empirical eigenfunctions and eigenvalues defined by [ eq - eigemp ] _ j _ j(t)=_n(t , s)_j(s)ds , where with in this section , we require only two more assumptions , namely [ as-5] [ as-6] assumption [ as-5 ] is needed to ensure that the fpc s are uniquely defined . in theorem [ approx ]it could , of course , be replaced by requiring only that the first eigenvalues are positive and different , but since in the applications we let , we just assume that all eigenvalues are positive and distinct . if for some , then the observations are in the linear span of , i.e. 
they are elements of a space , so in this case we can not consider .assumption [ as-5 ] means that the observations are in an infinite dimensional space .assumption [ as-6 ] is weaker than the usual assumption .as will be seen in the proofs , subtle arguments of the probability theory in banach spaces are needed to dispense with the fourth moment . to state the main result of this section , define where denotes the transpose of vectors and matrices .set [ n - sum ] s_j , n(x)=_i=1^nx_i , j , 0x 1 , 1j d. we now provide an approximation for the partial sum processes defined in ( [ n - sum ] ) with suitably constructed wiener processes ( standard brownian motions ) .[ approx ] if assumptions [ as-1 ] , [ as-5 ] and [ as-6 ] hold , then for every we can define independent wiener processes such that where only depends on and the constant in is not crucial , it is a result of our calculations .theorem [ approx ] is related to the results of einmahl ( , ) who obtained strong approximations for partial sums of independent and identically distributed random vectors with zero mean and with identity covariance matrix . in our setting , for any fixed , the covariance matrix is not the identity , but this is not the central difficulty .the main value of theorem [ approx ] stems from the fact that it shows how the rate of the approximation depends on ; no such information is contained in the work of einmahl ( , ) , who did not need to consider the dependence on .the explicit dependence of the right hand side of on is crucial in the applications presented in the following sections in which the dimension of the projection space depends on the sample size . very broadly speaking, theorem [ approx ] implies that in all reasonable statistics based on averaging the scores , even in those based on an increasing number of fpc s , the partial sums of scores can be replaced by wiener processes to obtain a limit distribution .the right hand side of allows us to derive assumptions on the eigenvalues required to obtain a specific result . replacing the unobservable scores by the sample scores is relatively easy .we will illustrate these ideas in sections [ s : cp ] and [ s : two - s ] .over the past four decades , the investigation of the asymptotic properties of partial sum processes has to a large extent been motivated by change point detection procedures , and this is the most natural application of theorem [ approx ] . the research on the change point problem in various contexts is very extensive , some aspects of the asymptotic theory are presented in .detection of a change in the mean function was studied by who considered a procedure in which the number of the fpc s , , was fixed , and the asymptotic distribution of the test statistic depended on .we show in this section that it is possible to derive tests with a standard normal limiting distribution by allowing the to depend on the sample size .we want to test whether the mean of the observations remained the same during the observation period , i.e. we test the null hypothesis ( `` = '' means equality in ) . under the null hypothesis , the follow model ( [ null - model ] ) in which is an unknown common mean function under .the alternative hypothesis is under the mean changes at an unknown time . 
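the objects appearing in theorem [ approx ] can be assembled numerically as follows . the python sketch below estimates the first d empirical eigenvalues and eigenfunctions on a grid , computes the normalized sample scores and their partial sums , and evaluates a simple cusum - type functional of the resulting bridge processes . it is only meant to show how these quantities are formed : the correction and studentization terms that give the pivotal limits of the following sections are not reproduced , and the simulated curves , the mean - shift function and all sizes are arbitrary toy choices .

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_grid, d = 200, 200, 5
dt = 1.0 / n_grid

# toy functional sample; a mean-shift curve is added after the midpoint (a change point)
X = np.cumsum(rng.standard_normal((n, n_grid)) * np.sqrt(dt), axis=1)
X[n // 2:] += 0.4 * np.sin(np.pi * np.linspace(0.0, 1.0, n_grid))

Xc = X - X.mean(axis=0)
lam, v = np.linalg.eigh((Xc.T @ Xc) / n * dt)       # discretized empirical covariance operator
lam, v = lam[::-1][:d], v[:, ::-1][:, :d]           # d largest eigenvalues / eigenfunctions
v = v / np.sqrt(dt)                                 # rescale so each eigenfunction has unit L2 norm

xi = (Xc @ v) * dt / np.sqrt(lam)                   # normalized sample scores, one column per FPC
S = np.cumsum(xi, axis=0)                           # partial sums of the scores
bridge = (S - np.outer(np.arange(1, n + 1) / n, S[-1])) / np.sqrt(n)
stat = float(np.max(np.sum(bridge ** 2, axis=1)))   # a simple CUSUM-type functional of the scores
print("max_x sum_j bridge_j(x)^2 =", round(stat, 3))
```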
to derive a new class of tests, we introduce the process ^ 2 -x(1-x)\right\},\;\;0\leq u , x \leq 1,\ ] ] where the process contains the cumulative sums which measure the deviation of the partial sums from their `` trend '' under , and a correction term needed to ensure convergence as . to obtain a limit which does not depend on any unknown quantities, we need to impose assumptions on the rate at which increases with . intuitively , the assumptions below state that is much smaller than the sample size , the largest eigenvalues are not too small , and that the difference between the consecutive eigenvalues tends to zero slowly . very broadly speaking , these assumptions mean that the distribution of the observations must sufficiently fill the whole infinite dimensional space .[ d-0] [ d-1] [ d-2] [ d-3] [ d-4 ] where , . with these preparations, we can state the main result of this section .[ th-1 ] if assumptions [ as-1][as-5 ] and [ d-0][d-4 ] are satisfied , then ^ 2,\ ] ] where is a mean zero gaussian process with =2ux^2(1-y)^2 , \ \\ 0\leq u\leq v \leq 1 , \ \0\leq x\leq y \leq 1.\ ] ] one can verify by computing the covariance functions that where is a bivariate wiener process , i.e. is a gaussian process with and =\min(v , v')\min(y , y') ] .we generate them by using iid normal increments on 1,000 equispaced points in $ ] ( random walk approximation ) .( example [ ex-1 ] shows that for the brownian motion the assumptions of theorem [ th-1 ] are satisfied . )alternatives are obtained by adding the curve after a change point or to the observations in the second sample .the parameter regulates the size of the change or the difference in the means in two samples ..critical values for the distribution of .[ cols="^,^,^,^ " , ]we start with some elementary properties of the projections .let denote the euclidean norm of vectors .[ l-2.1 ] if assumptions [ as-1 ] , [ as-5 ] and [ as-6 ] hold , then [ eq-2.1 ] e_1=*0 * , [ eq-2.2 ] e_1_1^t=*i*_d , where is the identity matrix .moreover , [ eq-2.3 ] e|_1|^3e||z_1||^3 ( _ j=1^d 1/_j)^3/2 and for all [ eq-2.3a ] e|_1,j|^3e||z_1||^3 /_j^3/2 . since ,the relation in ( [ eq-2.1 ] ) is obvious .the orthonormal functions and satisfy ( [ eq - eig ] ) , so we get proving ( [ eq-2.2 ] ) . using the definition of the euclidean norm and the cauchy schwarz inequality we conclude since . taking the expected value of the equation abovewe obtain ( [ eq-2.3 ] ) .clearly , the next lemma plays a central role in the proof of theorem [ approx ] .[ sena ] if assumptions [ as-1 ] , [ as-5 ] and [ as-6 ] hold , then for all we can define independent identically distributed standard normal vectors in such that where is an absolute constant .the result is a consequence of theorem 6.4.1 on p. 207 of and the corollary to theorem 11 in .we note that [ eq-2.4 ] ( e|_1|^3+e|_1|^3)^1/4(e|_1|^3)^1/4+(e|_1|^3)^1/4 . also , since is the sum of the squares of independent standard normal random variables , minkowski s inequality implies [ eq-2.5 ] e|_1|^3c_1d^3/2 , with some constant , and clearly [ eq-2.6 ] d^3/2_1 ^ 3/2(_=1^d1/_)^3/2 . 
combining lemma [ sena ] with ( [ eq-2.4])([eq-2.6 ] ) ,we conclude that where does not depend on .+ in the next lemma we provide an upper bound for the variance of , where is defined in lemma [ sena ] .[ sena-2 ] if assumptions [ as-1 ] , [ as-5 ] and [ as-6 ] hold , then for any we get where does not depend on .let first we write +e[u_n^2(j)i\{|u_n(j)| > r_n\}]\\ & \leq r_n^2+\frac{2}{n}e\biggl[\biggl(\sum_{i=1}^n\xi_{i , j}\biggl)^2i\{|u_n(j)|>r_n\}\biggl]+\frac{2}{n } e\biggl[\biggl(\sum_{i=1}^n \gamma_{i , j}\biggl)^2i\{|u_n(j)|>r_n\}\biggl].\end{aligned}\ ] ] using hlder s inequality we get that &\leq e\biggl[\biggl|\sum_{i=1}^n\xi_{i , j}\biggl|^3\biggl]^{2/3}\biggl [ p\{|u_n(j)|>r_n\}\biggl]^{1/3}\\ & \leq e\biggl[\biggl|\sum_{i=1}^n\xi_{i , j}\biggl|^3\biggl]^{2/3}r_n^{1/3}\end{aligned}\ ] ] by ( [ eq-2.7 ] ) .applying now rosenthal s inequality ( cf . , p. 59 ) we obtain where is an absolute constant . hence and therefore & \leq c_7(n/\lambda_j)r_n^{1/3}\\ & \leq c_8 n^{23/24}\frac{1}{\lambda_j}\biggl(d^{1/4 } \biggl(\sum_{\ell=1}^d1/\lambda_\ell\biggl)^{3/8}\biggl)^{1/3}.\end{aligned}\ ] ] following the previous arguments one can show that \leq c_9 n^{23/24}\frac{1}{\lambda_j}\biggl(d^{1/4 } \biggl(\sum_{\ell=1}^d1/\lambda_\ell\biggl)^{3/8}\biggl)^{1/3}.\ ] ] the constants and do not depend on . since in view of assumption [ d-2 ], is smaller than the latter rates , this completes the proof of lemma [ sena-2 ] .we use a blocking argument to construct a wiener process which is close to the partial sums .let be the length of the blocks to be chosen later .let .for we write using the s , the independent standard normal random variables constructed in lemma [ sena ] , we define [ wien ] w_j(k)=_i=1^k_i , j,1j d , 1k n. by lemma [ sena-2 ] we get for any and via kolmogorov s inequality ( cf . ) , p. 54 ) one can define independent wiener processes ( standard brownian motions ) such that ( [ wien ] ) holds .we obtained approximations for the partial sums of the s at the points next we show that neither the partial sums of the s nor the wiener processes can oscillate too much between and .+ using again rosenthal s inequality ( cf . , p. 59 ) we obtain for all that on account of lemma [ l-2.1 ] . combining the marcinkiewicz zygmund inequality ( cf . , p. 82 ) with ( [ rose ] ) we conclude [ mz ] e(_1h m| _ i=1^h_i , j|)^3c_12(m/_j)^3/2 . applying ( [ mz ] )we get lemma 1.2.1 of yields now choosing and with , it follows from ( [ kol ] ) , ( [ max-1 ] ) and ( [ max-2 ] ) for all that the result now follows from ( [ last ] ) with first investigate the weak convergence of the process with given by .the difference between and is that is computed from the empirical projections , while is based on the unknown population eigenfunctions .[ pure ] if assumptions [ as-1 ] , [ as-5 ] , [ as-6 ] and [ d-0][d-3 ] hold , then ^ 2,\ ] ] where the gaussian process is defined in theorem [ th-1 ] . to prove theorem [ pure ] , we need several lemmas and some additional notation . let where is defined in ( [ n - sum ] ) and the s are the wiener processes of theorem [ approx ] .it follows from the definition that for each the processes are independent brownian bridges .[ bridge - approx ] if assumptions [ as-1 ] , [ as-5 ] and [ as-6 ] hold , then where and only depend on and first we write since the s are brownian bridges , the distribution of the supremum functional of the brownian bridge ( cf . 
) gives where is an absolute constant .now the result follows immediately from theorem [ approx ] .now we prove the weak convergence of the partial sums of the squares of independent brownian bridges .let be independent brownian bridges .[ hahn ] as , we have that ^ 2,\ ] ] where the gaussian process is defined in theorem [ th-1 ] . the proof is based on theorem 2 of .let denote a brownian bridge and .it is clear that for all . according to ,there is a random variable such that for all and let .we note thus we get [ c-1 ] e(v(t)-v(s))^2c_16|t - s|(1/|t - s|)0t , s1 and \leq c_{17}(|t - s|\log(1/|t - s|))^2 \ ] ] for all the estimates in ( [ c-1 ] ) and ( [ c-2 ] ) yield that the conditions of theorem 2 of are satisfied , completing the proof lemma [ hahn ] .it follows immediately from lemmas [ bridge - approx ] and [ hahn ] .the transition from theorem [ pure ] to theorem [ th-1 ] is based on the following lemma , in which the norm is the hilbert schmidt norm .[ dunsch ] if assumptions [ as-1 ] , [ as-2 ] and [ as-5 ] hold , then [ lamb ] are random signs , and are defined in assumption [ d-4 ] .inequality ( [ lamb ] ) can be deduced from the general results presented in section vi.1 of or in .these results are presented in a convenient form in lemma 2.2 in .finally lemma 2.3 in gives ( [ vee ] ) . introducing we can write elementary arguments give by the cauchy schwarz inequality we have [ first ] _j=1^ d|-|u_n(x),_j^2||u_n(x)||^2_j=1^ d and since , [ second ] _j=1^ d(u_n(x),_j^2-u_n(x)-_jv_j^2 ) ||u_n(x)||^2_j=1^ d||_j-_jv_j||^2 .it follows from the results of ( for a shorter proof we refer to theorem 6.3 in ) that due to assumption [ as-6 ] we can use a marcinkiewicz zygmund type law of large numbers for sums of independent and identically distributed random functions in banach spaces ( cf ., e.g. , or ) to conclude assumption [ d-3 ] gives that and therefore by lemma [ dunsch ] so by lemma [ dunsch ] and ( [ first ] ) we have on account of assumptions [ d-1 ] and [ d-3 ] .similarly , ( [ second ] ) and assumption [ d-4 ] yield theorem [ th-1 ] now follows from theorem [ pure ] . by lemma [ bridge - approx ] and , relation is proven if we show that [ cp-1 ] \{_i=1^d _ 0x 1b^2_i(x)-d_0 } n(0,1 ) , where independent brownian bridges .clearly , ( [ cp-1 ] ) is an immediate consequence of the central limit theorem .similarly , to establish , we need to show only that the above result is known , see remark 2.1 in .the same argument can be used to prove .we note that under the null hypothesis define the proof of theorem [ th - two-1 ] is based on lemma [ sena ] , we need to write as a single sum of independent identically distributed random processes and an additional small remainder term .let be an integer and define the integers and .next we define clearly , where we will show first if is a function with , then for every [ e-1 ] e|_=1^nz _ , v|^3c_1n^3/2 and [ e-2 ] e|_=1^nq _ , v|^3c_2n^3/2 , where and only depends on and , respectively . using rosenthal s inequality ( cf . , p. 59 ) we get where is an absolute constant .it is easy to see that which implies ( [ e-1 ] ) .the same argument can be used to prove ( [ e-2 ] ) .+ next we define the function it is clear that is a covariance function and therefore we can find and orthonormal functions satisfying now we define the vector it is easy to see that , are independent and identically distributed random vectors with mean and , where is the identity matrix . 
also , ( [ e-1 ] ) and ( [ e-2 ] ) imply that where only depends on and using lemma [ sena ] we obtain similarly to ( [ eq-2.7 ] ) that there are independent standard normal random vectors in such that where does not depend on .let it follows from ( [ e-1 ] ) and ( [ e-2 ] ) that with some constant , not depending on we have and therefore by markov s inequality for every [ two - eq-2 ] p\{n^-1/2||>x}c_7(_=1^d1/|_)^3/2 . let next we choose in ( [ two - sena ] ) , ( [ two - eq-2 ] ) and in ( [ two - eq-2 ] ) to conclude that there is , a standard normal random vector in such that where . using the definitions of and , together with assumption [ m-1 ] ,we conclude [ two - eq-4 ] ||c_p - c_n , m||=o(n^-1/4 ) , so by lemma 2.3 of , cf .lemma [ dunsch ] , we have [ two - eq-5 ] |_i-|_i|c_9 ||c_p - c_n , m||=o(n^-1/4 ) . using assumption [ m-3 ]we conclude that hence it follows from ( [ two - eq-3 ] ) and assumption [ m-3 ] that since is a random variable with degrees of freedom , assumption [ m-3 ] yields that it is well known that converges in distribution to a standard normal random variable , and therefore where stands for a standard normal random variable .+ the difference between and is that the projections are done into the direction of different functions ( s and s , respectively ) and the normalizations ( s and s , respectively ) are also different .however , using the marcinkiewicz zygmund law of large numbers in a banach space together with and assumption [ m-3 ] , we obtain that hence , in view of , also and there are random signs such that so repeating the arguments used in the proof of theorem [ th-1 ] , we get completing the proof .a. m. garsia .continuity properties of gaussian processes with multidimensional time parameter . in _ proceedings of the berkeley symp.ath ._ , volume 2 , pages 369374 .university of california press , 1970 .o. gromenko , p. kokoszka , l. zhu , and j. sojka .estimation and testing for spatially indexed curves with application to ionospheric and magnetic field trends ._ the annals of applied statistics _ , 6:0 669696 , 2012 .j. o. howell and r. l. taylor .zygmund weak laws of large numbers for unconditional random elements in banach spaces . in j.kuelbs , editor , _ probability in banach spaces .proceedings of the third international conference held at tufts university , medford , mass ._ , pages 219230 .springer , 1980 .v. m. panaretos , d. kraus , and j. h. maddocks .second - order comparison of gaussian random functions and the geometry of dna minicircles ._ journal of the american statistical association _, 105:0 670682 , 2010 .
|
functional principal components ( fpcs ) provide the most important and most extensively used tool for dimension reduction and inference for functional data . the selection of the number , d , of the fpcs to be used in a specific procedure has attracted a fair amount of attention , and a number of reasonably effective approaches exist . intuitively , these approaches assume that the functional data can be sufficiently well approximated by a projection onto a finite dimensional subspace , and that the error resulting from such an approximation does not impact the conclusions . this has been shown to be a very effective approach , but it is desirable to understand the behavior of many inferential procedures by considering projections onto subspaces spanned by an increasing number of the fpcs . such an approach reflects more fully the infinite dimensional nature of functional data , and allows one to derive procedures which are fairly insensitive to the selection of d . this is accomplished by considering limits as d tends to infinity together with the sample size . we propose a specific framework in which we let d grow with the sample size , by deriving a normal approximation for the partial sum process of the scores , where the score of a function with respect to an fpc is its projection onto that fpc . our approximation can be used to derive statistics that use segments of observations and segments of the fpcs . we apply our general results to derive two inferential procedures for the mean function : a change point test and a two sample test . in addition to the asymptotic theory , the tests are assessed through a small simulation study and a data example .
|
the minimum time of descent under gravity has historical importance in connection with fermat s principle , a problem that remains ever popular to the readers of general physics matters .our aim here is to propose an experiment for the introductory mechanics laboratory such that the students explore the minimum time curve known as a cycloid ( fig .1 ) themselves . a small billiard ball rolls from rest under gravity from an initial ( fixed ) point o to a final ( fixed ) point a along different paths ( fig . 2 and 3 ) .the relation between its speed and vertical position can easily be found from the energy conservation i.e. or in which and are the kinetic and the potential energy respectively . out of infinite number of possible paths joining o to a we are interested in the one that takes the minimum time .this is one of the typical extremal problems encountered in mechanics under the title of _ brachistochrone problem _ whose solution is given in almost all books of mechanics .the time of slide between o and a is given by in which and is the element of the arclength along the path ( eq . ( 2 ) below ) .note that for a billiard ball , as an extended object with inertia the relation between and modifies into , which does nt change the nature of the minimum time curve .we shall state simply the result : the curve is a cycloid expressed mathematically in parametric form , is the maximum point since is a downward coordinate along the curve . for different pathsthe pathlength of the curve can be obtained from the integral expression it is not possible to evaluate this arclenght unless we know the exact equation for the curve .two exceptional cases , are the straight line and the cycloid . as a curvethe cycloid has the property that at o / a it becomes tangent to the vertical/ horizontal axis .although the lower point ( i.e. a ) could be chosen anywhere before the tangent point is reached , for experimental purpose we deliberately employ the half cycloid , so that identification of the brachistochrone becomes simpler . by using a string and rulerwe can measure each pathlength to great accuracy .the experimental data will enable us to identify the minimum time curve , namely the cycloid .\1 ) a thin , grooved track made of a long flexible metal bar ( or hard plastic ) fixed by clamps .\2 ) a small billiard ball .\3 ) a digital timer connected to a fork - type light barrier .\4 ) string and ruler to measure arclengths .the experimental set up is seen in fig .we note that the track must be at least 2 meters long both for a good demonstration and to detect significant time differences .the track is fixed at a by a screw while the other end of the track passing through the fixed point o is variable this gives us the freedom to test different paths , with the crucial requirement that in each case the starting point o at which the timer is triggered electronically remains fixed .this particular point is the most sensitive part of the experiment which is overcome by using a fork - type light barrier ( optic eyes ) both at o and a. as the path varies the light flash can be tolerated to intersect any point of the ball with a negligible error .let us add also that an extra piece of track at a is necessary to provide proper flattening at the minimum of the inverted half cycloid . 
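the descent times entering table 1 can also be estimated directly from the time integral t = integral of ds / v . the following python sketch evaluates this integral numerically for the straight line and for the half cycloid ending at the same point a ; the value of g , the cycloid parameter and the use of v = sqrt( 2 g y ) for frictionless sliding ( or v^2 = ( 10/7 ) g y for a solid ball rolling without slipping ) are assumptions made for illustration , not values taken from the experiment .

```python
import numpy as np
from scipy.integrate import quad

g = 9.81      # m/s^2 (assumed)
a = 0.3       # cycloid parameter in metres (assumed); half cycloid runs from o=(0,0) down to (pi*a, 2*a)

def speed(y, rolling=False):
    # energy conservation: (1/2) v^2 = g*y for sliding; (7/10) v^2 = g*y for a solid ball rolling without slipping
    return np.sqrt((10.0 / 7.0) * g * y) if rolling else np.sqrt(2.0 * g * y)

def time_cycloid(rolling=False):
    # x = a(theta - sin theta), y = a(1 - cos theta); arclength element ds = 2*a*sin(theta/2) dtheta
    integrand = lambda th: 2.0 * a * np.sin(th / 2.0) / speed(a * (1.0 - np.cos(th)), rolling)
    t, _ = quad(integrand, 1e-9, np.pi)
    return t

def time_straight(rolling=False):
    x_end, y_end = np.pi * a, 2.0 * a             # same end point as the half cycloid
    slope = np.sqrt(1.0 + (x_end / y_end) ** 2)   # ds = slope * dy along the straight chord
    integrand = lambda y: slope / speed(y, rolling)
    t, _ = quad(integrand, 1e-9, y_end)
    return t

for rolling in (False, True):
    label = "rolling ball" if rolling else "sliding point mass"
    print(f"{label}: cycloid {time_cycloid(rolling):.3f} s, straight line {time_straight(rolling):.3f} s")
```

the ratio of the two times does not depend on the overall scale of the track , so the cycloid arrives first for any choice of the parameter ; only the absolute times depend on the assumed dimensions .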
from fig .3 , path is identified as a straight line which is added here for comparison with the otherwise curved paths .as we change the path down from to we record the time of each descent by a digital timer .we observe that as we go from to with exact tangential touches to the axes , the time decreases , reaching a minimum at . from on , the time starts to increase again toward with almost tangential touches at a. in this way we verify experimentally that can be identified , as the minimum time curve .by using a string and ruler we measure the length of each path as soon as we record its time of descent .the length of the straight line path , for example will be ( recall from eq .( 2 ) ) from the simple hypotenuse theorem with maximum height . , width=453 ] to .we consider different paths , labelled as , width=453 ] where stands for the maximum height . from fig . 1 and tab .1 we see that is calculated theoretically ( ) and experimentally ( ) , is acceptable within the limits of error analysis .addition of errors involved in the readings of arclengths , time and averaging results will minimize the differences .it should also be taken care that while in rolling , the ball does nt distort the track ..the length and the time taken , corresponding to each path [ cols="^,^,^,^,^,^,^,^,^ " , ] the pathlength of the cycloid i. e. , eq .( 3 ) by substitution from eq . ( 1 ) can be obtained to satisfy all one has to do after taking each time record is to check that the minimum time curve satisfies eq . ( 5 ) , and it is tangent at o / a which characterize nothing but the brachistochrone problem. theoretically we have while experimental value is which implies an error less than one percent . as an alternative method which we tried also to convince ourselves, we suggest to use a digital camera to take the picture of each path and locate them on a common paper for comparison .as noticed , in performing the experiment we have used only half of the cycloid if space is available a longer track can be used to cover the second half , , as well . owing to the symmetry of a cycloid ,however , this is not necessary at all .a cycloid arises in many aspects of life .it is the curve generated by a fixed point on the rim of a circle rolling on a straight line .diving of birds / jet fighters toward their targets , watery sliding platforms in aqua parks are some of the examples in which minimum time curves and therefore cycloids are involved . in comparison with a circle and ellipse , cycloid is a less familiar curve at the introductory level of mathematics / geometry .the unusual nature comes from the fact that both the angle and its trigonometric function arise together so that the angle ca nt be inverted in terms of coordinates in easy terms . 
yet the details of the mathematics , which are more apt for sophomore classes , can easily be suppressed . changing the track before each roll of the ball , and measuring both the time of fall and the length of the curve , are easy and instructive tasks to conduct as a physics experiment . the main task the students are expected to do is to fill in the data of table 1 . it will not be difficult for students to discover that the cycloid is truly the minimum time curve of fall under a constant gravitational field . let us complete our analysis by noting that a simple extension of our experiment can be made by using variable initial points . namely , instead of the fixed point o , the ball can be released from any other point of the cycloid between o and a , which does not change the time of fall to a . this introduces the students to the problem of the tautochrone , which is also interesting .
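the closing remark about the tautochrone can itself be verified numerically : on a cycloidal track , the time of descent to the lowest point does not depend on the release point . the short sketch below checks this with the same assumed values of g and of the cycloid parameter as in the previous sketch ; it is an illustration only , not part of the proposed experiment .

```python
import numpy as np
from scipy.integrate import quad

g, a = 9.81, 0.3   # same assumed values as in the previous sketch

def descent_time(theta0):
    """Time to reach the lowest point of the cycloid when released at rest from parameter angle theta0."""
    y0 = a * (1.0 - np.cos(theta0))
    integrand = lambda th: 2.0 * a * np.sin(th / 2.0) / np.sqrt(2.0 * g * (a * (1.0 - np.cos(th)) - y0))
    t, _ = quad(integrand, theta0 + 1e-9, np.pi)
    return t

for theta0 in (0.2, 0.5, 1.0, 2.0):
    print(f"release angle {theta0:4.2f} rad -> descent time {descent_time(theta0):.4f} s")
print("pi * sqrt(a/g) =", np.pi * np.sqrt(a / g))   # common value predicted by the tautochrone property
```

all release angles give the same descent time , equal to pi sqrt( a / g ) , which is precisely the property the students can test by releasing the ball from different points of the cycloidal track .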
|
we establish an instructive experiment to investigate the minimum time curve traveled by a small billiard ball rolling in a grooved track under gravity . our intention is to popularize the concept of _ minimum time curve _ anew , and to propose it as a feasible physics experiment both for freshmen and sophomore classes . we observed that even the non - physics major students did enjoy such a cycloid experiment .
|
the simulation codes of lattice gauge theory require substantial computing resources in order to calculate various matrix elements with sufficient precision to test the standard model against emerging experimental measurements .historically , these codes have demanded the use of large supercomputers at significant cost .both general purpose commercial supercomputers and custom , or `` purpose - built '' , supercomputers have been employed .traditional supercomputers came with very high prices .the price of purpose - built supercomputer hardware was lower , but the design and construction of such machines required significant amounts of engineering and physicist manpower . in the last half decade , the performance of commodity computing equipment has increased to the point that tightly coupled clusters of such machines can compete with traditional supercomputers in capacity ( lattice size ) and throughput ( mflop / sec ) , and with purpose - built supercomputers in price / performance .commodity systems have been so successful across a wide spectrum of applications in many academic fields , that more than half of the supercomputers listed on the `` top500 '' supercomputer list are clusters . in this paper ,i discuss the requirements placed on clusters by lattice qcd codes and the historical performance trends of commodity computing equipment for meeting those requirements . extrapolating from these trends , together with vendor roadmaps ,allows prediction of the performance and price / performance of reasonable cluster designs in the next few years .inversion of the dirac operator ( _ dslash _ ) is the most computationally intensive task of lattice codes .the improved staggered action ( _ asqtad _ ) will be used throughout this paper for quantitative examples . during each iteration of the inversion of the improved staggered _dslash _ , eight sets of su(3 ) matrix - vector multiplies occur using nearest and next - next - nearest neighbor spinors .when domain decomposition is used on a cluster , ideally these floating point operations overlap with the communication of the hyper - surfaces of the sub - lattices held on neighboring nodes . using global sums ,the results of these sweeps over the full lattice are accumulated and communicated to all nodes in order to modify the spinors for the next iteration ._ dslash _ inversion throughput depends upon the floating point performance of the processors , the bandwidth available for reading operands from memory , the throughput of the i / o bus of the cluster nodes , and the bandwidth and latency of the network fabric connecting the computers . on any cluster , one of these factors will be the limiting factor which dictates performance for a given problem size .minimization of price / performance requires designs which balance these factors .most floating point operations in lattice codes occur during su(3 ) matrix - vector multiplies . for operands in cache, the throughput of these multiplies is dictated by processor clock speed and the capabilities of the floating point unit .table [ matvec ] shows the performance of matrix - vector kernels on four intel processors introduced since the year 2000 .the `` c '' language kernels used are from the milc code .the use of simd instructions on intel - brand and compatible cpus , as suggested by csikor _et al . 
_ for amd k6 - 2 cpus and implemented for the intel sse unit by lscher , can give significant performance improvements .table [ matvec ] lists the performance of two styles of sse implementation .the first , site wise , uses a conventional data layout scheme with the real and imaginary pieces of individual matrix and vector elements adjacent in memory .the second , fully vectorized , follows pochinsky s practice of placing the real components of the operands belonging to four consecutive lattice sites consecutively in memory , followed by the four imaginary components .whereas site wise implementations require considerable shuffling of operands in the sse registers in order to perform complex multiplies , the fully vectorized form requires only loads , stores , multiplies , additions , and subtractions ..su(3 ) matrix - vector multiply performance .results are given in mflop / sec . [ cols="<,^,^,^",options="header " , ] given the historical performance trends , along with vendor roadmaps , we can attempt predictions of future lattice qcd cluster price / performance .these predictions are based upon the following assumptions : * intel ia32 processors will be available at 4.0 ghz and 1066 mhz fsb in 2005 .* processors will be available either singly at 5.0 ghz , or in dual core equivalence ( _ e.g. _ , dual core 4.0 ghz processors ) in 2006 .* equivalent memory bus speed will exceed 1066 mhz by 2006 through fully buffered dimm technology or other advances .* the cost of high performance networks such as infiniband will drop as these networks increase in sales volume and the network interfaces are embedded on motherboards .the predictions assume that several new technologies are delayed by one year from their first appearance on current vendor roadmaps .for example , vendor roadmaps predict that 1066 mhz memory buses will appear in 2004 , dual core processors in 2005 , and fully buffered dimm technology in 2005 . by year ,the details of the predicted values in fig .[ predict ] , also summarized in table [ tbl_predict ] are as follows . in mid-2004 ,the latest fermilab cluster used 2.8 ghz p4e systems at 900 per node . in late 2004, a cluster based on 3.4 ghz p4e processors with and infiniband would sustain 1.4 gflop / node , based on the faster processors and the improved communications . in late 2005 , a cluster based on 4.0 ghz processors with 1066 mhz fsb would sustain 1.9 gflop / node , based upon faster processors and higher memory bandwidth . in late 2006 , a cluster based on the equivalent of 5.0 ghz processors with memory bandwidth greater than 1066 mhz fsb would sustain 3.0 gflop / node .the network fabrics used on clusters limit both achievable performance and cost effectiveness . as discussed previously , the largest single high performance network switches currently available are 288-port infiniband switches . to build a larger cluster based on such a switched network , cascading of multiple switches is required . to preserve bisectional bandwidth through the fabric , switches in a two - layer cascaded fabric have as many connections to other switches as they do to compute nodes . 
cascading increases the switch costs of a fabric .toroidal gigabit ethernet mesh designs do not have this limitation .however , the use of ethernet requires custom communications software to replace the traditional tcp / ip communications protocol ; tcp / ip introduces too much latency for lattice qcd codes .in contrast , the communications software which is supplied with networks such as myrinet and infiniband not only is widely used and robust , but it also requires no modification for lattice qcd . in terms of reduced custom software development , significant benefits may be derived from using popular high performance switched networks , even though the hardware costs may be greater . the term `` strong scaling '' refers to the decrease in time to solve a fixed size problem as additional nodes are employed . communications latencies limit strong scaling . as nodecounts increase , the size of the local lattice stored on each node decreases , and so the size of the messages used to communicate neighboring hyperplanes also decreases . because of the dispersion of communications bandwidth with message size caused by latency , the decreasing bandwidth available with shorter messages will eventually limit the performance as the number of nodes increases .the reliability of the nodes in a cluster will limit the length of the longest calculation .typical mtbf figures for commodity computers are of order to hours . for nodes ,an mtbf of hours will result in an average of one hardware failure every 100 hours .operating system stability may play a role as well , with `` mean time between reboots '' similarly dictating maximum job lengths .this problem can be addressed by checkpointing long calculations at regular intervals , so that they may be restored at an intermediate position after cluster repair .note that switched networks are very tolerant of node failure in that a given sublattice may be relocated to any available node in the cluster at the start of the next job .mesh networks , on the other hand , are generally limited to nearest computer neighbor communications unless a large latency penalty is incurred .the loss of a node within one of the dimensions of a mesh architecture requires rewiring to route around the failed computer .since 1999 , pc clusters have exhibited steadily improving price / performance for lattice qcd ; the measured price / performance halving time for improved staggered codes over this time period was 1.25 years .performance trends indicate that balanced designs will be achievable on large scale clusters in the future . with the advent of , i / o bus designs will have more than sufficient bandwidth to match the communications requirements of many future generations of processors .networks such as infiniband similarly have excess bandwidth today , and vendor roadmaps indicate performance growth which will pace or exceed processor requirements .improvements in memory designs should provide sufficient memory bandwidth to balance faster processors .to date , the largest clusters in the us specifically devoted to lattice qcd have been no larger than 256 processors and have been based on myrinet or gigabit mesh networks . 
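the strong scaling limit described above can be made quantitative with a toy model : for a fixed global lattice divided evenly among the nodes , the face messages shrink as nodes are added , and the fixed per - message latency consumes a growing fraction of the communication time . the sketch below is such a back - of - the - envelope estimate in python ; the lattice volume , the bytes per boundary site , the latency and the bandwidth are assumed round numbers , not measured values for any particular network .

```python
# Toy strong-scaling estimate: effective network bandwidth vs. node count for a fixed global lattice.
global_sites = 32**3 * 64          # assumed global lattice volume
bytes_per_site = 6 * 2 * 4         # e.g. a 3-component complex vector in single precision, both directions (assumed)
latency = 5e-6                     # seconds per message (assumed)
peak_bw = 800e6                    # bytes/s sustained point-to-point (assumed)

def face_message(nodes):
    """Bytes exchanged with one neighbour per iteration and the resulting effective bandwidth."""
    local_sites = global_sites / nodes
    local_l = local_sites ** 0.25              # linear size of a roughly hypercubic sub-lattice
    msg = (local_l ** 3) * bytes_per_site      # hyper-surface (one face) of the sub-lattice
    t_comm = latency + msg / peak_bw
    return msg, msg / t_comm

for nodes in (16, 64, 256, 1024):
    msg, eff = face_message(nodes)
    print(f"{nodes:5d} nodes: face message {msg / 1024:7.1f} KiB, "
          f"effective bandwidth {eff / 1e6:6.0f} of {peak_bw / 1e6:.0f} MB/s peak")
```

as the per - node sub - lattice shrinks , the latency term increasingly dominates , which is the dispersion of bandwidth with message size referred to above .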
based on performance and cost trends ,it is clear that significant clusters will be constructed in the coming years .a 512 processor cluster in 2005 should sustain 1.9 gflop / sec per node on the improved staggered action at less than 0.50/mflop .leveraging the results of the wide spread use of commodity clusters , these facilities will require neither specialized designs nor operational procedures .f. csikor _et al . _ ,* 134 * ( 2001 ) 139 , [ hep - lat/9912059 ] .m. lscher , nucl .b ( proc . suppl . ) * 106 * ( 2002 ) 21 .a. pochinsky , these proceedings . .s. gottlieb , physics.indiana.edu/ sg / pcnets/. z. fodor , s.d .katz , g. papp , comput .* 152 * ( 2003 ) 121 , [ hep - lat/0202030 ] .w. watson , private communication . .
|
in the last several years , tightly coupled pc clusters have become widely applied , cost effective resources for lattice gauge computations . this paper discusses the practice of building such clusters , in particular balanced design requirements . i review and quantify the improvements over time of key performance parameters and overall price to performance ratio . applying these trends and technology forecasts given by computer equipment manufacturers , i predict the range of price to performance for lattice codes expected in the next several years .
|
the dimensions , proportions and physical attributes of a person s face are unique .biometric facial recognition systems will measure and analyze the overall structure , shape and proportions of the face : distance between the eyes , nose , mouth , and jaw edges ; upper outlines of the eye sockets , the sides of the mouth , the location of the nose and eyes , the area surrounding the cheekbones . at enrolment ,several pictures are taken of the user s face , with slightly different angles and facial expressions , to allow for more accurate matching . for verification and identification, the user stands in front of the camera for a few seconds , and the scan is compared with the template previously recorded .benefits of face biometric systems being that it is not intrusive , can be done from a distance , even without the user being aware of it ( for instance when scanning the entrance to a bank or a high security area ) .weaknesses of face biometric systems : face biometric systems are more suited for authentication than for identification purposes , as it is easy to change the proportion of one s face by wearing a mask , a nose extension , etc . also , user perceptions / civil liberty : most people are uncomfortable with having their picture taken. applications of face biometrics include access to restricted areas and buildings , banks , embassies , military sites , airports , law enforcement .one advantage of passwords over biometrics is that they can be re - issued .if a token or a password is lost or stolen , it can be cancelled and replaced by a newer version .this is not naturally available in biometrics .if someone s face is compromised from a database , they can not cancel or reissue it .cancellable biometrics is a way in which to incorporate protection and the replacement features into biometrics .it was first proposed by n. k. ratha , j. h. connell and r. m. bolle .several methods for generating cancellable biometrics have been proposed . the first fingerprint based cancellable biometric system was designed and developed by s. tulyakov , f. farooq and v. govindaraju .essentially , cancellable biometrics performs a distortion of the biometric image or features before matching .the variability in the distortion parameters provides the cancellable nature of the scheme .some of the proposed techniques operate using their own recognition engines , such as a. b. j. teoh , a. goh and d. c. l. ngo and m. savvides , b. v. k. v. kumar and p. k. khosla . whereas other methods , such as m. a. dabbah , w. l. woo and s. s. 
dlay take the advantage of the advancement of the well - established biometric research for their recognition front - end to conduct recognition .although this increases the restrictions on the protection system , it makes the cancellable templates more accessible for available biometric technologies .the result of applying an edge detector to an image may lead to a set of connected curves that indicate the boundaries of objects , the boundaries of surface markings as well as curves that correspond to discontinuities in surface orientation .thus , applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may therefore filter out information that may be regarded as less relevant , while preserving the important structural properties of an image .the edge detection filters used for experimentation are based on discrete laplace operator , sobel operator , roberts cross operator , frei - chen operator and prewitt operator .biometric cryptosystems as discussed by y. dodis , r. ostrovsky , l. reyzin and a. smith , f. hao , r. anderson and j. daugman , k. nandakumar , a.k .jain and s. pankanti , y. sutcu , q. li and n. memon , use techniques that associate an external key with a user s biometric to obtain helper data .the helper data should not reveal any significant information about the template or the key and at the same time it can be used to recover the key when the original biometric is presented .the concept of data hiding in digital watermarks has been discussed by c. t. hsu and j. l. wu .encryption algorithm to secure the image using fingerprint and password has been discussed by manvjeet kaur , dr . sanjeev sofat and deepak saraswat involves more time consuming methods .a hash function is an algorithm that transforms ( hashes ) an arbitrary set of data elements , such as a text file , into a single fixed length value ( the hash ) .the computed hash value may then be used to verify the integrity of copies of the original data without providing any means to derive said original data .this irreversibility means that a hash value may be freely distributed or stored , as it is used for comparative purposes only .sha stands for secure hash algorithm .sha-2 includes a significant number of changes from its predecessor , sha-1 .sha-2 consists of a set of four hash functions with digests that are 224 , 256 , 384 or 512 bits .we have used sha-256 for our experimentation .+ the security provided by a hashing algorithm is entirely dependent upon its ability to produce a unique value for any specific set of data .when a hash function produces the same hash value for two different sets of data then a collision is said to occur .collision raises the possibility that an attacker may be able to computationally craft sets of data which provide access to information secured by the hashed values of pass codes or to alter computer data files in a fashion that would not change the resulting hash value and would thereby escape detection .a strong hash function is one that is resistant to such computational attacks .a weak hash function is one where a computational approach to producing collisions is believed to be possible .a broken hash function is one where a computational method for producing collisions is known to exist . 
in 2005, security flaws were identified in sha-1 , namely that a mathematical weakness might exist , indicating that a stronger hash function would be desirable .although sha-2 bears some similarity to the sha-1 algorithm , these attacks have not been successfully extended to sha-2 .the advanced encryption standard ( aes ) is a specification for the encryption of electronic data established by the u.s .national institute of standards and technology ( nist ) in 2001 .the algorithm described by aes is a symmetric - key algorithm , meaning the same key is used for both encrypting and decrypting the data .aes is based on a design principle known as a substitution - permutation network , and is fast in both software and hardware .we have used aes to generate key size of 256 bits ( aes-256 ) .high speed and low ram requirements were criteria of the aes selection process .thus aes performs well on a wide variety of hardware , from 8-bit smart cards to high - performance computers ._ step 1 : _ input face image from dataset ( att , yale or ifd ) which is in greyscale ( or converted ) .the att dataset of faces ( formerly the orl database of faces ) , yale dataset and ifd ( indian face dataset ) are unmodified except for conversion to jpeg and renaming of the files .datasets used are att , ifd and yale with sample images shown in figure 3 , figure 4 and figure 5 respectively ._ step 2 : _ apply edge detection filter .the edge detection filters used for experimentation are based on discrete laplace operator , sobel operator , roberts cross operator , frei - chen operator and prewitt operator ._ step 3 : _ invert colors of the image and then auto normalize .this is done because a major drawback to application of the edge detection filters is an inherent reduction in overall image contrast produced by the operation , which is in turn used to become an advantage in our case since it provides obscuring the original image to an acceptable level .normalize stretches the histogram , so the whole range of colors is used as to get more information out of the image .hence by inverting the filtered image and auto normalizing we get the contrast to an acceptable level ._ step 4 : _calculate sha-256 hash value of the final cancellable face image and encrypt it with aes-256 bit cipher .the aes-256 bit cipher is a symmetric key algorithm which uses the same password for encrypting and decrypting ._ step 5 : _ obtain the final filtered face image and store in the corresponding dataset .dataset att - l is the set obtained after applying step 2 with laplace edge detector filter and step 3 on att dataset ( sample shown in figure 6 .dataset att - s is the set obtained after applying step 2 with sobel edge detector filter and step 3 on att dataset ( sample shown in figure 7 ) .dataset att - r is the set obtained after applying step 2 with roberts edge detector filter and step 3 on att dataset ( sample shown in figure 8) .dataset att - f is the set obtained after applying step 2 with frei - chen edge detector filter and step 3 on att dataset ( sample shown in figure 9 ) .dataset att - p is the set obtained after applying step 2 with prewitt edge detector filter and step 3 on att dataset ( sample shown in figure 10).similarly for yale and ifd datasets ._ steps 1 to 3 : _ same as in enrolment ._ step 4 : _ apply face recognition methods to identify the person set . 
in our experimentthe face recognition methods use the following : pca ( principal component analysis ) , ipca ( incremental pca ) , lda ( linear discriminant analysis ) and ica ( independent component analysis ) ._ step 5 : _ for the set of images of the matched person , verify the sha-256 hash values after decrypting with aes-256 cipher .this step ensures that the stored biometric templates have not been tampered with .the proposed method starts with a non - invertible feature transformation by using edge detection filters and is combined with a key binding biometric crypto system .the sha-256 hash value , which is aes-256 bit encrypted , helps in binding the cancellable template with an encrypted key .the aes-256 bit cipher is a symmetric algorithm and hence uses the same password for encryption and decryption . herethe password that is used to bind the values can be user driven or be at the system level , depending on the feasibility of the biometric system .our proposed method focuses on the roberts cross based edge detector due to its consistently highest matching accuracy across different datasets [ table 1 , table 2 , and table 3 ] .according to roberts , an edge detector should have the following properties : the produced edges should be well - defined , the background should contribute as little noise as possible , and the intensity of edges should correspond as close as possible to what a human would perceive .after applying the edge filter(s ) , the image colors are inverted since edge filters discard other information than the detected edges ( first image of figure 11 ) .the image is then auto normalized edges ( second image of figure 11 ) to the full dynamic range , to further enhance the remaining details . in image processing, normalization is a process that changes the range of pixel intensity values .applications include photographs with poor contrast due to glare , for example .normalization is sometimes called contrast stretching . in more general fields of data processing , such as digital signal processing ,it is referred to as dynamic range expansion .the purpose of dynamic range expansion in the various applications is usually to bring the image , or other type of signal , into a range that is more familiar or normal to the senses , hence the term normalization . 
often , the motivation is to achieve consistency in dynamic range for a set of data , signals , or images to avoid mental distraction or fatigue .for example , a newspaper will strive to make all of the images in an issue share a similar range of greyscale .normalization is a linear process .if the intensity range of the image is 50 to 180 and the desired range is 0 to 255 the process entails subtracting 50 from each of pixel intensity , making the range 0 to 130 .then for each pixel the intensity is multiplied by 255/130 , making the range 0 to 255 .auto - normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format .we have found through our experiment that when the face images are auto normalized after applying edge filter and inverting colors , matching accuracy increases .[ 0cm][0cm]classifier & + & att & att - l & att - s & att - r & att - f & att - p + ica & 91.3 & 83.1 & 86.9 & 86.9 & 87.5 & 88.8 + ipca & 93.1 & 87.5 & 89.4 & 88.1 & 88.1 & 90.0 + lda & * 94.4 * & * 91.3 * & * 93.1 * & * 93.1 * & * 92.5 * & * 91.9 * + pda & 91.3 & 83.1 & 88.1 & 87.5 & 88.8 & 88.1 + the att dataset comprises of face frontal images with low resolution ( 92x112 pixels ) .the images have dark background ( figure 3 ) with most of it not present in the images by comparison to other datasets . from table 1 , lda based face recognition method is having the best matching accuracy .the proposed method with roberts cross filter and sobel filter are showing the least variation ( 1.3% ) w.r.t . matching accuracy .[ 0cm][0cm]classifier & + & yale & yale - l & yale - s & yale - r & yale - f & yale - p + ica & 83.3 & 85.0 & 81.7 & 78.3 & 86.7 & 81.7 + ipca & 71.7 & 68.3 & 75.0 & 73.3 & 78.3 & 75.0 + lda & * 91.7 * & * 86.7 * & * 88.3 * & * 90.0 * & * 90.0 * & * 90.0 * + pca & 81.7 & 86.7 & 83.3 & 80.0 & 85.0 & 86.7 + the yale dataset comprises of face frontal images with medium resolution ( 320x243 pixels ) . the face images ( figure 4 )have mostly a bright background and a few with shadows .those shadows are subsequently removed due to edge detection filtering . from table 2, lda based face recognition method is having the best matching accuracy .the proposed method with roberts cross filter , frei - chen filter and prewitt filter are showing the least variation ( 1.7% ) w.r.t . matching accuracy .[ 0cm][0cm]classifier & + & ifd &ifd - l & ifd - s & ifd - r & ifd - f & ifd - p + ica & 76.7 & 76.3 & 75.4 & 78.0 & 75.0 & 75.0 + ipca & 76.7 & 86.0 & 86.9 & 86.0 & 86.0 & 86.4 + lda & * 91.1 * & * 89.8 * & * 89.8 * & * 89.8 * & * 89.4 * & * 89.0 * + pca & 76.3 & 72.0 & 76.3 & 75.8 & 76.7 & 77.1 + the ifd dataset comprises of face frontal and some side poses images with high resolution ( 640x480 pixels ) .the face images ( figure 5 ) have mostly a dull background . from table 3, lda based face recognition method is having the best matching accuracy .the proposed method with roberts cross filter , laplace filter and sobel filter are showing the least variation ( 1.3% ) w.r.t . matching accuracy .the facial images across all the datasets used here are taken under controlled conditions and are less susceptible to noise .after applying the edge filter and inverting colors , we have further enhanced the image by auto normalization . 
the face recognition method which uses lda combined with roberts cross filter in our proposed schemeshows the highest matching accuracy consistently across the wide range of facial images of different types of datasets used .there is insignificant changes w.r.t . matching accuracy ( varying from 1.3% to 1.7% ) across the datasets , with and without our proposed method .template security emphasizes on obscuring the template images , the slight reduction in accuracy is definitely acceptable .robert s filter is mathematically the simplest of all the compared edge detection methods .hence , the proposed scheme has low impact on the speed of execution , hence can be incorporated into existing systems without too much overhead .the proposed scheme can be incorporated at the time of enrolment and verification itself .the filtered images can thus be stored instead of the unaltered face images , thereby providing a form of encryption .since the subsequent images taken for enrolment are binary different even when it is with same camera and lighting conditions , the obscured template will be different .this in turn provides a non - invertible template , where in case the dataset of the filtered images ( used for matching ) is compromised , it can be revoked a new set generated without worrying about misuse of the lost data .also , by varying the convolution kernel values of the robert s filter gradient , more cancellable templates can be generated for a particular face image , as discussed for difference of gaussian edge filter by g. hemantha kumar and manoj krishnaswamy . [cols="<,<",options="header " , ] by storing sha-256 hash of the stored biometric template and encrypting with aes-256 algorithm ( table 4 ) we have provided a strong measure against biometric template tampering .sha-256 hashing and aes-256 cipher can be performed computationally fast ( less than a second ) and hence can be easily incorporated into existing systems . 
although we have assumed that the attacker will not be able to easily gain access on the various levels to compromise the entire system , even in case the entire system was being compromised ,the cancellable templates can be re - issued which provides new hash values automatically .due to the non - invertible nature of the templates there is no worry of misuse of lost data .other schemes involve calculating the helper data ( in our case the ciphered hash value ) for each set of biometric templates which becomes time consuming during verification .the time taken to decrypt only once enhances the speed of execution and can be incorporated in systems which require speed as well as security .useful scenarios for the proposed method could be in real time systems , banking , atm access , etc .we have shown that the final filtered images itself can be used for face matching instead of unaltered face images .the results are checked across datasets which encompasses a wide variety of images taken under different conditions as well as different resolutions and image quality .we proposed a novel method for generating cancellable face biometrics and to secure the stored templates in a way which is suitable for integration with current face matching systems with acceptable alterations .+ also , by using fast , proven and standard hashing ( sha-256 ) and cryptographic ( aes-256 ) methods for data verification , the vault is further enhanced .we discussed their strengths and shortcomings , as well as their relative performance on different databases under a variety of conditions .the approach allows for enhanced template security , privacy and maintaining good ethics in biometric systems .it is important that biometrics based authentication systems are designed to withstand different sources of attacks on the system . , enhancing security and privacy in biometrics - based authentication systems , " _ ibm systems journal _ , vol .614 - 634 , 2001 . , symmetric hash functions for fingerprint minutiae , " _ proc .intl workshop pattern recognition for crime prevention , security , and surveillance _ ,30 - 38 , 2005 . , random multispace quantization as an analytic mechanism for biohashing of biometric and random identity inputs , " _ pattern analysis and machine intelligence , ieee transactions on _ , vol .28 , pp . 1892 - 1901 , 2006 . , robust shift invariant pca based correlation filter for illumination tolerant face recognition , " presented at _ieee computer society conference on computer vision and pattern recognition ( cvpr04 ) _ , 2004 . , secure authentication for face recognition , " presented at _ computational intelligence in image and signal processing _ ,ciisp 2007 , ieee symposium on 2007 . , neighborhood coding of binary images for fast contour following and general array binary processing , " _ compute , graphics image process _ , pp .127 - 135 , 1987 . , machine perception of threedimensional solids , in optical and electrooptical information processing , " _ mit press , cambridge , ma _ , 1965 ., fast boundary detection : a generalization and a new algorithm . " _ leee trans .c-26 , no .988 - 998 , 1977 . , object enhancement and extraction , in : b.s .lipkin , a. rosenfeld ( eds . ) , " _ picture analysis and psychopictorics , academic press , new york _ , 1970 . , fuzzy extractors : how to generate strong keys from biometrics and other noisy data , " _ tech . 
rep .235 , cryptology eprint archive _ , 2006 ., combining crypto with biometrics effectively , " _ieee trans .55 ( 9 ) _ , 1081 - 1088 , 2006 . , fingerprint - based fuzzy vault : implementation and performance , " _ ieee trans .forensics security 2 ( 4 ) _ , 744 - 757 , 2007 ., protecting biometric templates with sketch : theory and practice , " _ ieee trans .forensics security 2 ( 3 ) _ , 503 - 512 , 2007 . , hidden digital watermarks in images , " _ ieee trans . on image processing _ , vol .58 - 68 , jan . 1999 . , template and database security in biometrics systems : a challenging task , " published in _ international journal of computer applications _ , ( 0975 - 8887 ) , volume 4 - no.5 , july 2010 .cancellable face biometrics using image blurring , " _ international journal of machine intelligence _ , vol .3 issue 4/5 , p272 , 2011 . , the database of faces , " _ http://www.cl.cam.ac.uk/research/dtg/ + attarchive / facedatabase.html _ , _ http://cvc.yale.edu/projects/ + yalefaces / yalefaces.html _ , the indian face database , `` _ http://vis-www.cs.umass.edu//indianfacedatabase/ _ , '' _ federal information processing standards publication 197 , united states national institute of standards and technology ( nist ) _ ,november 26 , 2001 , retrieved october 2 , 2012 .he was awarded ph.d . in computer science from university of mysore .he has over 200 publications in all leading international and national journals as well as conferences.his current research interest includes numerical techniques , digital image processing , pattern recognition and multimodal biometrics .+ + is a research scholar , department of studies in computer science , university of mysore , mysore , india .his qualifications include b.e .in compsci from r.v.c.e ( b.u . ) and m.tech . in compsci from m.v.j.c.e ( v.t.u . ) + +
|
in this paper we address the use of edge detection techniques on facial images to produce cancellable biometric templates , and a novel method for verifying templates against tampering . with the increasing use of biometrics , there is a real threat to conventional systems using face databases , which store images of users in raw and unaltered form . if such a database is compromised , not only is it irrevocable , but it can be misused for cross - matching across different databases . it is therefore desirable to generate and store revocable templates for the same user in different applications , to prevent cross - matching and to enhance security while maintaining privacy and ethics . by comparing different edge detection methods , it has been observed that edge detection based on the roberts cross operator performs consistently well across multiple face datasets in which the face images have been taken under a variety of conditions . we also propose a novel scheme using hashing , for extra verification , in order to harden the security of the stored biometric templates . + + * keywords :* cancellable biometrics , edge detection , face biometrics , template security .
|
monte carlo techniques for geophysical inversion were first used about forty years ago , , , . sincethen there has been considerable advances in both computer technology and mathematical methodology , and therefore an increasing interest in those methods .some examples can be found in , , , and .there is a class of problems where the number of unknowns is one of the unknowns " . for these problems ,a number of frameworks have been developed since the mid-1990s to extend the fixed - dimension markov chain monte carlo ( mcmc ) to encompass trans - dimensional stochastic simulation . among these trans - dimensional schemes ,the reversible jump markov chain sampling algorithm proposed by is certainly the most well understood and well developed .a survey of the state of the art on trans - dimensional markov chain monte carlo can be found in .trans - dimensional mcmc has been successfully applied to geophysical models , see and . proposed a rj - mcmc algorithm to detect the shape of a geophysical object underneath the earth surface from gravity anomaly data , assuming a two - dimensional polygonal model for the object .although the idea of can in principle be extended to three - dimensional cases with polygons replaced by polyhedrons , in practice much numerical difficulties could be encountered .what is more , an arbitrary three - dimensional real object can not always be presented by a simple polyhedron .another limitation of the development in is that it is not trivial to extend the model to multiple objects .the present paper is the first attempt to invert a three - dimensional magnetic dipole field using rj - mcmc .consider an arbitrary magnetic dipole with magnitude and unit vector , located at , the magnetic field at an arbitrary point is given by where , is the magnetic permeability of free space .consider dipoles , each denoted as , , and located at and with a strength and a direction unit vector .assume measurement locations at , .let .then the magnetic field at due to dipole is given by , and the total magnetic field at measurement point induced by all the dipoles is given by the observed magnetic field at is .assuming an independent gaussian noise with standard deviation in each of the measured components , the likelihood function is then where denotes the model , with each dipole having six parameters representing its direction , strength and location .the first two parameters are the spherical polar coordinates of the unit vector for the dipole , i.e. , , **. * * in a geophysical context , the object creating the magnetic anomaly could be represented by a collection of dipoles with the same orientation and the same strength . in such a casethe parameter vector for a collection of dipoles is , i.e. there are only parameters for the model of dipoles .we now describe a reversible jump mcmc algorithm for the dipole model .first , we describe the within - model moves where the number of dipoles is fixed at , i.e. there are no birth nor death moves .the metropolis - hastings algorithm was first described by as a generalization of the metropolis algorithm , .denote the state vector for the model of dipoles as at step the state vector and we wish to update it to a new state .we generate a candidate from candidate generating density , we then accept this point as the new state of the chain with probability given by where is the likelihood given by ( 3 ) , is the prior density . 
if the proposal is accepted , we let the new state , otherwise .it is often more efficient to partition the state variable into components and update these components one by one .this was the framework for mcmc originally proposed by metropolis et al .( 1953 ) , and it is used in this work . for each component , we take the normal density as the proposal density , where is the normal density with zero mean and standard deviation .a sensible choice for the values is to let for the two common polar coordinates , for the common magnetic strength , and for all the position coordinates .the reversible jump markov chain monte carlo proposed by provides a framework for constructing reversible markov chain samplers that jump between parameter spaces of different dimensions , thus permitting exploration of joint parameter and model probability space via a single markov chain .as shown by , detailed balance is satisfied if the proposed move from to is accepted with probability , with given by where is the probability that a proposed jump from to is attempted , is a proposal density , and is the jacobian of the deterministic mapping .efficiency of rj - mcmc depends on the choice of mapping function and the proposal density .typically , a birth - move is from to , i.e. in the above description we have and . in order to have a reasonable acceptance rate for the birth move, we try to keep the change in the likelihood function from to to be small , i.e. the birth - move is designed in such a way that . to achieve this small perturbation in likelihood function , instead of randomly adding a new dipole to the system, we replace one of the existing dipoles with two new dipoles .ideally , the combined magnetic field produced by the two new dipoles should be very close to the magnetic field of the replaced dipole , at every measurement point , .it is analytically difficult to ensure this closeness of magnetic field at every measurement point .we can simplify the problem by ensure that the magnetic field produced by the new pair of dipoles is close to that of the old dipole at one key measurement point , .typically the measurement points can be arranged in a horizontal ( rectangular lattice , , as shown in figure 1 , and can be chosen to be located at the centre of the lattice . in figure 1 ,the key measurement point is marked as a. assuming the randomly chosen dipole is located at point b with coordinate vector , we wish to find two locations near b such that the new pair of dipoles located at these two points will produce a combined magnetic field close to that of the old dipole .let the two new locations be e and d for the new dipoles and , as shown in figure 1. denote the vector , where is the distance between a and b and is the unit vector from b to a ( from dipole to measurement point ). now extend to such that {2}\times r_{b , a } { \rm { \bf \hat r}}_{b , a} ] times that of the length of .now we put two dipoles at the same location c. we now can easily show that a pair of dipoles co - located at c produce a combined magnetic field ( all 3 components ) at measurement point a identical to that of dipole , given that all dipoles have the same strength and unit vector . 
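before giving the short analytical argument , this two - for - one construction can be checked numerically . the sketch below implements the point dipole field of equation ( 1 ) and verifies that two dipoles co - located at c , taken on the ray from the key measurement point a through b at a distance 2^{1/3} times the distance from a to b , reproduce at a the field of the single dipole at b . the coordinates , the dipole moment and the units are arbitrary assumed values used only for this check .

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # magnetic permeability of free space

def dipole_field(r_obs, r_dip, m_vec):
    """Point-dipole field (equation 1) at r_obs due to a dipole of moment m_vec located at r_dip."""
    d = np.asarray(r_obs, float) - np.asarray(r_dip, float)
    r = np.linalg.norm(d)
    rhat = d / r
    return MU0 / (4.0 * np.pi) * (3.0 * np.dot(m_vec, rhat) * rhat - m_vec) / r**3

# assumed example geometry: key measurement point A on the surface, dipole B at depth
A = np.array([0.0, 0.0, 0.0])
B = np.array([3.0, -2.0, -10.0])
m = 2.5 * np.array([0.2, 0.5, np.sqrt(1.0 - 0.2**2 - 0.5**2)])   # strength times a unit direction vector

# place C on the ray from A through B, with |AC| = 2**(1/3) * |AB|
C = A + 2.0 ** (1.0 / 3.0) * (B - A)

single = dipole_field(A, B, m)
pair = 2.0 * dipole_field(A, C, m)     # two identical dipoles co-located at C
print(single, pair, np.allclose(single, pair))
```

the two co - located dipoles are subsequently separated by the small random moves described in the birth - move steps below .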
applying field equation ( 3 ) to dipole and measurement location ,we have similarly , applying field equation ( 3 ) to dipole and measurement location a , we have because {2}\times r_{b , a} ] ; 3 .put two dipoles at location c ; 4 .generate three independent random variables , , from normal distribution ; 5 .move one of the two dipoles from c to location e , such that .this new dipole is denoted as located at ( point e in figure 1 ) ; 6 . move the other dipole from c to location d , such that .this new dipole is denoted as located at ( point d in figure 1 ) .the random vector corresponding to the birth - move from to is identified as with a single parameter , the standard deviation for the random walk of a dipole .thus to find the jacobian of the deterministic mapping , we first find the mapping function where {2} ] .3 . put one dipole at location b , where is the coordinate of point b. in the above one - for - two death - move , the only random number is from uniform .the probability of making the specific death - move is , where is the probability of attempting a general death - move .the mapping function is the inverse of the mapping function . combining birth - move and death - moveas described above , we obtain the following expressions for acceptance rates : * _ birth - move acceptance rate _ * * _ death - move acceptance rate _ * consider three cases : 1 - a bulky formation ; 2 - a thin plate ; 3 - two objects . in each casewe start with a single dipole , located at an arbitrary depth below the measured magnetic field , and with an arbitrary orientation and a fixed strength . * case 1 . * in this case the dipoles form a regular cube .figure 2 shows a sample after 50000 simulations . in figure 2 ,the red balls represent the true model , and green balls are the best prediction .the horizontal blue lattice indicates measurement points , and the lines originating from these points are the magnetic vectors with green corresponding to the green dipoles ( the predicted dipoles ) and red corresponding to the red diploes ( the true model ) . as can be seen , on the whole, the inversed dipoles reasonably assemble the true model , with a few dipoles drifting to the deeper depth .the predicted magnetic vector field matches that of the true model very well - the green vectors and red vectors appear to be the same everywhere on the measurement lattice .* case 2 . * in this case , the dipoles form a horizontal thin sheet , as shown by the blue balls in figure 3 . as seen in figure 3 ,the resulting dipoles are too much scattered vertically .nevertheless , the horizontal scattering of the dipoles resemble the true model , and the resulting magnetic field vector matches the true vector field very well .* case 3 . * in this case the dipoles form two separated identical cubes at a significantly different depths and horizontal locations .it can be seen there are still too many dipoles scattered in between the two objects , and already the fit between the predicted and measured magnetic vector fields are very good .the three test cases show that the present method is promising but some challenges remain . for a single cube - like object, the inverse is not too bad in terms of representing the overall shape of the object , although there seem to be always some dipoles scattered below the object . if the single object is a bit extreme , such as a horizontal thin sheet , the inverse can not predict the depth resolution it is too scattered vertically . 
for a more challenging problem of two objects located at different depth, the inverse tried hard to locate both , but with too many dipoles scattered in between .in all the above cases the forward problem was well resolved i.e. the predicted magnetic field agrees very well with measured field .this is typical in geophysical inversion non - uniqueness or ill - conditioning is demonstrated in terms of large uncertainties in the prediction of depth .we have proposed a reversible jump markov chain monte carlo ( rj - mcmc ) algorithm for both the magnetic vector and its gradient tensor to deal with this trans - dimensional inverse problem where the number of unknowns is one the unknowns .a special birth - death move strategy is designed to obtain a reasonable rate of acceptance for the rj - mcmc sampling .some preliminary results show the strength and challenges of the algorithm in inversing the magnetic measurement data .although it is very difficult , if not impossible , to predict each individual dipole accurately , it is important to predict the cloud of dipoles accurately ( e.g. uniformly distributed with the edges close to the true boundary ). a different likelihood function or prior ( in addition to new method ) may be of help in this regard .as always , it is difficult to predict the depth of the object with reasonable certainty .better ways are needed to locate multiple objects - with a clean break between two distinct objects , especially when they are located at very different depths .this project was funded by the capability development fund of csiro earth science and resource engineering .
|
we consider a three-dimensional magnetic field produced by an arbitrary collection of dipoles. assuming the magnetic vector or its gradient tensor field is measured above the earth's surface, the inverse problem is to use the measurement data to find the location, strength, orientation and distribution of the dipoles underneath the surface. we propose a reversible jump markov chain monte carlo (rj-mcmc) algorithm for both the magnetic vector and its gradient tensor to deal with this trans-dimensional inverse problem, in which the number of unknowns is one of the unknowns. a special birth-death move strategy is designed to obtain a reasonable rate of acceptance for the rj-mcmc sampling. typically, a birth-move generates an extra dipole in the field. in order to have a reasonable acceptance rate for the birth move, we try to keep the change in the likelihood function due to the extra dipole small. to achieve this small perturbation in the likelihood function, instead of randomly adding a new dipole to the system, we replace one of the existing dipoles with two new dipoles. ideally, the combined magnetic field produced by the two new dipoles should be very close to the magnetic field of the replaced dipole at every measurement point. it is analytically difficult to ensure this closeness of the magnetic field at every measurement point. we simplify the problem by ensuring that the magnetic field produced by the new pair of dipoles is close to that of the old dipole at one key measurement point, for example at the centre of the measurement range. typically the measurement points are arranged in a horizontal rectangular lattice, and the key point can be chosen at the centre of the lattice. we show that for any randomly chosen dipole to be removed, we can place two dipoles of the same strength at a special location such that the magnetic field at the key point remains exactly the same after this two-for-one replacement of the birth move. the two new dipoles are then separated by random moves similar to those of a within-model move. the death move is simply the reverse of the birth move. some preliminary results show the strengths and challenges of the algorithm in inverting the magnetic measurement data through dipoles. starting with an arbitrary single dipole, the algorithm automatically produces a cloud of dipoles that reproduces the observed magnetic field, and the true dipole distribution for a bulky object is better predicted than for a thin object. multiple objects located at different depths remain a very challenging inverse problem. magnetic dipoles, markov chain monte carlo, reversible jump, trans-dimensional, inverse problem
|
as the article of w. e. lamb of which it uses the name , this article warns the non - specialists against the use of the word `` photon '' . for this purpose : - it is initially shown that a ( semi-)classical electrodynamics founded on rigorous mathematics , for example by using the mathematical , general definition of the modes of a linear field , interprets the experiments always correctly . - while w. e. lamb et al .are mainly interested in laboratory experiments , writing , for instance _ the free - electron laser is an excellent example of how the photon - picture can obscure the physics _ we develop the example of astrophysics in which the erroneous use of the photon , regarded as a real particle , invalids efficient theories for the benefit of unfounded , too wonderful theories .the usual description of classical electrodyamics fills often books , mixing strange concepts as `` radiation reaction '' with poorly defined concepts as `` optical modes '' .this lack of rigour has fed critics of the semi - classical theory : considering wrongly the field of density of electromagnetic energy as a linear field , the classical theory fails to explain experiments using photon counting , so that quantum electrodynamics seems better . on the contrary ,quantum electrodynamics is unable to interpret the too fast start up of a laser . in the whole paper, we apply strict mathematics to few , well defined concepts . subsection [ linmax ] shows that the linearity of maxwell s equations in the vacuum may be extended in matter .subsection [ modes ] uses this linearity to define precisely the vector space of the solutions of maxwell s equations , to introduce its metric from energy , obtaining an infinity of sets of orthogonal modes among which `` normal modes '' are chosen .subsection [ residuel ] reminds that the small , relatively distant electric charges introduced by atomic theory are unable to absorb completely the fields emitted by other similar charges , so that we live in a residual field ( that the charlatans claim to detect ) .subsection [ planck1911 ] reminds that planck proposed to correct his initial formula to obtain the absolute electromagnetic field from which is defined the correct field of electromagnetic energy .subsection [ einstein ] uses einstein s theory of light - matter interactions without any other hypothesis , in particular to explain the too fast start up of the lasers .subsection [ param ] studies a parametric light - matter interaction in which the coherence is kept by the use of incoherent light rather than by the short pulses used by g. l. lamb .a correct use of the photon concept is so difficult that , joking , w. e. lamb suggested `` that a licence be required for use of the word photon '' .section 3 describes errors commonly done using the photon , in particular their disastrous consequences in astrophysics .subsection [ monte ] shows how the standard use of monte - carlo computations applied to the photons propagating in a resonant gas , in place of einstein theory , destroys beautiful explanations of the generation of dotted rings .subsection [ multip ] introduces a multiphotonic parametric scattering which explains the observed weakness of very hot stars .subsection [ spont ] shows how the rejection of parametric interactions of light with matter breaks the scale of astronomical distances and founds the big - bang theory .the needed tools are : the principles of thermodynamics . 
a set of equations named `` maxwell s equations in the vacuum '' which are linear equations describing the propagation of the electromagnetic field in an unlimited space - time , without matter .the equation setting the density of electromagnetic energy as a quadratic function of the field . the equations giving the field emitted by an accelerated electric charge or a small set of charges ( multipole ) .einstein showed that the emissions of the field result from an amplification of an initial , exciting field . the atomic theory which sets that the electric charges ( electrons , nucleus ,... ) are small at the scale of their distances .planck s law corrected by its author in 1911 and approved by einstein and stern .the equations of the electromagnetic field in vacuum , united under the name of maxwell s equations are linear , so that any linear combination of electromagnetic fields is an electromagnetic field .matter is usually introduced as a continuous medium , through the approximation of permeability and permittivity , fields often depending non - linearly on the em field .we need to keep the linearity in matter .we can calculate the delayed electromagnetic field radiated by an accelerated charge . by time reversal ,the delayed field becomes an advanced field . by subtraction of the second system from the first ,the charges are removed , while the field remains unchanged in the future .thus , fokker and schwarzshild preserve the linearity of maxwell s equations by replacing the charges by advanced fields , which modify only the boundary conditions of the equations .the solutions of a system of linear equations are represented by points of a vector space .we set that the norm of a solution of maxwell s equations is its electromagnetic energy _ calculated assuming no other electromagnetic field_. a mode is a set of solutions that differ only by a multiplicative real constant , it is represented by a radius of . scalar products and orthogonality of solutions and modesare deduced from the norms .as maxwell s equations are defined in the whole space - time , is an infinite dimensions space .a set of orthogonal modes may be qualified `` normal '' .its choice is arbitrary though it may be justified in small , well defined systems : in his paper , w. e. lamb showed on examples of laboratory experiments that finding a low dimension subspace orthogonal to the remainder of is not easy when the use the powerful tools of su2 algebra introduced by quantum electrodynamics seems useful .lasers provide almost `` monomode light rays '' .they should be perfectly monochromatic beams resulting from the limitation of a plane wave by a hole and diffraction .an astronomer is interested in the beams of light defined by the entrance pupil of a telescope and the figure of diffraction of a far point ( star ) .these beams are not monomode because the usual light sources are time - incoherent , emitting pulses whose spectral widths corresponds to pulses long of a few meters .such a pulse propagating in a single - mode beam defines an optical mode which may be qualified normal .but , the modes observed by two close telescopes observing the same star are not orthogonal : their phases and their fluctuations ( less affected by the atmosphere ) are correlated .the atomic theory shows that the sizes of the electric charges ( electrons , nucleus ... ) is much smaller than their distances . 
to absorb the field emitted by a charge , it is necessary to generate an opposite field using other charges .the amplitude of the electromagnetic field emitted by a charge is much larger close to it than the field emitted by other charges in similar conditions of radiation : thus , a large number of other charges is needed , it seems that it remains a `` residual field '' . as the problem is very complex ,the residual field can not be computed directly .although we use sometimes the qualifier `` stochastic '' for the residual field , this qualifier is bad close to sources of light which increase the mean value of the residual field .it is this increase which allows some atoms to jump over a pass to a new state . in response to numerous criticisms , such as non equivalence of energy of a mode to for a large temperature , planck amended his law in 1911 , obtaining the absolute spectral radiance inside a black body : the formula remains valid if a beam escapes to a transparent medium through a small hole of the blackbody .thus , the `` planck s temperature '' of a light beam may be defined without black body , from its spectral radiance and its frequency . in a blackbody at the absolute zero temperature, the radiance has a mean value which is identified with the residual radiance therefore named also `` zero point radiance '' .a stranger name of this field is `` quantum field '' while it was discovered and evaluated by planck before the birth of quantum theory .einstein showed by thermodynamics that the interaction of an homogeneous resonant medium with light may change the amplitude of the light but not the wave surfaces .this result extends to the spontaneous emission which appears as an amplification of the zero point field .thus , along a monomode light ray propagating in homogeneous enough media , the spectral radiance may change , increasing , in particular from , or decreasing , in particular , to this minimal value .if the radiance becomes much larger than , the light ray is called `` superradiant '' . in a small volume , a raywhose radiance is maximal pumps more energy than the others , so that the others remain weak . in a large volume ,only the strongest beams remain : it is the competition of the superradiant modes , observed in the lasers , where they show orthogonal modes .the study of the output of a starting laser tube seems to show that the amplification coefficient of the tube is twice larger for the spontaneous emission than for the established laser beam .the ( semi-)classical explanation is simple , it does not require any ad - hoc concept : when the laser works , an atom is excited by a field that it is able to absorb , that is by a `` spherical '' ( dipolar , quadrupolar ... ) field .the plane wave of the laser must be projected onto the spherical absorbable wave , and a diffracted wave .then , the amplified wave providing an exchange of energy , is spherical , it must be split into the plane wave and a diffracted wave which , considering various atoms , is incoherent , thus disappears ( huygens ) . at the start, the atom is excited by the zero point field in the `` spherical '' mode of the atom , so that there is a single loss of amplitude in the chain of fields .consider two nondegenerate levels of atoms whose populations are in the high state of a transition of frequency and in the low state .thermodynamics introduces a `` boltzman temperature '' verifying the equation ] is close to 1 . 
assuming that is small : \nonumber\\ a\approx a_0[\sin(\omega t)\cos(k'\epsilon\omega t)+ \sin(k'\epsilon\omega t)\cos(\omega t)]\nonumber\\a = a_0\sin[(\omega+k'\epsilon\omega)t.\label{eq5 } \end{aligned}\ ] ] is an infinitesimal term , but the hypothesis small requires that the raman period is large in comparison with the duration of the experiment .stokes contribution , obtained replacing by a negative , must be added .assuming that the gas is at equilibrium at temperature , is proportional to the difference of populations in raman levels , that is to - 1 \propto \omega / t$ ] . and obey a relation similar to relation [ indice ] , where raman polarisability which replaces the index of refraction is also proportional to the pressure of the gas and does not depend much on the frequency if the atoms are far from resonances ; thus , and are proportional to , and to .therefore , for a given medium , the frequency shift is : the relative frequency shift is nearly independent on .it must be integrated along a path of the light ray , setting d .hypothesis small requires that raman period is large in comparison with the duration of the light pulses ; to avoid large perturbations by collisions , the collision - time must be larger than this duration .this is a particular case of the condition of space coherence and constructive interference written by g. l. lamb : the length of the pulses must be shorter than all relevant time constants .refraction is a parametric interaction , an interaction in which the atoms return to their original state . to obtain a balance of energy with raman resonances , so that the interaction is parametric , at least two rays of light must be involved .the coldest rays receive energy lost by the hottest .the thermal background radiation provides cold isotropic rays .their irradiance is large .the path needed for a given ( observable ) red - shift is inversely proportional to . at a given temperature , assuming that the polarisability does not depend on the frequency , and that and may be chosen as large as allowed by lamb s condition , this path is inversely proportional to the cube of the length of the pulses : an observation , easy in a laboratory with femtosecond pulses , requires astronomical paths with the nanosecond pulses of ordinary incoherent light .the 1420 mhz spin recoupling resonance of hydrogen atoms is too high , but the frequencies 178 mhz in the 2s state , 59 mhz in 2p state , and 24 mhz in 2p are very convenient : the coherent raman effect on incoherent light ( creil ) is this parametric transfer of energy between beams propagating in _ excited _ atomic hydrogen .the photons result from a quantization of normal modes of the electromagnetic field .as these modes are chosen arbitrarily , a photon is not an absolute thing .it is a pseudo - particle which can not be exported from the system for which it was designed .but in astrophysics , it is considered as a particle , all wave optics disappears ... using einstein s theory , now usual in laser spectroscopy , has presented great difficulty for most physicists who claimed townes maser will not work . but astrophysicists continue to follow menzel s sentence : the so - called stimulated emissions which have here been neglected should be included where strict accuracy is required .* it is easily proved * , however , that they are unimportant in the nebulae . for menzel ,the photon is a small particle which interacts with a single atom , losing the phase of its pilot wave . 
to calculate the propagation of light in a resonant medium without coherence ,the astrophysicists apply the method of monte carlo to photons . for each atom ,the interaction is modeled by a statistical law and the conditions of interaction are drawn .this method is efficient in problems where the interaction of a particle with an atom is complex , so the phase of the pilot wave of the particle does not play a significant role .the monte carlo gives excellent results in the calculation of the interaction of neutrons with uranium atoms , or photons with the inhomogeneities of an opalescent medium like a cloud .but wave optics , particularly the theory of refraction shows that the phase of the these photons is only lost when density fluctuations produce rayleigh or raman incoherent sources of blue sky . in a dilute gas , the most frequent density fluctuations are binary collisions , whose number per unit volume is inversely proportional to the square of the density .contrary to the opinion of menzel , it is incoherent interactions that can not occur in the nebulae .one could argue that the atmosphere is a transparent medium .but the refraction of light in resonant homogeneous environments such as colored liquids or glasses does not show a significant scattering .strmgren has studied a model consisting of a vast cloud of very low pressure , initially cold hydrogen in which a star is extremely hot . in the vicinity of the star, hydrogen is fully ionized into protons and electrons so it is completely transparent if rare collisions are neglected . by increasing the distance to the star ,traces of atoms appear .these atoms radiate and cool the gas , accelerating exponentially the production of atoms .strmgren shows the formation of a relatively thin spherical shell that absorbs star s radiation and re - emits the atomic hydrogen lines into all directions .this shell is seen as a disk , particularly bright near the limb . in the years immediately following the explosion of supernova 1987a , a region in the shape of an hourglass and scattering fairly high light could be observed and measured by comparing travel times of direct and indirect light ( photons echo ) .later , a system of three discrete rings ( pearl necklace ) appeared .burrows and al . have verified that the geometry of these rings could be interpreted by a emission of the hourglass near its limbs .thus , they likened the hourglass to a strmgren shell distorted by variations in gas density .but the rings are very thin so that , without superradiance , they have not been able to keep this interpretation .strmgren did not take into account the possibility of a strong induced emission , i.e. a strong superradiance .let s break the strmgren s shell into infinitesimal shells centred at .set the distance of a beam of light from the star . for small, the angle of incidence of a beam on an infinitesimal shell is an increasing function of , so the beam path and its amplification in each shell are increasing functions of .if is larger than the radius of the outer shell , the amplification is zero . 
thus there is at least a maximum amplification .choose the smallest , setting the radius of the strmgren sphere .the most intense rays , tangent to this sphere and emitted into a given direction , are generators of a cylinder of revolution seen as a circle .as the increase of radiance of a ray is proportional to its initial radiance , the most intense rays , tangential to the sphere of radius , absorb at each point more energy than others whose radiance is lower .thus , there is a single maximum .this competition of modes works also on the cylinder defined by a given direction , so that the circle is seen dotted . by light diffraction , the dotted circle gets the appearance of tem(l , m ) modes of a laser for which has a nonzero value imposed for example by a circular screen .hydrogen emits several spectral lines , an observation in black and white blends multiple monochromatic systems .the dots ( modes ) are not independent because , for example , emitting a superradiant line depopulates the upper level of the transition , which favours emissions with this level as lower level .however , the complexity of the superradiant system does not hide the analogy of the central ring of supernova 1987a with the emission of some bad , multimode lasers .the absorption spectrum of atomic hydrogen obtained with a low radiance source shows only the lines of hydrogen , so that only a low fraction of the energy is absorbed . with the high radiance rays from the star , multiphoton interactions involving a few virtual levels allow for full absorption to resonant final states .although superradiant rays are far from achieving the radiance of the stellar emission , they depopulate the excited levels intensely , so that complete cycles of white light absorption and emission lines of hydrogen become virtual .they form a _multiphoton scattering induced by superradiant rays_. suppose that the induced scattering is efficient enough to reduce the temperature of the stellar rays to the order of magnitude of the temperature of the superradiant lines .a volume with the dimensions of the thickness of the strmgren s shell has in all tangential directions a luminance a bit lower than the remaining luminance of the star .if this volume is much larger than the volume of the star , it radiates more than the star which is no longer visible .suppose that light crosses a huge amount of cold gas outside the shell .a parametric effect may split the frequency of an hydrogen superradiant line into the resonant frequency of some atom or molecule and the frequency of an idler .the length of the path may allow the generation of weaker , sharp lines emitted collinear with the hydrogen lines .concluding , strmgren s and burrows models explain the geometry of three rings of supernova remnant 1987a from a previously observed scattering region in the shape of an hourglass .superradiance explains the sharpness of the rings and their discontinuities .parametric interactions explain the high brightness of the rings by a transfer of most of the energy radiated by the star into the rings .a similar ring was observed for the planetary nebula iphasxj194359.5 + 170901 .similar superradiances can probably replace gravitational effects which require proper alignment of massive stars ( einstein cross , etc ... ) . 
at a distance from the star slightly less than , the plasma contains few , very excited , hydrogen atoms that may slightly amplify a light ray propagating at distance from , in particular , its lyman alpha line .this low , spontaneous emission is a rapidly growing function of . at very low pressure ,the incoherent scatterings are negligible . on the contrary , the parametric, coherent interactions are not disturbed by collisions , they can be intense .in particular , the scattering of light by hydrogen atoms in the 2s or 2p states exchanges energy between present radiations , causing an increase of entropy and frequency shifts . by this creil effect ,the slightly amplified ( spontaneously emitted )rays absorb energy lost by the stellar radiation and lose energy absorbed by the continuum , so that their frequencies are shifted .what is the balance ?the irradiance of the continuum is high , so we can assume that the balance is negative for spontaneous emission .the intensity of the amplification is an increasing function of , while the redshift increases with the path to outside , therefore decreases with . for ,the emitted intensity is maximum and observed at the laboratory frequency . at this point , the intensity drops sharply to 0 at higher frequencies ( fig.1 , a ) .this fall , called lyman break is observed in the spectra of objects called far galaxies . for slightly above ,the density of excited atomic hydrogen , initially quite high to start superradiance falls fairly quickly by induced emissions .simultaneously , the induced scattering decreases the radial propagation speed of light energy well below the speed of light .thus , a large hot irradiance provides much energy to the spontaneously emitted rays , whose spectrum is shifted towards shorter wavelengths ( fig .the amplifications and frequency shifts depend on . in the observation ,the spectra add for various values of .in particular , the observed lyman break is not very sharp .michael et al . observed this lyman alpha spectrum inside the main ring of snr1987a ( fig 1,c ) .they could not interpret this spectrum by an expansion of the universe applied to a very distant star observed behind snr1987a because the solid angle of observation of the ring is relatively large .a doppler effect of gas emitters was not plausible because the speed should be too fast .it only remained the assumption of a redshift by propagation in hydrogen , but the monte - carlo computation , shows a high peak , cut by the recorder ( fig 1,d ) . the coherent , parametric , raman effect corrects simply their spectrum improving their attribution of the large frequency shifts to an interaction with hot hydrogen , implicitely , rather than to an expansion of the universe .indeed , the high redshifts interpreted by the expansion of the universe , coincide _ always _ with a presence of very hot hydrogen : this pseudo - expansion can be local , producing a distortion of the distance scale which inflates our charts of the galaxies and gives them their spongy aspect . 
the located shifts of frequency , depending slightly on the frequency , explain the spectra of stars like the quasars , the `` anomalous acceleration '' of the pioneer 10 and 11 probes , the spectra of the solar emission lines in the far ultra - violet .it may be a disaster for the foundations of the big bang theory : did a lot of people work on a theory founded on sand ?quantum electrodynamics is used to correct common errors in the use of classical electrodynamics : confusion of the field of density of electromagnetic energy with a linear field , use of a relative field to compute the density of energy , study of the absorption of a field by complex , approximate computations in place of the trivial addition of an opposite field ... but the introduction of the the pseudo - particle `` photon '' is the source of errors whose _ practical _ consequences are serious .quantum electrodynamics is founded on quantification of normal modes whose selection among an infinity of systems of orthogonal modes is arbitrary .thus the notion of photon should be used with so much caution that it seems desirable to reject it and , for instance , to interpret the propagation of light in a resonant medium from einstein s theory .the appearance of the rings was simultaneous with the disappearance of the star because an induced multiphotonic scattering transfers almost all radiation of the star to the rings .this explains the persistent radiation of the rings .michael et al . showed that the spectrum observed inside the rings must result from an interaction of light with the plasma of hydrogen , but their incoherent redshift process was too weak , so that they observed a strong peak at the resonance frequency .the coherent raman parametric effect involving the hyperfine frequencies of excited hydrogen atoms explains the blue - shifted peak of the spectrum , the fast decrease of its intensity at the short wavelength side , and the regular , slow decrease at the other side .supernova snr1987a is an example of the utility of optical coherence in the interpretation of the aspect and the spectra of many punctuated rings , surrounding a central star possibly masked , observed in the sky .but the most important consequence is the negligence of a parametric interaction between luminous rays , catalyzed by excited atomic hydrogen ; this interaction often leads to a redshift considered as due to an expansion of the universe .this pseudo - expansion may be local , w. e. lamb , jr . , anti - photon .applied phys b * 60 * ( 1995 ) 77 - 84 w. e. lamb , jr ., w. p. schleich , m. o. scully , c. h. townes , laser physics : quantum controversy in action .reviews of modern physics * 71 * ( 1999 ) s263-s273 .a. einstein , zur quantentheorie der strahlung .z. * 18 * ( 1917 ) 121 - 128 .planck m. , eine neue strahlungshypothese .* 13 * ( 1911 ) 138 - 175 .a. einstein & o. stern , einige argumente fr die annahme einer molekularen agitation beim absoluten nullpunkt .annalen der physik * 345 * ( 1913 ) 551 - 560 g. l. lamb jr . ,analytical description of ultra - short optical pulse propagation in a resonant medium , rev .* 43 * ( 1971 ) 99 - 124 .d. h. menzel the dilution of radiation in a nebula pasp * 43 * ( 1931 ) 70 - 74 .b. strmgren , the physical state of interstellar hydrogen , astrophys .j. , * 89 * ( 1939 ) 526 - 547 .b. e. k. sugerman , a. p. s. crotts , w. e. kunkel , s. r. heathcote , & s. s. lawrence , the three - dimensional circumstellar environment of sn 1987a .arxiv : 0502378 ( 2005 ) . c. j. burrows , j. krist , j. j. 
hester , r. sahai , j. t. trauger , k. r. stapelfeldt , j. s. gallagher iii , g. r. ballester , s. casertano , j. t. clarke , d. crisp , r.w .evans , r. e. griffiths , j. g. hoessel , j. a. holtzman , j. r. mould , p. a. scowen , a. m. watson , & j. a. westphal , hubble space telescope observations on the sn 1987a triple ring nebula apj , * 452 * ( 1995 ) 680 - 684 r.l.m .corradi , l. sabin , b. miszalski , p. rodriguez gil , m. santander - garcia , d. jones , j. drew , a. mampaso , m. barlow , m.m .rubio - diez , j. casares , k. viironen , d.j .frew , c. giammanco , r. greimel , s. sale , the necklace : equatorial and polar outflows from the binary central star of the new planetary nebula iphasxj194359.5 + 170901 arxiv:1009.1043 ( 1009 ) e. michael , r. mccray , r. chevalier , a. v. filippenko , p. lundqvist , p. challis , b. sugerman , s. lawrence , c. s. j. pun , p. garnavich , r. kirchner , a. crotts , c. fransson , w. li , n. panagia , m. phillips , b. schmidt , g. sonneborn , n. suntzeff , l. wang , & j. c. wheeler , hubble space telescope observations of high - velocity ly and h emission from supernova remnant 1987a : the structure and development of the reverse shock , astrophys . j. , * 593 * ( 2003 ) 809 - 830 . j. moret - bailly , `` propagation of light in low - pressure ionized and atomic hydrogen : application to astrophysics '' , ieeetps , * 31 * ( 2003 ) 1215 - 1223 . j. moret - bailly , anomalous frequency shifts in the solar system , ( 2005 ) arxiv:0507141 . j. moret - bailly , the parametric light - matter interactions in astrophysics , ( 2006 ) aip conference proceedings , * 822 * 228 - 236
|
quantum electrodynamics corrects miscalculations of classical electrodynamics , but by introducing the pseudo - particle `` photon '' it is the source of errors whose practical consequences are serious . thus w. e. lamb disadvises the use of the word `` photon '' in an article whose this text takes the title . the purpose of this paper is neither a compilation , nor a critique of lamb s paper : it adds arguments and applications to show that the use of this concept is dangerous while the semi - classical theory is always right provided that common errors are corrected : in particular , the classical field of electromagnetic energy is often , wrongly , considered as linear , so that bohr s electron falls on the nucleus and photon counting is false . using absolute energies and radiances avoids doing these errors . quantum electrodynamics quantizes `` normal modes '' chosen arbitrarily among the infinity of sets of orthogonal modes of the electromagnetic field . changing the choice of normal modes splits the photons which are pseudo - particles , not physical objects . considering the photons as small particles interacting without pilot waves with single atoms , astrophysicists use monte - carlo computations for the propagation of light in homogeneous media while it works only in opalescent media as clouds . thus , for instance , two theories abort while , they are validated using coherence and einstein theories , giving a good interpretation of the rings of supernova remnant 1987a , and the spectrum found inside . the high frequency shifts of this spectrum can only result from a parametric interaction of light with excited atomic hydrogen which is found in many regions of the universe .
|
in nature , organisms whose sizes differ by many orders of magnitude have been observed to switch between different modes of movement .for instance , the bacterium _ escherichia coli _ changes the orientation of one or more of its flagella between clockwise and anticlockwise to achieve a _ run - and - tumble _ like motion . as a result , during the runs , we see migration - like movement and during the tumbles , we see resting or local diffusion behaviour . to add to this complexity , it should be noted that the direction of successive runs are correlated . on a larger scaleone could consider migratory movements of vertebrates where individuals often travel large distances intermittent with stop - overs to rest or forage .an example , used in this paper , is the lesser black - backed gull ( _ larus fuscus _ ) .individuals of this species that breed in the netherlands migrate southwards during autumn .even though the scales involved in these two processes differ by many orders of magnitude , one can use a similar mathematical framework to model the observed motion . the use of mathematical models to describe the motion of a variety of biological organisms , including bumblebees , plants and zebra has been the subject of much research interest for several decades .early approaches were predominantly centred on the position jump model of motion , where agents instantaneously change position according to a distribution kernel and are interspersed with waiting periods of stochastic length .the position jump framework suffers from the limitation that correlations in the direction of successive runs are difficult to capture , this correlation however is present in many types of movement .furthermore , the diffusive nature of the position jump framework results in an unbounded distribution of movement speeds between successive steps . a related framework that is arguably more realistic for modellingthe motion of organisms is the velocity jump ( vj ) model , in which organisms travel with a randomly - distributed speed and angle for a finite duration before undergoing a stochastic reorientation event . in most formulations of the velocity jump process, there is an assumption that events occur as a poisson process , which is manifested as a constant rate parameter in the resulting differential equation . in the positionjump framework , non - exponentially distributed wait times and non - gaussian kernel processes have been formulated , although this led to fractional diffusion equations .recently , it has become clear how to extend the velocity jump framework to allow for more general distributions of interest . in many velocity jump models, it is assumed that resting states are largely negligible , this can be attributed to a focus on organisms with only momentary resting states , this has the benefit of alleviating some mathematical complexity whilst not changing the result significantly .however , in the work by othmer and erban , it was shown that resting states can be included and are sometimes required in order to obtain adequate fits to experimental data .our goal in this paper is to extend the work by friedrich _ et . to allow for resting states - which are non - negligible - following the methodology of othmer .the mathematical complexity of friedrich s model is such that finding solutions analytically or numerically is , in general , impractical . 
in the original paper , simplificationswere made which led to a fractional kramers - fokker - planck equation , which has a known analytic solution .however , the simplifications relevant to a physical system are seldom relevant to a biological one .for instance , the original formulation related to non - gaussian kinetics in a weakly damped system ; however , we are considering self - propelled particle models where biological agents generate their own momentum . in the absence of such obvious simplifications for our system ,we instead exploit methods to extract summary statistics from the governing equations , which may in turn be compared with experimental data . after presenting the model of interestwe derive the mean squared displacement ( msd ) .as we have high - quality data available relating to the movement of _ e. coli _ and _ l. fuscus _ , we show that the msd for the model and experimental data align .what is novel about our approach is that , provided the two discrete modes of operation constitute a good model , the parameters can be extracted on a microscopic scale prior to any numerical solution and then macroscopic behaviour can be derived _ without _ optimising or trying to fit data _a posteriori_. since the dynamics of the experimental data and those of the generalised velocity jump model achieve a close match , we explore numerically tractable simplifications to the equations of interest .most notably , we investigate the cattaneo approximation , following the work by hillen .finally , it should be noted that the model presented does not take into account interactions between biological agents or even interactions with the environment .whilst such effects are beyond the scope of the current study , it should be possible to extend the theory to incorporate these phenomena .in particular , the velocity jump process has roots in kinetic theory and as such , similar to how atoms attract and repel one another , models have been developed for biological agents to act comparably .equally , there is similar work detailing interactions between a biological agent and its environment , both fixed environments and signalling via diffusing chemical gradients .consider a biological agent that switches stochastically between running and resting behaviour . during a running phase, the organism travels with constant velocity ; during a resting phase , it remains stationary . upon resuming a run following a rest ,a new velocity is selected randomly .this motion is governed by three primary stochastic effects .we specify these by probability density functions ( pdfs ) , as given below .waiting time : the time spent during a resting phase , denoted is governed by the pdf , where .running time : the time spent during a running phase , denoted is governed by the pdf , where .reorientation : we allow velocities from one run to another to be correlated from before to after a rest .suppose the previous running phase ( pre - rest ) had an associated velocity , for velocity space in spatial dimensions , then we write the new ( post - rest ) velocity as which is newly selected upon entering a running phase .the selection of is dependent on and is governed by the joint pdf .we assume that this reorientation pdf is separable , so that where is a vector of length containing angles and is the speed . 
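to make the process concrete before the two-dimensional decomposition that follows, a minimal simulation sketch is given below (python). it assumes, purely for illustration, two spatial dimensions, a fixed running speed, exponentially distributed run and rest times, and a von mises turning kernel centred on the previous heading; the helper names and parameter values are placeholders rather than quantities estimated from data. the empirical mean squared displacement computed from such sample paths is what is later compared with the solution of the msd equations.

```python
import numpy as np

def simulate_path(t_end, rng, speed=1.0, kappa=2.0, mean_run=1.0, mean_rest=0.5):
    """simulate one two-state velocity jump path on [0, t_end] in 2d.

    runs and rests are exponentially distributed here for illustration; any
    positive distribution (e.g. rng.wald for an inverse gaussian) could be
    substituted. turning angles are von mises about the previous heading, for
    which the index of persistence is i1(kappa)/i0(kappa).
    """
    t, x = 0.0, np.zeros(2)
    heading = rng.uniform(0.0, 2.0 * np.pi)        # initial direction
    times, positions = [t], [x.copy()]
    running = True                                 # start in a running phase
    while t < t_end:
        if running:
            dt = rng.exponential(mean_run)         # run duration
            v = speed * np.array([np.cos(heading), np.sin(heading)])
            x = x + v * dt                         # ballistic displacement
        else:
            dt = rng.exponential(mean_rest)        # rest duration: no movement
            heading += rng.vonmises(0.0, kappa)    # reorient relative to old heading
        t += dt
        running = not running
        times.append(t)
        positions.append(x.copy())
    return np.array(times), np.array(positions)

def empirical_msd(t_grid, paths):
    """mean squared displacement averaged over simulated paths."""
    msd = np.zeros_like(t_grid)
    for times, positions in paths:
        # the last event overshoots t_end; interpolating onto t_grid truncates it
        xi = np.interp(t_grid, times, positions[:, 0])
        yi = np.interp(t_grid, times, positions[:, 1])
        msd += xi ** 2 + yi ** 2
    return msd / len(paths)

rng = np.random.default_rng(1)
paths = [simulate_path(20.0, rng) for _ in range(500)]
t_grid = np.linspace(0.0, 20.0, 200)
print(empirical_msd(t_grid, paths)[-1])
```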
in two dimensions ,the turning kernel is decomposed as follows ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the angle distribution : , requires the normalisation .the speed distribution : , requires the normalisation . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ to further reinforce the process we are describing , we give a simple gillespie algorithm for generating a sample path up until time .it should be noted that the sample path will need to be truncated as it will generate positions past the end time .choose state of particle , for instance , assume particle has just initiated a running state . by considering the density of particles in a running state and the density of particles in a resting state ,we can write down coupled differential equations for these states . we define to be the density of particles at position , with velocity at time and , the density of those particles resting at , having just finished a jump of velocity .note that this encodes an orientation to the resting state .the derivation for this two - state generalised velocity jump process through the use of laplace transforms is provided in appendix [ appa ] .our analysis leads to the following equations and where the delay kernels , for , are defined in laplace space by where is the laplace transform of the pdf for the running and waiting time respectively . when the waiting time is chosen as exponential , where is the dirac delta function . ], this is consistent with work by othmer and rosser .finding closed forms of is non - trivial for most choices of distribution . in appendix[ appb ] , we examine the small time behaviour of and identify the sizes of potential impulses at . for the remaining non - singular behaviour , in the caseswhere we know the laplace transform of , we then have an analytic expression for , which can be inverted numerically using either a talbot inversion or an euler inversion .equations ( [ p_forward][r_forward ] ) give us a system of delay - integro - partial differential equations with degrees of freedom . with this level of complexity ,a full analytic or numerical solution is impractical without first making simplifications .we therefore first consider how to estimate the second spatial moment , i.e. the mean squared displacement . for the test function , we consider for arbitrary density , this gives the expected value of over the space at time t , weighted by density . by using test functions , we associate as the number of particles in state and then , and as the mean squared displacement , the mean velocity - displacement and the mean squared velocity weighted by , respectively .we can then obtain a closed system of integro - differential equations for these quantities .it first requires however , that we make some assumptions on the turning kernel . 
by considering that the mean post - turn velocity has the same orientation as the previous velocity , we define the index of persistence via the relation informally, this means that turning angles between consecutive velocities have zero mean .we also require that the average mean squared speed is a constant this corresponds to a memoryless turning kernel in speed , i.e. .finally , for unconstrained motion where , we see that delays in space correspond to inclusion of other moments , i.e. and similarly for conservation of mass , i.e. , we see that equally , we obtain a system of equations for the mean squared displacement {\text{d}}s , \\ + \int_0^t \phi_\omega(t - s )d_r^2(s ) { \text{d}}s = - \frac{{\text{d}}d_r^2(t)}{{\text{d}}t}. \nonumber\end{aligned}\ ] ] for the mean velocity - displacement , we see that { \text{d}}s , \\ + \psi_d \int_0^t \phi_\omega ( t - s ) b_r(s ) { \text{d}}s , \nonumber\end{aligned}\ ] ] and { \text{d}}s.\ ] ] finally , for the second velocity moment : equations ( [ msd_first_eq])([msd_last_eq ] ) above correspond to a system of 8 equations , or 7 unique equations once we impose conservation of mass . in the next section , we solve these equations numerically , the integrals are calculated using the trapezium rule along with a crank - nicholson scheme for the remaining differential operators , both of these methods are second - order accurate .in this study , we consider experimental data relating to the bacterium _e. coli _ and the lesser black - backed gull _l. fuscus_. both of these exhibit somewhat similar behaviour , however at scales many orders of magnitude apart .there is a large collection of work relating to studying the run - and - tumble motion as exhibited in many flagellated bacteria . a case of particular interest to many is _ e. coli _ , perhaps due to the fact that its internal signalling pathways are less complex than those of other chemotactic bacteria .most available literature points to both the running and resting times being exponentially distributed .this exponential parameter can change as a response to its environment and has led to a multitude of papers showing that this mechanism leads to chemotaxis either towards nutrients or away from toxins .r0.4 figure1.pdf ( 1,24 ) ( 33,0 ) in our case however , we do not consider _e. coli _ in any chemical gradient but just swimming freely .the dataset used here has previously been described in studies by rosser _ et ._ . in brief , the data was obtained by performing video microscopy on samples of free - swimming _ e. coli _ , from which tracks were extracted using a kernel - based filter .the tracks were subsequently analysed using a hidden markov model to infer the state ( running or resting ) attributed to the motion between each pair of observations in a track . from the annotated tracks ,it is possible to extract the angle changes observed between running phases and parameters for the exponential running and waiting pdfs along with speed distributions . in figure [ vonmises_ecoli ] , we see that from run - to - run , the distribution of angles is approximately a wrapped normal distribution . for mathematical ease ,consider the von mises distribution as an approximation as plotted in red and given by the probability density function where is the modified bessel function of order zero . by assuming , i.e. 
symmetry around the previous direction , we can specify , and find through maximum likelihood estimation .it has been shown that for the choice of a von mises distribution , in two dimensions ( ) , the index of persistence is given by .figure2.pdf ( 3,10 ) ( 44,0)time ( s ) it should be noted that from the literature , _ e. coli _ is thought to have a bi - modal distribution around the previous direction , the validity of this is hard to confirm as previous data was hand annotated and it is hard to specify the state of the bacterium when diffusion effects are also in place . whilst we had more data available to us and used automated tracking methods , it could well be that our method heavily biases walks towards normally distributed reorientation . through the hmm techniqueas outlined in , estimates for the exponential parameters were found to be and . the mean squared speed whilst running was also calculated to be . in figure[ msd_ecoli ] , we plot the mean squared displacement over time .we clearly see that over the average of 1868 paths , we get a very good match between theory and experiment .we note that the videos were taken from a fixed position , where bacteria would swim in and out of the shot . by considering the average speeds of _e. coli _ along with the size of the viewing window , one can stipulate that by only considering the msd before 4 seconds , we can achieve a good estimate .note that we lose a small amount of data over time as bacterium swim out of the observation window , at later times this ruins the validity of the msd curve .in this section we consider lesser black - backed gulls that breed on texel ( the netherlands ) . during their non - breeding period ( august to april ) , these birds interchange between localised movements ( or resting ) and long distance movements ( migration ) . during the resting mode birds travel up to 50 km butreturn to a central place every day , whereas during the migration mode birds do not return to the central place and can travel several hundreds of kilometers per day .one point of interest is that whilst the resting periods can last months on end , the migrations may only last for a few days on end .see figure [ bird_sample_path_map ] for a section of a sample path centred around london .l0.4 the bird tracking data were collected by the uva - bits system and contains tracks gathered from 10 birds over the months july until january in the years 2012 and 2013. approximately every few hours minutes . ], a recording is taken of a global time - stamp along with the bird s current latitude and longitude coordinates . to identify the state of a given bird ,we create a signal centred around a time point of interest which we threshold to determine whether the bird is either undergoing local or migratory behaviour . by considering all gpscoordinates in a 24 hour window , we calculate the diameter of the convex hull ( or diameter of a minimum bounding circle ) of the set by using the haversine formula .this signal is sampled times a day .if the value of this signal is low , points are clustered together ( local resting behaviour ) otherwise they are spread apart ( migratory behaviour ) . at the cost of including some erroneousexceptionally short rests , we can set a low threshold value of ; the presence of short rests is then fixed by discarding any resting phases shorter than 2 days . 
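as a concrete illustration of this segmentation step, the sketch below (python) computes the 24 hour diameter of the gps fixes with the haversine formula, thresholds it to separate local (resting) days from migratory days, and then discards resting phases shorter than two days. the 50 km threshold and the helper names are illustrative assumptions only; the analysis of the tracking data may differ in detail.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """great-circle distance between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2.0) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2.0) ** 2
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def daily_diameter(lats, lons):
    """diameter (maximum pairwise distance) of the fixes in one 24 h window."""
    best = 0.0
    for i in range(len(lats)):
        d = haversine_km(lats[i], lons[i], lats, lons)
        best = max(best, float(d.max()))
    return best

def label_days(day_windows, threshold_km=50.0, min_rest_days=2):
    """label each day as resting (True) or migrating (False).

    threshold_km is an illustrative value; resting phases shorter than
    min_rest_days are merged back into the surrounding migration.
    """
    resting = [daily_diameter(la, lo) < threshold_km for la, lo in day_windows]
    i = 0
    while i < len(resting):
        j = i
        while j < len(resting) and resting[j]:
            j += 1                                  # extend the current resting phase
        if resting[i:j] and (j - i) < min_rest_days:
            for k in range(i, j):
                resting[k] = False                  # discard an over-short rest
        i = max(j, i + 1)
    return resting
```

applied to each bird's track, the resulting labels give the run and rest durations from which the duration distributions and speeds quoted below are estimated.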
in comparison ,the running periods can virtually be of any length as there have been instances of a bird flying exceptionally long distances over a week .figure4.pdf ( 5,12 ) ( 39,0)time ( days ) as we only had the data for birds available , we divided their sample paths up into day intervals after approximating distributions of interest , leading to calculation of the msd over 62 sample paths .in contrast to the _ e. coli _ dataset , we see that running and waiting times are non - exponentially distributed .the distribution of running and waiting times were approximated by inverse gaussian distributions and .the speed distribution gave an estimate for the mean squared running speed as and again using a von mises distribution in angle , we find . in figure [ msd_gulls ], we plot the mean squared displacement in kilometres squared against time in days .as there were fewer sample paths available , the empirical mean square displacement curve is not very smooth and as a result the agreement with the theoretical curve is less good than in the bacterial case .however as the majority of the gulls were in a resting state to begin with , we do capture the initial delay before a linear growth stage . as the gulls are frequently resting as opposed to migrating , we see the data for the gulls ( in blue ) undergoing a style of step function where a small number of gulls undergoing fast movement quickly changes the msd for the whole population . as the number of sample paths increases ,this effect will smooth out .with both examples , as time passes , we see that for large .it is well known that linear mean squared displacement corresponds to the solution of the diffusion equation , or at least diffusive - like behaviour .it would now be pertinent to see if a diffusion approximation could be found for large time .we now construct a large time effective diffusion equation . by first considering equations ( [ p_forward])([r_forward ] ), we transform into laplace space , where large values of correspond to small values of the laplace variable .we then carry out a taylor expansion of the delay kernels to remove the convolutions in time ( see equations ( [ p_laplace])([r_laplace ] ) in appendix [ appa ] for details ) . converting back to the time domain , one obtains and there are now two further steps to obtain an effective diffusion equation .first , by considering successively greater monomial moments in the velocity space , one obtains a system of -equations where the equation for the time evolution of moment corresponds to the flux of moment .it therefore becomes necessary to ` close ' the system of equations to create something mathematically tractable .we use the cattaneo approximation for this purpose .once a closed system of equations has been found , we then carry out an asymptotic expansion where we investigate the parabolic regime to obtain a single equation for the evolution of the density of particles at large time .note that it would be possible to carry out a similar process for smaller time behaviour by taylor expanding the spatial delays in the convolution integrals .asymptotic analysis would then have to be carried out to simplify the remaining convolution .we can multiply equations ( [ large_time_p_for])([large_time_r_for ] ) by monomials in and integrate over the velocity space to obtain equations for the velocity moments the equations relating the terms are given below . 
for initial integration over the velocity space ,we see and when summing equations ( [ m0_p ] ) and ( [ m0_r ] ) , we see that mass flux is caused by the movement of particles in the running state only , i.e. for multiplication by and integrating , we obtain equations and we would now like to approximate the term to close the system . we make use of the cattaneo approximation to the velocity jump equation as studied by hillen . for the case where the speed distribution is independent of the previous running step , i.e ,we approximate by the second moment of some function , such that has the same first two moments as and is minimised in the norm weighted by . this is essentially minimising oscillations in the velocity space whilst simultaneously weighting down speeds which would be unlikely to occur .we introduce lagrangian multipliers and and then define by the euler - lagrange equation , we can minimise to find that we now use the constraints to find and . for have where is the -sphere centred at the origin .notice also that the by symmetry .for the first moment , we calculate where is the closure of , i.e. the ball around the origin . therefore , we can stipulate the form for as we now approximate the second moment of by the second moment of . so in the above equations , we simply approximate .finally , we rescale our equations using the parabolic regime for arbitrary small parameter . by putting our variables into vectors and , we drop the hats over the rescaled variables and rewrite our equations as where ^t ] .our time derivative matrices are given by , \quad b = \left [ \begin{array}{cc } 1 + \bar{\phi}_\tau ' ( 0 ) & - \psi_d\bar{\phi}_\omega ' ( 0 ) \\ - \bar{\phi}_\tau ' ( 0 ) & 1 + \bar{\phi}_\omega ' ( 0 ) \\ \end{array } \right],\ ] ] our flux matrix is given as .\ ] ] finally our source terms are \quad d = \left [ \begin{array}{cc } - \bar{\phi}_\tau ( 0 ) & \psi_d \bar{\phi}_\omega ( 0 ) \\\bar{\phi}_\tau ( 0 ) & - \bar{\phi}_\omega ( 0 ) \\ \end{array}\right]\ ] ] by using the regular asymptotic expansion for and , we obtain the set of equations providing , solving these in order gives rise to the differential equation for total density for we now wish to find the values of and . for probability distributions defined over the positive numbers with pdf , we see that the laplace transform can be taylor expanded as for small .therefore , by putting these terms into the expression given by equation ( [ f_psi_conversion ] ) , provided that all moments are finite , we see that for mean and variance of distribution , therefore .\ ] ] it is noteworthy that the variance of the running time distribution contributes to the diffusion constant , while it is independent of the variance of the waiting time distribution . furthermore , when the running time distribution is exponentially distributed , the correction is identically zero .so we can view our diffusion constant as the contribution from the exponential component of the running time distribution , plus an additional term for non - exponential running times .when referring back to the experimental data , it can be seen that by the end of the seconds , the _ e. coli _ has entered into the diffusive regime with .the _ l. 
fuscus _ however is yet to reach this state; we can predict that when it does, the corresponding value of the diffusion constant will be , and the solution of the mean squared displacement equations over longer time periods suggests that this is true. we now carry out a comparison between the underlying differential equation and the gillespie simulation. in figure [ comp_1 ], we see the solution to the diffusion equation on the plane for a delta function initial condition, which takes the form of a bivariate gaussian, compared with data simulated using the algorithm given in section [ twostategvj ]. for the gillespie simulation, all sample paths are initialised at the origin with fixed speed equal to unity and uniformly random orientation; half the sample paths are initialised in a run and half are initialised in a rest. therefore all plots have the parameters , and we specify , with plots shown at . on the top row, we see the diffusion approximation on the left compared with a velocity jump process in which both the running and waiting times are sampled from an exponential distribution with the means as stated; our effective diffusion constant for large time is . on the bottom row, we see the diffusion approximation on the left compared with a velocity jump process in which the running time is distributed , giving and , so the diffusion constant is . the waiting time is distributed ; the high variance of the waiting time is chosen such that the simulation relaxes towards the diffusion approximation quickly. it was seen from numerical simulations that there is a relationship between the choice of distribution for the waiting time and the index of persistence which encourages the system to relax rapidly into the parabolic regime. it is necessary for the system to relax quickly in order for the diffusion approximation to be a valid method of comparison. these two distributions were chosen to illustrate the importance of the diffusion correction term; this is illustrated in figure [ comp_1 ] by the difference between the top and bottom rows, which differ only in this correction term. [ figure comp_1 : runs were carried out with half the sample paths initialised in a running phase and half initialised in the resting phase. ] another point of interest is that one can model distributions other than exponential with different means and still achieve the same effective diffusion constant through careful selection of the variance. an example is shown in figure [ comp_2 ], where the diffusion constant is recovered by changing the running distribution to . this gives a mean run time of and variance , and compares well to the first row in figure [ comp_1 ]. for this simulation , of the sample paths were initialised in a run and the remainder in a resting state so that
the system was again encouraged to relax quickly .in this study , we have used a single modelling framework to describe two highly distinct biological movement processes , occurring in bacteria and birds . in spite of the significant mechanistic differences between the two species ,their phenomenological similarities nonetheless persist over length scales of 10 orders of magnitude .we recover the correct behaviour including the non - local delay effects due to non - exponential waiting times .this formulation could be considered a particularly phenomenological approach as it outlines a way for observables to directly parameterise movement equations .this is counter to some previous literature where quantities such as diffusion constants were left to the reader to identify .a notable advantage of the modelling framework proposed here is the straightforward interpretation of the distributions and parameters involved , all of which have naturally intuitive meanings .there is , unfortunately , no unified approach to extract such quantities of interest from biological movement data .this was demonstrated in section [ compexptheo ] , in which different approaches were taken to obtain the required parameters .nonetheless , such methods are the focus of much current research effort , and we therefore believe that approaches such as ours will become increasingly relevant in the future . as far as the authors are aware , there has been no unified approach to tackling this problem .finally , we demonstrated the novel result that for the underlying stochastic process of interest , the variance of the running time contributes to the large time diffusion constant .this raises the key question : when does the parabolic regime emerge ?our results also act as a warning against using the exponentially distributed running times as an approximation for other distributions , as whilst their mean values may align , the underlying dynamics can change drastically as shown with the link between figures [ comp_1 ] and [ comp_2 ] .regarding the accuracy of this generalised velocity jump framework , it should be realised that the underlying models for the examples given could be improved by making the model more specific to the agent of interest .below we discuss some possible alterations to the model - however at the cost of species generality . for _ e. coli _ ,the bacterium is always subject to diffusion ; in theory , this should add to its mean squared displacement while resting and may also affect running phases via rotational diffusion .if one wanted to incorporate a small fix to the resting state , it would be simple to add a diffusion term in space to equation ( [ r_forward ] ) .however , for a more comprehensive solution to the problem , to retain the correlation effects with turning kernel , the equation ( [ p_forward ] ) would have a rotational diffusion term added , which is achieved via a laplacian in the velocity space .furthermore , equation ( [ r_forward ] ) would have to have to retain its defunct velocity field for orientation but also to include another velocity variable to allow for movement due to diffusion .a particularly interesting result would be to explore whether the gaussian - like reorientation were a resultant effect from this rotational diffusion . by testing differing viscosities of fluid for the swimming _, one could undoubtedly make headway using this approach , work already initiated by rosser _ et ._ . for the _ l. 
fuscus _, there are many physical and ecological phenomena which could to be built into the model ; these range from the day - night cycles , in which the bird is reluctant to fly long distances through the night , to geographical effects , where the bird may follow the coastline for navigation .one could also consider environment factors , such as wind influence and availability of food resources . in the work by chauviere _ , the authors consider the migration of cells along an extra - cellular matrix . using a similar formulation to ours , but only considering exponentially distributed waiting times , cells are modelled to preferentially guide themselves along these extra - cellular fibres .it would not be difficult to imagine modifying this work to show how gulls may align their trajectory along coastlines , or using other geographical markers .we can motivate the set of equations ( [ p_forward]-[r_forward ] ) by considering the temporary variables : * the density of particles at position , with velocity at time , _ having just started a jump_. * the density of particles at position , _ having just finished a jump _ of velocity at time and _ just started a rest_. this leads to densities : * the density of particles at position , with velocity at time , being in a running state .we should note that we can relate to via the equation for being the probability that a jump lasts longer than , clearly .* the density of particles at position , having just finishing a jump of velocity at time in a resting state .equally , there is the relation between and , this is for being the probability that a rest lasts longer than , again . by assuming that at time , all particles are initiated into the beginning of a run with distribution , we can relate to previous times by the relationship again by assuming that particles initiated into the beginning of a rest with distribution , there is the recursive relation for taking the laplace transform in time of equations ( [ p_in_eta ] ) and ( [ r_in_nu ] ) , we find equally , taking the laplace transform of ( [ eta_in_nu ] ) and ( [ nu_in_eta ] ) , we see noting that in laplace space by eliminating and , we derive paired differential equations in laplace space and where , as stated previously ] must be satisfied by - whether this is useful or not is another question ! ] reverting back to the temporal variable , we obtain equations ( [ p_forward][r_forward ] ) .the integro - differential equations for mean squared displacement , or indeed any other differential equation above , can now be easily solved by a variety of methods for numerical integration . however , in the case where one of the waiting times is exponentially distributed , has been shown to become a multiple of the delta - function at the origin .it can be seen that many other distributions also have a numerical impulse at the origin , numerically integrating over an impulse is often difficult if not impossible so we carry out asymptotic analysis to evaluate the magnitude of said impulse . to investigate the small time behaviour of , we shall consider the small time behaviour of then transform to laplace space to consider large behaviour and subsequently switch back . 
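as a cross - check on any such numerical integration, the mean squared displacement can also be estimated directly by stochastic simulation of the run - and - rest process. the sketch below is ours rather than the authors' code: the speed is fixed at unity, each run draws a fresh uniformly random orientation (ignoring any angular persistence), and the gamma / exponential parameter choices are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_path(t_end, draw_run, draw_rest, speed=1.0, start_running=True):
    """Simulate one 2-D run-and-rest path up to time t_end.

    draw_run / draw_rest are callables returning a single duration; the speed
    is fixed at unity and each new run picks a fresh uniformly random
    orientation, as in the simulations described in the text.
    """
    t, pos = 0.0, np.zeros(2)
    running = start_running
    while t < t_end:
        if running:
            tau = draw_run()
            theta = rng.uniform(0.0, 2.0 * np.pi)
            dt = min(tau, t_end - t)
            pos += speed * dt * np.array([np.cos(theta), np.sin(theta)])
        else:
            tau = draw_rest()
            dt = min(tau, t_end - t)
        t += dt
        running = not running
    return pos

def mean_squared_displacement(n_paths, t_end, draw_run, draw_rest):
    """Empirical MSD at time t_end; half the paths start running, half resting."""
    sq = 0.0
    for i in range(n_paths):
        p = simulate_path(t_end, draw_run, draw_rest, start_running=(i % 2 == 0))
        sq += p @ p
    return sq / n_paths

# Hypothetical distributions: Gamma(2, 1/2) running times (mean 1) and
# exponential resting times (mean 1); any other choice can be substituted.
draw_run = lambda: rng.gamma(shape=2.0, scale=0.5)
draw_rest = lambda: rng.exponential(1.0)

for t_end in (10.0, 50.0, 100.0):
    msd = mean_squared_displacement(2000, t_end, draw_run, draw_rest)
    # In the diffusive regime MSD ~ 4 D t in two dimensions, so msd / (4 t)
    # gives a crude Monte Carlo estimate of the effective diffusion constant.
    print(t_end, msd, msd / (4.0 * t_end))
```

in the diffusive regime the msd grows like 4 D t in two dimensions, so the last printed column gives a crude monte carlo estimate of the effective diffusion constant that can be set against the expression discussed above. the asymptotic treatment of the small - time impulse now proceeds as follows.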
by assuming the expansion of to be of the form then subsequently in laplace space using the relation ( [ f_psi_conversion ] ) and considering the minimal contribution of the denominator , we find in the case where , we can subsequently invert to find from the above analysis , it should be clear we can expect an impulse at the origin for the case when or with . by integrating between and , we see that { \text{d}}t \sim f_0 + f_1 \varepsilon^\alpha \quad \text{for } \alpha > 0,\text { as } \varepsilon \rightarrow 0 .\ ] ] , _ mathematical modeling of collective behavior in socio - economic and life sciences _ , birkhuser , 2010 .chapter : `` particle , kinetic , and hydrodynamic models of swarming '' , by j. a. carrillo , m. fornasier , g. toscani and f. vecil .
|
there are various cases of animal movement where behaviour broadly switches between two modes of operation, corresponding to a long - distance movement state and a resting or local movement state. here a mathematical description of this process is formulated, adapted from friedrich _ et al . _ . the approach allows the specification of _ any _ running or waiting time distribution along with any angular and speed distributions. the resulting system of partial integro - differential equations is tumultuous, and it is therefore necessary both to simplify it and to derive summary statistics. an expression for the mean squared displacement is derived which shows good agreement with experimental data from the bacterium _ escherichia coli _ and the gull _ larus fuscus_. finally, a large - time diffusive approximation is considered via a cattaneo approximation. this leads to the novel result that the effective diffusion constant depends on the mean and variance of the running time distribution, but only on the mean of the waiting time distribution.
|
peaks over threshold modelling of univariate time series has been common practice since the seminal paper of , who advocated the use of the asymptotically motivated generalized pareto ( gp ) distribution as a model for exceedances over high thresholds .the multivariate generalized pareto distribution was introduced in , ( * ? ? ?* chapter 8) , and , but still , statistical modelling using this approach has thus far received relatively little attention .partially this is because theoretically equivalent dependence modelling approaches , based on the so - called `` point process approach '' , have already been in existence for some time .nonetheless , the multivariate gp distribution has conceptual advantages over that of the point process representation , in so much as it represents a proper multivariate distribution on an `` l - shaped '' region , where at least one variable is extreme , ; see figure [ fig : supp ] .furthermore , the gp distribution permits modelling of data on this region without the need to perform any marginal transformation , which is common in other extremal dependence modelling approaches .the justification for the use of multivariate gp distributions as models for the tails of essentially arbitrary distributions is the limit property in .there is a growing body of probabilistic literature devoted to multivariate gp distributions ( e.g. , , , , ) . to our knowledge , however , there are only a few papers that exploit these as a statistical model . this paper forms a companion to , which collates new and existing probabilistic results on multivariate gp distributions .we briefly recall necessary elements from this paper , but our main focus is on statistical modelling and inference .the premise of statistical modelling using limiting distributions such as the gp distribution is that _ we assume the limit model holds sufficiently well in a sufficiently extreme region_. care is needed , however , to ensure that the postulated model is indeed appropriate .an important consideration when modelling multivariate extremes is the concept of asymptotic dependence .random variables and with distribution functions and , respectively , are said to be asymptotically dependent if = \frac{{\mathbb{p}}[f_1(y_1 ) >q , f_2(y_2 ) > q]}{1-q } \to \chi_{1:2 } > 0 , \qquad q\to1.\end{aligned}\ ] ] existence of the limit is an assumption , and the limit being positive characterizes asymptotic dependence , whereas defines asymptotic independence .an extension of to a general dimension is given by }{1-q } \to \chi_{1:d}.\ ] ] when , there is a positive probability of the most extreme events occurring simultaneously in all variables .multivariate gp distributions are useful mostly when .when then the corresponding _ limiting _ gp distribution does not place any mass in -dimensional space , with all the mass lying in some lower dimensional subspace instead . this situation is challenging to deal with , as in practice the data do belong to the full -dimensional space .whilst modelling of such possibilities is not precluded in the censored likelihood framework that we adopt , we do not consider it further here . yetother subtleties arise , since even when , this does not rule out the possibility of a distribution placing mass on some lower - dimensional subspace .however , for the remainder of the paper , we consider the simplified situation where there is no mass on any lower - dimensional subspace . the contributions of this paper are the following . 
in section[ sec : background ] we present some of the key results and properties of multivariate gp distributions that are useful for statistical modelling . in section [ sec : models ] we introduce a construction device for gp - distributed random vectors and use it to develop a variety of new and existing models in section [ sec : examples ] .inference using a censored likelihood - based approach is detailed in section [ sec : inference ] , together with threshold selection and goodness - of - fit diagnostics . in section [ sec : applications ] we fit gp models to returns of four uk - based banks and to rainfall data in the context of landslide risk estimation , showing that multivariate gp modelling can be more useful for financial risk handling than one - dimensional methods , and that our models can respect physical constraints in a way which was not possible before .we conclude with a discussion in section [ sec : discussion ] .let be a random vector in with distribution function .a common and broadly - applicable assumption on is that it is in the so - called _ max - domain of attraction _ of a multivariate max - stable distribution , .this means that if are independent and identically distributed copies of , then one can find sequences and such that \to g(\bm{x } ) , \label{eq : maxstabconv}\end{aligned}\ ] ] with having non - degenerate margins . in and throughout ,vectors are boldface , and operations involving vectors are to be interpreted componentwise , with shorter vectors being recycled if necessary .the resulting max - stable distribution has marginal location , scale and shape parameters denoted , and , respectively , and the lower endpoints of its support are determined by these parameters . if denotes this vector of lower endpoints , and , its components are if and otherwise .we assume , which is always possible through appropriate choice of and .max - stable distributions are not a primary concern of this paper , but the above mild assumption leads us to an analogous convergence theorem for multivariate threshold exceedances . specifically , if convergence holds , then where follows a multivariate gp distribution .we let denote the distribution function of , and its marginal distributions .typically the margins are not univariate gp , due to the difference between the conditioning events and in the one - dimensional and -dimensional limits. however , the marginal distributions conditioned to be positive are gp distributions .that is , writing , we have = ( 1+\gamma_j x /\sigma_j)^{-1/\gamma_j}_+,\end{aligned}\ ] ] where and are as previously defined , giving the link between marginal parameterizations for the two convergences . the full link between and , and we say that such a and are _ associated_. the support of the multivariate gp distribution is included in the set the dependence structure of does not have a finite - dimensional parameterization , but it does satisfy certain properties that can be used to construct flexible parametric models . in section [ sec : examples ] we give several examples , some of which are gp models that are associated to well - known max - stable models .following common practice in the statistical modelling of extremes , may be used as a model for data which arise as multivariate threshold excesses in the sense . 
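in practice, whether a gp model of this kind is appropriate is often first judged by estimating the tail dependence coefficient introduced above at a sequence of quantile levels; values bounded away from zero as the level increases indicate asymptotic dependence. the following empirical estimator is a minimal sketch of ours (ranks are used as pseudo - uniform margins), not code from the paper.

```python
import numpy as np

def chi_hat(y1, y2, q):
    """Empirical estimate of P[F1(Y1) > q, F2(Y2) > q] / (1 - q).

    y1, y2 : 1-D arrays of paired observations.
    q      : quantile level in (0, 1); values of chi_hat bounded away from
             zero as q -> 1 suggest asymptotic dependence.
    """
    u1 = (np.argsort(np.argsort(y1)) + 1) / (len(y1) + 1)  # ranks -> pseudo-uniforms
    u2 = (np.argsort(np.argsort(y2)) + 1) / (len(y2) + 1)
    joint = np.mean((u1 > q) & (u2 > q))
    return joint / (1.0 - q)

# Illustration on synthetic data: perfectly dependent vs independent pairs.
rng = np.random.default_rng(0)
z = rng.standard_normal(10000)
print([round(chi_hat(z, z, q), 2) for q in (0.9, 0.95, 0.99)])  # close to 1
w = rng.standard_normal(10000)
print([round(chi_hat(z, w, q), 2) for q in (0.9, 0.95, 0.99)])  # decreases towards 0
```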
in particular ,if is a threshold that is `` sufficiently high '' in each margin , then from , can be approximated by a member of the class of multivariate gp distributions , with , , the marginal exceedance probabilities , and the dependence structure to be estimated . in practicethe truncation by the unknown vector is only relevant when dealing with mass on lower - dimensional subspaces .following the discussion in section [ sec : extremaldependence ] , we suppose that is to be approximated by a gp distribution .a member of the class of gp distributions has a representation on as where is a `` standard form '' gp random vector , that is , a gp vector on a standardized scale and .its construction will be further discussed in section [ sec : models ] . for , the corresponding component of the right - hand side of equationis simply .the following are useful properties of the gp distributions ; for further details and proofs we refer to .[ [ sec : ts ] ] threshold stability .+ + + + + + + + + + + + + + + + + + + + gp distributions are _ threshold stable _ , meaning that if follows a gp distribution with marginal parameters and then for such that and , this property states that if we increase , or at least do not decrease , the level of the threshold in each margin , then the distribution of conditional excesses is still gp , with a new set of scale parameters , but retaining the same vector of shape parameters .a special role is played by the levels : these have the stability property that for any set it holds that , for , = { \mathbb{p}}[{\bm{x}}\in a ] / t,\ ] ] where .this follows from equation along with the representation of to be given in equation .the -th component of , , is the quantile of .equation provides one possible tool for checking if a multivariate gp is appropriate ; see section [ sec : diagnostics ] .[ [ sec : condmarg ] ] lower dimensional conditional margins .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + lower dimensional margins of gp distributions are typically not gp , as the conditioning event leading to the distribution involves all variables , and not only the variables in a lower dimensional margin .let , with , and similarly for other vectors .then does follow a gp distribution .combined with the threshold stability property above , we also have that if is such that and then follows a gp distribution . [ [ sec : chi ] ] constant conditional exceedances .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + [ [ sec : sum ] ] sum - stability under shape constraints + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + if follows a multivariate gp distribution , with scale parameter and shape parameter , then for weights such that with positive probability , we have that is , weighted sums of components of a multivariate gp distribution with equal shape parameters , conditioned to be positive , follow a univariate gp distribution with the same shape parameter and with the scale parameter equal to the weighted sum of the marginal scale parameters .the dependence structure of the gp distribution does not affect this result , but it does affect the probability of the conditioning event , i.e. 
, the probability that the sum of components is positive .further details can be found in .we focus on how to construct suitable densities for the random vector , which through equation , leads to densities for the multivariate gp distribution with marginal parameters and .let be a unit exponential random variable and let be a -dimensional random vector , independent of . define .then the random vector has the required properties to be a gp vector with support included in the set and with and ( interpreted as the limit for for all ) .moreover , _ every _ such gp vector can be expressed in this way .the probability of the -th component being positive is = { \mathbb{e } } [ e^{t_j - \max(\bm{t } ) } ] ] , i.e. , the probability that the -th component exceeds its corresponding threshold given that one of the components does .suppose has a density on . by theorem 5.1 of , the density of given by one way to construct models therefore is to assume different distributions for , which provide flexible forms for , and for which ideally the integral in can be evaluated analytically .one further construction of gp random vectors is given in .if is a -dimensional random vector with density and such that < \infty ] for .formulas and can be obtained from one another via a change of measure linking and . where and take the same form , then the similarity in integrals between and means that if one can be evaluated , then typically so can the other ; several instances of this are given in the models presented in section [ sec : examples ] .what is sometimes more challenging is calculation of the normalization constant = \int_0^\infty \{1-f_{\bm{u}}(\log t \bm{1})\ } \,{\ ,\mathrm{d}}t ] .the support for each gp density given in sections [ sec : indep ] and [ sec : mvn ] is . in section [ sec : structured ] we exhibit a construction of in with support depending on and . in all models , identifiability issues occur if or have unconstrained location parameters , or if has unconstrained scale parameters . indeed , replacing or by or , respectively , with and , lead to the same gp distribution ( * ? ? ?* proposition 1 ) . a single constraint , such as fixing the first parameter in the vector , is sufficient to restore identifiability .let be a random vector with independent components and density , so that , where are unspecified densities of real - valued random variables .the dependence structure of the associated gp distributions is determined by the relative heaviness of the tails of the marginal distributions : roughly speaking , if one component has a high probability of being `` large '' compared to the others , then the dependence is weaker than if all components have a high probability of taking similar values . throughout, is such that .let , \qquad \alpha_j>0 , \ , \beta_j\in\mathbb{r}.\ ] ] [ [ case - f_bmt - f_bmv . ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + density is if then the integral can be explicitly evaluated : [ [ case - f_bmu - f_bmv . 
] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the marginal expectation of the exponentiated variable is = \begin{cases } e^{\beta_j } \gamma(1 - 1/\alpha_j ) , & \alpha_j>1 , \\ \infty , & \alpha_j \leq 1 .\end{cases}\ ] ] for , density is if then this simplifies to : observe that if in addition to , we also have , then this is the multivariate gp distribution associated to the well - known _logistic _ max - stable distribution .let , \qquad \alpha_j>0,\,\beta_j\in\mathbb{r}.\ ] ] as the gumbel case leads to the multivariate gp distribution associated to the logistic max - stable distribution , the reverse gumbel leads to the multivariate gp distribution associated to the _ negative logistic _ max - stable distribution .calculations are very similar to the gumbel case , and hence omitted .let [ [ case - f_bmt - f_bmv.-1 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + density is [ [ case - f_bmu - f_bmv.-1 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the expectation of the exponentiated variable is = 1/ \left\{e^{\beta_j}(\alpha_j+1)\right\} ] is the component of with the same index as ( thus the } ] , hence finite for all permitted parameter values .density is } \prod_{j=1}^d \left ( \frac{e^{\alpha_j x_j}}{\gamma(\alpha_j ) } \right ) \int_0^\infty t^{\sum_{j=1}^d \alpha_j } e^{- t \sum_{j=1}^d e^{x_j } } { \ , \mathrm{d}}t \\ & = \frac{1}{{\mathbb{e}}[e^{\max(\bm{u } ) } ] } \frac{\gamma\left(\sum_{j=1}^d \alpha_j+1\right)}{\prod_{j=1}^d \gamma ( \alpha_j ) } \frac { e^{\sum_{j=1}^d \alpha_j x_j - \max(\bm{x})}}{(\sum_{j=1}^d e^{x_j})^{\sum_{j=1}^d \alpha_j+1}}. \ ] ] the normalization constant is & = \frac{\gamma \left ( \sum_{j=1}^d \alpha_j + 1\ \right)}{\prod_{j=1}^d \gamma ( \alpha_j ) } \int_{\delta_{d-1 } } \max ( u_1,\ldots , u_d ) \prod_{j=1}^d u_j^{\alpha_j - 1 } { \ , \mathrm{d}}u_1 \cdots { \ , \mathrm{d}}u_{d-1},\end{aligned}\ ] ] where ^d : u_1 + \cdots + u_d = 1\ } ] is finite for all permitted parameter values , where denotes the diagonal element of .density is } \int_{-\infty}^\infty \frac{(2\pi)^{-d/2}}{|\sigma|^{1/2}}\exp\left\{-\tfrac{1}{2}({\bm{x}}-\bm{\beta}-s \bm{1})^t\sigma^{-1}({\bm{x}}-\bm{\beta}-s \bm{1 } ) + s\right\ } { \ , \mathrm{d}}s\\ & = \frac{(2\pi)^{(1-d)/2}|\sigma|^{-1/2}}{{\mathbb{e}}[e^{\max(\bm{u})}](\bm{1}^t\sigma^{-1}\bm{1})^{1/2 } } \exp\left\{-\tfrac{1}{2}({\bm{x}}-\bm{\beta})^t a ( { \bm{x}}-\bm{\beta } ) + 2\frac{({\bm{x}}-\bm{\beta})^t\sigma^{-1}\bm{1}-1}{\bm{1}^t\sigma^{-1}\bm{1 } } \right\},\end{aligned}\ ] ] with as in . the distribution for this case is already known ; it is the gp distribution associated to the brown resnick or hsler reiss max - stable model .a variant of the density formula with = 1 ] , where is the zero - mean multivariate normal distribution function with covariance matrix .this normalization constant can also be expressed as a sum of multivariate normal distribution functions , see . observe that in high dimensions , the normalization constant for the density based on may be onerous to compute , whilst the density based on does not require this . 
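to make the construction above concrete, the following sketch simulates from the gp model associated with a multivariate normal choice of the latent vector. it assumes — since the symbols are elided in the extracted text — that the standard - form vector is built as z = e + t - max(t), with e a unit exponential independent of t, and that the marginal representation quoted earlier is x_j = sigma_j (exp(gamma_j z_j) - 1) / gamma_j, read as sigma_j z_j when gamma_j = 0. all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def rmvgpd(n, mu, Sigma, sigma, gamma):
    """Draw n vectors from a GP model built from Gaussian T, under the assumed
    construction: E ~ Exp(1), T ~ N(mu, Sigma) independent, Z = E + T - max(T)."""
    E = rng.exponential(1.0, size=(n, 1))
    T = rng.multivariate_normal(mu, Sigma, size=n)
    Z = E + T - T.max(axis=1, keepdims=True)
    gamma = np.asarray(gamma, dtype=float)
    safe = np.where(np.isclose(gamma, 0.0), 1.0, gamma)        # avoid division by zero
    X = np.where(np.isclose(gamma, 0.0),
                 sigma * Z,
                 sigma * (np.exp(gamma * Z) - 1.0) / safe)
    return X

mu = np.array([0.0, 0.5, -0.5])                                # hypothetical parameters
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
X = rmvgpd(20000, mu, Sigma, sigma=np.ones(3), gamma=np.zeros(3))
# By construction at least one component of Z equals E > 0, so at least one
# component of X is positive: the mass sits on the region where some variable
# is "extreme".
print(np.mean((X > 0).any(axis=1)))                            # prints 1.0
```

simulation of this kind also offers a convenient way to check fitted models against the threshold - stability property mentioned earlier.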
in sections [ sec : indep ] and [ sec : mvn ] we considered several distributions for and .here we present a model for based on cumulative sums of exponential random variables .cumulative sums will lead to a vector whose components are ordered ; for the components of the corresponding gp vector to be ordered as well , we will assume that and .this model will be of interest in section [ sec : rainfall ] , where we will focus on modelling cumulative precipitation amounts which may trigger landslides .[ [ case - bmgamma - bm0 . ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + by construction , the densities and coincide since .let be the random vector whose components are defined by where are the rate parameters , i.e. , = \exp \left(-\lambda_j x_j \right) ] ( see section [ sec : standard ] ) .thus one would compare ] = / { \mathbb{p}}[\bm{y}\not\leq\bm{u}] ] for .we find = 0.34 ] using the values from the top row in table [ tab : fit ] ; the standard errors are obtained using the delta method . for ,the empirical probabilities are and respectively .plots of the empirical probabilities for a range of thresholds ( not shown ) confirm the chosen threshold value . a more formal way to assess the goodness - of - fit of the dependence structure of this model is by calculating the test statistic presented in ( * ?* corollary 2.5 ) .the test statistic proposed there is based on the difference between the value of and an empirical estimator thereof .it depends on a sequence where and as , which represents the threshold value used : a low value of corresponds to a high threshold .the test statistic converges to a chi - square distribution with degrees of freedom ; its quantile is equal to . computing the test statistic for , where we set again , we find the values , , , , and respectively , so that we can not reject the structured components model for any value of .we have outlined several new models for multivariate gp distributions , along with their uncensored and censored likelihoods .the models were applied to two types of data : stock price returns , where advantages in terms of consideration of a portfolio were demonstrated ; and rainfall data , where the context dictated that extremes of ordered cumulative data were the object of interest . methods to select a multivariate threshold and diagnostics for the fitted models have been considered and demonstrated through the applications . the threshold selection method suggested in section [ sec : diagnostics ] is relatively conservative in the sense that a multivariate gp model could hold with lower thresholds in some margins .better ways to select a multivariate threshold , ideally incorporating marginal and dependence considerations simultaneously , remains an interesting problem .a key issue that was highlighted in section [ sec : extremaldependence ] was the idea of asymptotic ( in)dependence , and how we can detect when multivariate gp distributions form an appropriate model .in particular , we have not dealt with the situation where the gp distribution places mass on a lower dimensional subspace , with the models outlined in section [ sec : examples ] placing no mass on lower - dimensional subspaces of the -dimensional space . 
in principle , parameters assigning mass to such lower - dimensional spaces could be introduced and estimated through censored likelihood , although it seems likely that information on these parameters would be weak , and alternative ways of handling this situation are much needed .if certain subsets of variables display asymptotic dependence , then , depending on questions of interest , it may be worth considering these separately .however , if all pairwise , then no asymptotic dependence exists amongst any variables , and no multivariate gp model would be approporiate , since the limiting model places mass only on one - dimensional lines . in this context , methods from , or may prove useful .here we detail forms of censored likelihoods for the models detailed in section [ sec : models ] . for simplicitythey are presented in standardized ( , ) form , i.e. , } h({\bm{x}};\bm{1},\bm{0 } ) { \ , \mathrm{d}}{\bm{x}}_c , \label{eq : gpcens}\end{aligned}\ ] ] for and corresponding to either or . the generalized form of a censored likelihood is easily obtained from as the support for each density is , and we let denote the cardinality of the set .[ [ case - f_bmt - f_bmv.-4 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + if all are equal to : [ [ case - f_bmu - f_bmv.-4 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + if all are equal to : [ [ case - f_bmt - f_bmv.-5 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to evaluate this , consider two cases : ( i ) ; and ( ii ) let for and . in case ( i ) , we have since on the range the term in is equal to . in case ( ii )this term will vary over that range , and one needs to split the integral as follows : an evaluation of each integral yields that is equal to \bigg\}\\ & \quad \mbox { } + \frac{\prod_{j\in c_{(k ) } } e^{(v_j+\beta_j)/\alpha_j } \prod_{j\in { d\setminus c } } ( 1/\alpha_j ) e^{(x_j+\beta_j)/\alpha_j } } { \sum_{j \in c_{(k ) } } 1/\alpha_j + \sum_{j \in { d\setminus c } } 1/\alpha_j } \\ &\qquad\qquad \mbox { } \times \left [ \left ( e^{\max_{j\in{d\setminus c}}(x_j+\beta_j ) } \right)^{-\sum_{j \in c_{(k ) } } 1/\alpha_j -\sum_{j \in { d\setminus c } } 1/\alpha_j}-\left(e^{v_{(k)}+\beta_{(k)}}\right)^{-\sum_{j \in c_{(k ) } } 1/\alpha_j -\sum_{j \in { d\setminus c } } 1/\alpha_j } \right]\end{aligned}\ ] ] with , i.e. , with the indices corresponding to the largest removed .[ [ case - f_bmu - f_bmv.-5 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + is found similarly by noting the relation between these two approaches .[ [ case - f_bmt - f_bmv.-6 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let denote the cumulative distribution function of a gamma random variable. then [ [ case - f_bmu - f_bmv.-6 ] ] case . 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + defining , we have [ [ case - f_bmt - f_bmv.-7 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for the gaussian model , using abbreviated notation , the key observation is } h({\bm{x } } ) { \ , \mathrm{d}}{\bm{x}}_c = h_{{d\setminus c}}({\bm{x}}_{{d\setminus c } } ) \int_{\times_{j\in c}(-\infty , v_j ] } \frac{h({\bm{x}})}{h_{{d\setminus c}}({\bm{x}}_{{d\setminus c } } ) } { \ , \mathrm{d}}{\bm{x}}_c , \label{eq : loggaussint}\end{aligned}\ ] ] and the ratio in the second integral can be written as a proper gaussian density function ( with parameters that depend on ) .the integrand is \right\}\end{gathered}\ ] ] with firstly note that as the maximum will not occur among the censored components . by a completion of the square it can be shown that expression is in fact equal to with and where ( respectively ) is a [ respectively matrix of 0s with 1s in the position , for the index in and , ( similarly for ) .therefore equation resolves as with the cdf of a -variate multivariate gaussian distribution with location vector and covariance matrix .[ [ case - f_bmu - f_bmv.-7 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + again this can be found similarly to the above noting the relation between these two forms ; see also .recall that since this is a model on the random vector , we need to differentiate between and . we present the case only , since the case is very similar .moreover , we set as in section [ sec : rainfall ] .[ [ case - bmgamma - bm0.-2 ] ] case .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the censored likelihood has an analytical expression but is tedious to write down .note that , since the density is non - zero only for , we censor in components if .if , then for and , where .expressions for follow naturally by repeated integration of the above result .pointwise confidence intervals are obtained by a transformation of the beta distributed order statistics of a uniform distribution.,scaledwidth=99.0% ] pointwise confidence intervals are obtained by a transformation of the beta distributed order statistics of a uniform distribution ., title="fig:",scaledwidth=35.0% ] pointwise confidence intervals are obtained by a transformation of the beta distributed order statistics of a uniform distribution ., title="fig:",scaledwidth=35.0% ] + for the three - dimensional structured components model fitted in section 6.2 , the dependence measures , and are authors gratefully acknowledge the financial support from the following agencies and projects : the knut and alice wallenberg foundation ( a. kiriliouk , h. rootzn , and j. wadsworth ) , the `` fonds de la recherche scientifique - fnrs '' ( a. kiriliouk ) the contract `` projet dactions de recherche concertes '' no .12/17 - 045 of the `` communaut franaise de belgique '' ( a. kiriliouk and j. segers ) , iap research network grant p7/06 of the belgian government ( j. segers ) , and epsrc fellowship grant ep / p002838/1 ( j. 
wadsworth ) .we thank the abisko scientific research station for providing access to their rainfall data , which we used in section [ sec : rainfall ] .these data may be obtained by following the instructions at http://polar.se/en/abisko-naturvetenskapliga-station/vaderdat .data for section [ sec : banks ] can be downloaded from http://finance.yahoo.com .programs for the analyses in sections [ sec : banks ] and [ sec : rainfall ] are available from the authors on request .
|
the multivariate generalized pareto distribution arises as the limit of a suitably normalized vector conditioned upon at least one component of that vector being extreme . statistical modelling using multivariate generalized pareto distributions constitutes the multivariate analogue of peaks over thresholds modelling with the univariate generalized pareto distribution . we introduce a construction device which allows us to develop a variety of new and existing parametric tail dependence models . a censored likelihood procedure is proposed to make inference on these models , together with a threshold selection procedure and several goodness - of - fit diagnostics . we illustrate our methods on two data applications , one concerning the financial risk stemming from the stock prices of four large banks in the united kingdom , and one aiming at estimating the yearly probability of a rainfall which could cause a landslide in northern sweden .
|
topological quantum error - correcting codes ( tqecc ) are defined on two - dimensional lattices of qubits with geometrically local parity checks .thus , they are a form of quantum ldpc codes with an additional locality requirement imposed to their tanner graph .to appreciate the importance of this feature , recall that in quantum mechanics , measuring a qubit alters its state . to detect errors ,it is not possible to measure each qubit separately and verify that they satisfy all the check conditions like it is done classically without destroying the encoded information . instead, it is necessary to perform a collective measurement on all the qubits involved in a given check , which requires having the qubits physically interact with each other , or with a mediator system .thus , having local checks is an extremely important feature that explains together with the possibility of implementing some gates topologically the growing interest in topological quantum codes .the prominent example of tqecc is kitaev s toric code family that we define below . for these codes ,defined on a toric qubit lattice , errors in the same homology class have the same effect on the encoded information .thus , maximum - likelihood decoding consists in identifying the lowest weight homology class of equivalent errors .previously , a decoding algorithm based on perfect matching was proposed which identifies the lowest weight error , ignoring the equivalence relation set by homology .the complexity of this algorithm is quite prohibitive , where is the linear size of the lattice .other topological codes had no known efficient decoding algorithm . in presented a new decoding algorithm for tqecc .the essential idea of this algorithm borrows from the renormalization group method of statistical physics .intuitively , we can think of a tqecc on a lattice of linear size as consisting of levels of concatenation of a tqecc on a lattice of linear size 2 .concatenated quantum codes can be decoded efficiently by a recursive algorithm . starting from an error model characterizing the channel , each lattice is soft - decoded , producing an effective renormalized " error model on its logical qubits .this error model is passed to the next level of concatenation , and we recurse .the recursion ends after iterations , where it outputs a probability vector describing the encoded information .each round involves decoding at most constant size tqecc , so the overall complexity is and can easily be parallelized for a total runtime of .because tqecc are not truly concatenated codes , the intuition explained above can not be turned into a rigorous method , and some approximations are necessary . in this paper , we give a detailed presentation of the approximation techniques used in ref . and present some results obtained from our method .the state of a collection of qubits can be specified by a vector in the complex hilbert space .each vector space in this tensor decomposition is associated to a qubit .a code on qubits is a subspace of . for kitaev s code like all stabilizer codes this subspace is specified by a set of mutually commuting operators that play a role similar to the rows of a parity - check matrix . to define these operators , it is convenient to display the qubits on a regular square lattice with periodic boundary conditions , i.e. 
with the topology of a torus .there is one qubit on each _ edge _ of the lattice , for a total of qubits for a lattice .for each site ( vertex ) of the lattice , we define a site operator and for each plaquette ( site of the dual lattice ) , we define a plaquette operator .these definitions use the pauli matrices and we use to denote the pauli operator acting on the qubit located on edge , i.e. where appears at position and denotes the identity matrix . the notation denotes the set of edges adjacent to site , and similarly for . because , the pauli operators and , form a group under multiplication , the pauli group of qubits .every element in this group squares to the identity ( modulo a phase , that we henceforth omit ) .these generators obey canonical commutation relations : all pairs of generators commute , except and that anti - commute .it follows from these relations that the and all mutually commute .the commutation of the among themselves is trivial since they are all made up of matrices , and similarly for the commutation of the among themselves .the commutation of a with a follows from the fact that a site and a plaquette either have no common edge , or they have two .both cases imply an _ even _ number of anti - commuting operators , so they commute .the code is now defined as the subspace this is an eigenvalue equation , and because the and mutually commute , they indeed share common eigenvectors . in this definition of the code , the operators and play a role analogous to the rows of a parity check matrix in a classical linear code in the sense that they impose ( local ) constraints on the codewords .we will consider errors from the pauli group .in particular , we will be interested in the _ depolarizing channel_the natural generalization of the binary symmetric channel to the quantum setting for which each qubit is left unchanged with probability , or is affected by , or each with probability . in other words , all errors from the -qubit pauli group ( modulo phase ) are permitted , and the probability of a given error is where the weight of , denoted , is the number of tensor factors on which it differs from the identity ( generalizing the hamming weight ) .when an error occurs on a code state , the resulting state will in general no longer be a eigenstate of the star and plaquette operators : is a eigenstate of when and commute , and it is a eigenstate when they anti - commute , and similarly for . by measuring each star and plaquette operators ,we obtain a list of and forming the error syndrome . given these , error correction can proceed by identifying the most likely error compatible with the syndrome , where , and applying this error again to correct it ( since ) . as we will now explain ,finding the most likely error is not the optimal decoding strategy in quantum mechanics .the property that and leave each code state invariant , c.f . ,is inherited by the abelian group generated by them , called the _ stabilizer group _ .this induces an equivalence relation between operators on . 
if two operators and differ by an element of , for instance if , then they will have an identical effect on by definition .notice that the plaquette operators are elementary loops of on the lattice , so their products generate the group of homologically trivial loops on the torus , see .likewise , the star operators generate the group of homologically trivial loops of on the dual lattice .we conclude that two operators and have the same effect on the code if contains only homologically trivial loops , in which case we say that and are homologically equivalent .operator on qubit associated to the edge , and blue line indicates a .all -type operators are strings on the lattice while -type operators are strings on the dual lattice .trivial loops on the a ) lattice and b ) dual lattice corresponding to and respectively .loops with non - trivial homology c ) on the lattice and d ) on the dual lattice corresponding to the 4 generators of .e ) a trivial loop obtained by product of elementary trivial loops . ]on the other hand , there are 4 independent operators that map to itself ( i.e. commute with all star and plaquette operators ) but do not belong to ; they correspond to the homologically non - trivial loops of and around the hole and the body of the torus , see .they generate a group with operators called the _ logical pauli operators _ , and are associated with two encoded qubits . because and are equivalent for , the choice of the 4 generators of depicted on is to some extent arbitrary , only the homology class of the operators matters .thus optimal decoding consists in identifying the most likely class of homologically equivalent errors , in other words the most likely logical operator where is any reference error compatible with the error syndrome .error correction is completed by applying .although is expressed in terms of a specific choice of generators of and a specific reference error that are not topological invariants , the sum over makes the homology class of the correction operator independent of these choices .the optimal recovery scheme described above is in general very hard to achieve computationally . in particular , summing over the entire group of loops is a formidable task . to circumvent this difficulty , we will attempt to divide and conquer " using an approach inspired by the renormalization group method of statistical mechanics .we break the lattice into overlapping unit cells " as illustrated in .each of these cells contains 12 edges , and hence 12 qubits .the choice of this unit cell is somewhat arbitrary , but we will stick to this particular example for concreteness .this cell encloses 6 stabilizer generators in total , three plaquettes and three sites ( shown on the first two rows of ) .we can use these stabilizers to define a ( small ) error - correcting code a surface code , open boundary version of kitaev s toric code . 
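as a concrete illustration of these operators — ours, not the authors' implementation — the supports of the site and plaquette operators on an l x l periodic lattice, and the syndrome produced by a random pauli error, can be sketched as follows. the edge indexing is an arbitrary convention of ours, and the independent x / z flips are a simplification of the depolarizing channel.

```python
import numpy as np
from itertools import product

L = 4                       # linear lattice size; 2 * L**2 qubits, one per edge

def h(i, j):                # index of the horizontal edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L))

def v(i, j):                # index of the vertical edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L)) + 1

def star(i, j):
    """Edges adjacent to vertex (i, j): support of the X-type site operator."""
    return {h(i, j), v(i, j), h(i, j - 1), v(i - 1, j)}

def plaquette(i, j):
    """Edges bounding the face with corner (i, j): support of the Z-type operator."""
    return {h(i, j), v(i, j), h(i + 1, j), v(i, j + 1)}

# Sanity check: every star and plaquette share an even number of edges,
# so the corresponding X- and Z-type operators commute.
assert all(len(star(a, b) & plaquette(c, d)) % 2 == 0
           for a, b, c, d in product(range(L), repeat=4))

# A Pauli error is stored as two binary vectors (X part, Z part); Y = X and Z.
# Independent flips are used here purely for illustration.
rng = np.random.default_rng(3)
x_err = rng.random(2 * L * L) < 0.05
z_err = rng.random(2 * L * L) < 0.05

def syndrome():
    """Plaquettes detect X errors, stars detect Z errors (parity of overlap)."""
    plaq = [[sum(x_err[e] for e in plaquette(i, j)) % 2 for j in range(L)] for i in range(L)]
    site = [[sum(z_err[e] for e in star(i, j)) % 2 for j in range(L)] for i in range(L)]
    return np.array(plaq), np.array(site)

plaq_syn, site_syn = syndrome()
print(plaq_syn, site_syn, sep="\n")
```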
to describe the algorithm ,it is convenient to choose a set of generators for the pauli group on the lattice .our choice is defined in .note that these operators obey canonical commutation relations , the same as the one of and : any two of these operators commute except for the two illustrated on the same unit cell that anti - commute .we also group these operators into three categories that play different roles in our algorithm , as we now explain .the stabilizer generators and play the same double role as explained in the previous section : they are measured to read - out the error syndrome , and they induce an equivalence relation between operators .they generate the stabilizer group containing elements .their conjugate partners and , that we call pure errors " , are used to construct the reference error appearing in in a systematic way .indeed , it follows from the canonical commutation relations that the operator is an error with syndrome .the four logical operators are representative of the homologically non - trivial loops ( although they do nt look like loops , they are strings with no ends on the lattice , which is the definition of a loop ) . they generate the logical group containing elements . in our recursive decoding algorithm, these four generators will be mapped onto single qubit operators on a renormalized lattice of half the linear size .hence , our goal is to assign probabilities to these logical operators , that will serve as an effective channel for the following recursion .finally , the four pairs of edge operators are needed to complete the set of generators , and correspond roughly ( but see below ) to qubits that are shared between neighboring unit cells .they generate the edge group containing elements .thus , they will be used to glue " neighboring unit cells into a renormalized unit cell used in the following recursion of the algorithm . with these definitions in place, we can describe the elementary step of our renormalization procedure on a given cell , which consists in computing a conditional probability distribution .we are given a list of syndromes associated to the six stabilizer generators on the unit cell .we collectively denote these syndromes by .we are also given a probability of errors contained on the cell .in the first step of the recursion , this probability is set by the channel , e.g. , and in the step it is given by the output of step . from these ,we compute the joint probability on the logical and edge group elements conditioned on the syndrome where is the reference error defined at , and is a normalization factor . note that this equation is very similar to the definition of the maximum - likelihood decoder , except that we must now include edge operators and we do not commit to a hard decision but instead keep the entire probability vector .this procedure is repeated for all the unit cells of the lattice ( this can be done in parallel , so in a constant amount of time ) .we will be using different marginals of this probability .for each cell , we can define the marginal probability on the logical and edge operators by and respectively .notice that , up to multiplication by and , the edge operators are supported only on those qubits that are shared between neighboring unit cells . for instance , the product is the operator acting on the bottom shared qubit , see .moreover , because the pure error component of the error is determined by , c.f . 
, we can directly interpret the conditional probability on the edge operators as a probability distribution for pauli operators on the shared qubits .this feature will be important when we describe how we glue unit cells together using belief propagation .each of these marginal probabilities can be broken down even further .remember that the logical operators are generated by two pairs of canonically conjugated operators and , , representing the two logical qubits , c.f . .thus , every operator in can be written as with .we can consider the marginal on one of the two logical qubits by summing over the value of the other variable , e.g. .likewise , the edge operators are generated by four canonical pairs of edge operators and , each encoding an edge qubit , c.f . .we can express any edge operator as with , and consider marginals such as .following the discussion above , these marginal probabilities can be directly interpreted as probability of the pauli operators acting on the shared qubits .for instance , is the error probability of the bottom shared qubit conditioned on the error syndrome of the unit cell . for this larger unit cellis obtained by combining the underlying logical operator distributions conditioned on the error syndromes of each cell .when both logical qubits of a unit cell participate in the construction of the larger cell , the induced error model on these qubits is correlated as illustrated on the figure . when only one logical operator participates in the construction , we consider the marginal distribution on that logical qubit . ] to complete one step of the renormalization procedure , we join the logical operators from 8 bare " unit cells into a larger renormalized unit cell as shown on . in a first approximation, we can use the probabilities from each bare unit cell to assign an effective error model to the renormalized unit cell .note that the conditional probability defined above is a joint probability distribution on two logical qubits .as a consequence , the renormalized error model can have correlated errors between neighboring qubits .this is not a problem when the two qubits appear in the same renormalized unit cell .we can take these correlations into account in our definition of the renormalized error model .however , when these two correlated qubits belong to two distinct renormalized unit cells , it is not possible to keep track of these correlations efficiently . in those cases ,we use the appropriate marginal distributions on each qubit to define the renormalized error model , see .in other words , we replace by .ignoring some of the correlations between the renormalized qubits is an approximation used to make our scheme efficient .the same procedure can now be executed on each renormalized unit cell .we can compute the various probabilities conditioned on the error syndromes contained in each cell . note that the renormalized star and plaquette operators each act on 8 bare qubits , spoiling the locality feature of kitaev s toric code . however , this renormalization of the star and plaquette operators is only for the purpose of presenting the decoding algorithm .the error syndrome associated to the renormalized stabilizer generators can be obtained by measuring the original 4-qubit star and plaquette operators and multiplying their outcomes as shown in . 
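the observation above — that the outcome of a renormalized check is simply the product of the outcomes of the four checks it contains — amounts to a coarse - graining of the syndrome array, which might be sketched as follows (array layout and names are ours).

```python
import numpy as np

def coarse_grain(syndrome):
    """Combine each 2x2 block of +/-1 stabilizer outcomes into one renormalized outcome.

    syndrome : (L, L) array with entries +1 or -1 (L even).
    The renormalized outcome of a block is the product of its four entries,
    mirroring the fact that a renormalized plaquette (or star) operator equals
    the product of the four operators it contains.
    """
    L = syndrome.shape[0]
    blocks = syndrome.reshape(L // 2, 2, L // 2, 2)
    return blocks.prod(axis=(1, 3))

rng = np.random.default_rng(5)
syn = rng.choice([1, -1], size=(8, 8))
print(coarse_grain(syn).shape)                 # (4, 4)
# Consistency: the product of all renormalized outcomes equals the product of
# all original outcomes, as it must for a pure relabelling of checks.
print(coarse_grain(syn).prod() == syn.prod())
```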
at the last iteration of this renormalization procedure, we obtain a probability distribution over the logical operators of the encoded qubits , completing the soft decoding procedure .the operator with the largest probability can be selected to implement the correction .associated to a renormalized plaquette operator is equal to the binary product of the four smaller plaquette operators contained in it .this follows from the fact that all these operators commute , and the product of the four operators is equal to the renormalized operator .the same holds for the star operators . ]the unit cells used by the renormalization decoding algorithm overlap in the sense that some qubits are shared between two unit cells . without these overlaps , each unit cell would contain only 2 complete stabilizer generators ( and , see ) instead of 6 . as a consequence ,the number of variables of the renormalized error model would increase by a constant factor at each renormalization step , leading to an exponential blowup .thus , these shared qubits appear to be necessary . on the other hand , the presence of shared qubits leads to the main approximation of our decoding scheme : a qubit that is shared between two unit cells is treated independently by each one of them as if it were two independent random variables .this can lead to some inconsistencies as illustrated in . as illustrated , and all the other error syndromes are . for the top cell ,three errors are equally likely to have caused this syndrome : an error on the qubit to the left , the bottom , or the right of the plaquette operator .similarly for the bottom cell , the two dominating errors that could have caused the syndrome are an error to the left and above the plaquette .but in fact , the bottom qubit of the top cell and the top qubit of the bottom cell are actually the same qubit ; a shared qubit .this qubit will be assigned a probability roughly of having an error by the top cell , while the bottom cell will assign the probability , which is inconsistent . on the other hand , because it is doubly suspicious , the probability of this qubit having suffered an error should be dominating , but decoding each cell independently fails to recognize this . ] to improve on this approximation , we allow unit cells to exchange messages . the purpose of these messages is to update the error model of the shared qubits or equivalently of the edge operators conditioned on the syndromes of the immediate neighboring cells . in short, each cell computes the marginal conditional probability of each of its shared qubits ( or edge operators , as explained above ) , and passes the probability associated to a given edge qubit to the cell sharing it .all unit cells can perform this in parallel .then , each cell reweighs the prior error model of its edge qubits by the incoming messages . iterating this procedure leads to a probability that is conditionned on the syndromes of an extended neighborhood .the procedure can be formalized as a belief propagation algorithm .let denote the marginal probability assigned to edge qubit ( obtained from the channel model or the output of the previous renormalization step ) . 
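in code, the reweighting step just described — multiplying the prior error model of a shared qubit by the probability vector received from the neighbouring cell and renormalizing — might look as follows. the four entries index the single - qubit pauli classes I, X, Y, Z, and the numbers are purely illustrative (compare the "doubly suspicious" qubit of the example above).

```python
import numpy as np

def reweight(prior, incoming):
    """Update the error model of one shared qubit.

    prior, incoming : length-4 probability vectors over the single-qubit Pauli
    operators (I, X, Y, Z); `incoming` is the message produced by the
    neighbouring cell sharing this qubit. The two beliefs are combined
    multiplicatively and renormalized, as in standard belief propagation.
    """
    combined = np.asarray(prior) * np.asarray(incoming)
    return combined / combined.sum()

# Hypothetical numbers: a depolarizing-style prior, and a neighbour that
# strongly suspects an X error on the shared qubit.
prior = np.array([0.91, 0.03, 0.03, 0.03])
message = np.array([0.10, 0.80, 0.05, 0.05])
print(reweight(prior, message))
```

the precise form of the messages feeding into such an update is specified next.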
at round , each cell outputs messages , one for each of its shared qubit , that correspond to some probability vector on .at the first round , these messages are initialized to the uniform distribution .the cell s outgoing messages at time become its neighbors incoming messages at time : if cells and share qubit , .the message update rule is given by where notation such as denotes the stabilizer group generated by the 6 generators enclosed in , the notation stands for the set of edge qubits contained in , and finally denotes all the edge qubits in except .the messages roughly converge to steady distributions after a few iterations of this procedure ( we typically use 3 , since the graph contains 4-cycles ) .once convergence is reached , the renormalization algorithm is executed on each cell as explained in the previous section , but using the incoming messages to the cell to reweigh the prior probabilities on the shared qubits .we have assessed the performance of our decoder using monte carlo sampling .figure [ fig : results ] summarizes our results .it shows a clear threshold near the depolarizing probability , very close to what is achieved by the perfect matching decoder of .thus , we obtain an exponential gain in decoding time without significant performance loss .many modifications can be made to the basic decoding scheme presented here that allow tradeoffs between decoding complexity and error suppression .some of these extensions were presented in , and in particular they achieved a depolarizing threshold higher than the perfect matching algorithm .we have also adapted our decoder to other noise models , such as the erasure channel , and other topological codes , in particular the color codes of . _acknowledgements_we thank jim harrington , hctor bombn , and sergey bravyi for useful conversations .this work was partially funded by nserc , fqrnt , and mitacs .computational resources were provided by the rseau qubcois de calcul de haute performance ( rqchp ) and compute canada .
|
topological quantum error - correcting codes are defined by geometrically local checks on a two - dimensional lattice of quantum bits ( qubits ) , making them particularly well suited for fault - tolerant quantum information processing . here , we present a decoding algorithm for topological codes that is faster than previously known algorithms and applies to a wider class of topological codes . our algorithm makes use of two methods inspired from statistical physics : renormalization groups and mean - field approximations . first , the topological code is approximated by a concatenated block code that can be efficiently decoded . to improve this approximation , additional consistency conditions are imposed between the blocks , and are solved by a belief propagation algorithm .
|
optimization problems related to strings such as protein or dna sequences are very common in bioinformatics .examples include string consensus problems such as the far - from most string problem , the longest common subsequence problem and its variants , and alignment problems .these problems are often computationally very hard , if not even -hard . in this work we deal with the _ minimum common string partition _( mcsp ) problem , which can be described as follows .we are given two related input strings that have to be partitioned each into the same collection of substrings .the size of the collection is subject to minimization .a formal description of the problem will be provided in section [ sec : problem - description ] .the mcsp problem has applications , for example , in the bioinformatics field .chen et al . point out that the mcsp problem is closely related to the problem of sorting by reversals with duplicates , a key problem in genome rearrangement . in this paperwe introduce the first integer linear program ( ilp ) for solving the mcsp problem .an experimental evaluation on problem instances from the related literature shows that this ilp can be efficiently solved , for example , by using any version of ibm ilog cplex .however , a study on new instances of larger size demonstrates the limitations of the model .therefore , we additionally introduce a deterministic 2-phase heuristic which is strongly based on the original ilp .the experimental evaluation shows that the heuristic is applicable to larger problem instances than the original ilp .moreover , it is shown that the heuristic outperforms competitor algorithms from the related literature on known problem instances .the mcsp problem can technically be described as follows .given are two input strings and , both of length over a finite alphabet .these two strings are required to be _ related _ , which means that each letter appears the same number of times in each of them .note that this definition implies that and have the same length .a valid solution to the mcsp problem is obtained by partitioning into a set of non - overlapping substrings , and into a set of non - overlapping substrings , such that .moreover , we are interested in finding a valid solution such that is minimal .consider the following example .given are dna sequences and .obviously , and are related because * a * and appear twice in both input strings , while * c * and * t * appear once .a trivial valid solution can be obtained by partitioning both strings into substrings of length 1 , that is , .the objective function value of this solution is 6 .however , the optimal solution , with objective function value 3 , is .the mcsp problem has been introduced by chen et al . due to its relation to genome rearrangement .more specifically , it has applications in biological questions such as : may a given dna string possibly be obtained by rearrangements of another dna string ?the general problem has been shown to be -hard even in very restrictive cases .other papers concerning problem hardness consider , for example , the -mcsp problem , which is the version of the mcsp problem in which each letter occurs at most times in each input string .the 2-mcsp problem was shown to be apx - hard in . when the input strings are over an alphabet of size , the corresponding problem is denoted as mcsp .jiang et al . 
proved that the decision version of the mcsp problem is -complete when .the mcsp has been considered quite extensively by researchers dealing with the approximability of the problem .cormode and muthukrishnan , for example , proposed an -approximation for the _ edit distance with moves _ problem , which is a more general case of the mcsp problem .shapira and storer extended on this result .other approximation approaches for the mcsp problem have been proposed in . in this context ,chrobak et al . studied a simple greedy approach for the mcsp problem , showing that the approximation ratio concerning the 2-mcsp problem is 3 , and for the 4-mcsp problem the approximation ratio is . in the case of the general mcsp problem , the approximation ratio is between and , assuming that the input strings use an alphabet of size .later kaplan and shafir raised the lower bound to .kolman proposed a modified version of the simple greedy algorithm with an approximation ratio of for the -mcsp .recently , goldstein and lewenstein proposed a greedy algorithm for the mcsp problem that runs in time ( see ) .he introduced a greedy algorithm with the aim of obtaining better average results .damaschke was the first one to study the fixed - parameter tractability ( fpt ) of the problem .later , jiang et al . showed that both the -mcsp and mcsp problems admit fpt algorithms when and are constant parameters .finally , fu et al . proposed a time algorithm for the general case and an time algorithm applicable under some constraints . to our knowledge ,the only metaheuristic approaches that have been proposed in the related literature for the mcsp problem are ( 1 ) the - ant system by ferdous and sohel and ( 2 ) the probabilistic tree search algorithm by blum et al .both works applied their algorithm to a range of artificial and real dna instances from .the remaining part of the paper is organized as follows . in section [ sec : mip ] , the ilp model for solving the mcsp is outlined . moreover ,an experimental evaluation is provided .the deterministic heuristic , together with an experimental evaluation , is described in section [ sec : heuristic ] .finally , in section [ sec : conclusions ] we provide conclusions and an outlook to future work .in the following we present the first ilp model for solving the mcsp . for this ,the definitions provided in the following are required . note that an illustrative example is provided in section [ sec : example ] . henceforth , a _ common block _ of input strings and is denoted as a triple where is a string which can be found starting at position in string and starting at position in string .moreover , let be the ( ordered ) set of all possible common blocks of and .is ordered is of no importance . ] given the definition of , any valid solution to the mcsp problem is a subset of is , that : 1 . , that is , the sum of the length of the strings corresponding to the common blocks in is equal to the length of the input strings . 2 .for any two common blocks it holds that their corresponding strings neither overlap in nor in . 
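the two conditions above are easy to check computationally . purely as an illustration ( the code and the toy strings are ours , not taken from the paper ) , the following python sketch enumerates the complete set of common blocks of two related strings and tests whether a given subset of blocks is a valid solution :

```python
def common_blocks(s1, s2):
    """All triples (t, k1, k2): substring t starts at position k1 in s1 and k2 in s2."""
    n = len(s1)
    blocks = []
    for k1 in range(n):
        for k2 in range(len(s2)):
            for length in range(1, n - max(k1, k2) + 1):
                t = s1[k1:k1 + length]
                if t == s2[k2:k2 + length]:
                    blocks.append((t, k1, k2))
                else:
                    break  # a longer substring cannot match once a prefix fails
    return blocks

def is_valid_solution(solution, s1, s2):
    """Condition 1: block lengths sum to n; condition 2: no overlaps in s1 or in s2."""
    n = len(s1)
    if sum(len(t) for t, _, _ in solution) != n:
        return False
    covered1, covered2 = set(), set()
    for t, k1, k2 in solution:
        pos1 = set(range(k1, k1 + len(t)))
        pos2 = set(range(k2, k2 + len(t)))
        if covered1 & pos1 or covered2 & pos2:
            return False
        covered1 |= pos1
        covered2 |= pos2
    return True

# toy example: the trivial solution cuts both strings into single letters
s1, s2 = "AAB", "ABA"
trivial = [("A", 0, 0), ("A", 1, 2), ("B", 2, 1)]
print(is_valid_solution(trivial, s1, s2))  # True, with objective value 3
```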
moreover , any ( valid ) partial solution is a subset of fulfilling the following conditions : ( 1 ) and ( 2 ) for any two common blocks it holds that their corresponding strings neither overlap in nor in . note that any valid partial solution can be extended to be a valid solution . furthermore , given a partial solution , set denotes the set of common blocks that may be used in order to extend such that the result is again a valid ( partial ) solution . first , two binary matrices and are defined as follows . in both matrices , row corresponds to common block . moreover , a column corresponds to position in input string , respectively . in general , the entries of matrix are set to zero . however , in each row , the positions that string ( of common block ) occupies in input string are set to one . correspondingly , the entries of matrix are set to zero , apart from the fact that in each row the positions occupied by string in input string are set to one . henceforth , the position of a matrix is denoted by . finally , we introduce for each common block a binary variable . with these definitions we can express the mcsp in the form of the following integer linear program , henceforth referred to as :
\begin{align}
\min \quad & \sum_{i=1}^{m} x_i \nonumber \\
\text{subject to} \quad & \sum_{i=1}^{m} |t_i| \, x_i = n \label{eqn:const1} \\
& \sum_{i=1}^{m} M1_{i,j} \, x_i = 1 \qquad j = 1,\ldots,n \label{eqn:const2} \\
& \sum_{i=1}^{m} M2_{i,j} \, x_i = 1 \qquad j = 1,\ldots,n \label{eqn:const3} \\
& x_i \in \{0,1\} \qquad i = 1,\ldots,m \nonumber
\end{align}
hereby , the objective function minimizes the number of selected common blocks . constraint ( [ eqn : const1 ] ) ensures that the sum of the length of the strings corresponding to the selected common blocks is equal to . finally , constraints ( [ eqn : const2 ] ) make sure that the strings corresponding to the selected common blocks do not overlap in input string , while constraints ( [ eqn : const3 ] ) make sure that the strings corresponding to the selected common blocks do not overlap in input string . as an example , consider the small problem instance from section [ sec : problem - description ] . the complete set of common blocks ( ) as induced by input strings and consists of 14 common blocks . given this set , matrices and can be constructed as described above . the optimal solution to this instance is . it can easily be verified that this solution respects constraints ( 2 - 4 ) of the ilp model . in the following we will provide an experimental evaluation of model . the model was implemented in ansi c++ using gcc 4.7.3 for compiling the software . moreover , the model was solved with ibm ilog cplex v12.1 . the experimental results that we outline in the following were obtained on a cluster of pcs with `` intel(r ) xeon(r ) cpu 5130 '' cpus with 4 cores at 2000 mhz and 4 gigabyte of ram .
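the experiments reported below rely on cplex called from c++ ; purely as an illustration of how the model can be assembled , the following sketch uses the open - source pulp modeler together with the ` common_blocks ` helper sketched earlier ( both are our own constructions , not the authors' implementation ) :

```python
import pulp

def solve_mcsp(s1, s2):
    n = len(s1)
    blocks = common_blocks(s1, s2)   # list of triples (t, k1, k2), see the earlier sketch
    m = len(blocks)

    prob = pulp.LpProblem("MCSP", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(m)]

    # objective: minimize the number of selected common blocks
    prob += pulp.lpSum(x)

    # constraint (const1): lengths of the selected blocks sum to n
    prob += pulp.lpSum(len(blocks[i][0]) * x[i] for i in range(m)) == n

    # constraints (const2)/(const3): each position of s1 and of s2 is covered
    # by exactly one selected block (these correspond to the rows of M1 and M2)
    for j in range(n):
        prob += pulp.lpSum(x[i] for i, (t, k1, _) in enumerate(blocks)
                           if k1 <= j < k1 + len(t)) == 1
        prob += pulp.lpSum(x[i] for i, (t, _, k2) in enumerate(blocks)
                           if k2 <= j < k2 + len(t)) == 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [blocks[i] for i in range(m) if x[i].value() > 0.5]

print(solve_mcsp("AAB", "ABA"))  # a partition of size 2, e.g. [('A', 0, 2), ('AB', 1, 0)]
```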
for testing model chose the same set of benchmark instances that was used by ferdous and sohel in for the experimental evaluation of their ant colony optimization approach .this set contains , in total , 30 artificial instances and 15 real - life instances consisting of dna sequences .remember , in this context , that each problem instance consists of two related input strings .moreover , the benchmark set consists of four subsets of instances .the first subset ( henceforth labelled group1 ) consists of 10 artificial instances in which the input strings are maximally of length 200 .the second set ( group2 ) consists of 10 artificial instances with input string lengths between 201 and 400 . in the third set ( group3 )the input strings of the 10 artificial instances have lengths between 401 and 600 . finally , the fourth set ( real ) consists of 15 real - life instances of various lengths .the results are shown in tables [ tab : results : group1]-[tab : results : real ] , in terms of one table per instance set .the structure of these tables is as follows .the first column provides the instance identifiers .the second column contains the results of the greedy algorithm from ( results were taken from ) .the third column provides the value of the best solution found in four independent runs per problem instance ( with a cpu time limit of 7200 seconds per run ) by the aco approach by ferdous and sohel .the fourth column provides the value of the best solution found in 10 independent runs per problem instance ( with a cpu time limit of 1000 seconds per run ) by the probabilistic tree search algorithm ( henceforth labelled tresea ) by blum et al .tresea was run on the same machines as the ones used for the current work .finally , the last four table columns are dedicated to the presentation of the results provided by solving model .the first one of these columns provides the value of the best solution found within 3600 cpu seconds . in casethe optimality of the corresponding solution was proved by cplex , the value is marked by an asterix . the second column dedicated to provides the computation time ( in seconds ) . 
in case of having solved the corresponding problem to optimality , this column only displays one value indicating the time needed by cplex to solve the problem .otherwise , this column provides two values in the form x / y , where x corresponds to the time at which cplex was able to find the first valid solution , and y corresponds to the time at which cplex found the best solution within 3600 cpu seconds .the third one of the columns dedicated to shows the optimality gap , which refers to the gap between the value of the best valid solution and the current lower bound at the time of stopping a run .finally , the last column indicates the size of set , that is , the size of the complete set of common blocks .note that this value corresponds to the number of variables used by .the best result ( among all algorithms ) for each problem instance is marked by a grey background , and the last row of each table provides averages over the whole table .+ the following conclusions can be drawn when analyzing the results .first , cplex is able to solve all instances of group1 to optimality .this is done , on average , in about 13 seconds .moreover , none of the existing algorithms was able to find any of these optimal solutions .second , cplex was also able to find new best - known solutions for all remaining 35 problem instances , even though it was not able to prove optimality within 3600 cpu seconds , which is indicated by the positive optimality gaps .an exception is instance 1 of set real which also could be solved to optimality .third , the improvements over the competitor algorithms obtained by solving with cplex are remarkable . in particular , the average improvement ( in percent ) over tresea ,the best competitor from the literature , is in the case of group1 , in the case of group2 , in the case of group3 , and in the case of real . + in order to study the limits of solving with cplex we randomly generated larger dna instances . in particular, we generated one random instance for each input string size from .cplex was stopped when at least 3600 cpu seconds had passed and at least one feasible solution had been found .however , if after 12 cpu hours still no feasible solution was found , the execution was stopped as well .the results are shown in table [ tab : results : mip : large ] .the first column of this table provides the length of the corresponding random instance .the remaining four columns contain the same information as already explained in the context of tables [ tab : results : group1]-[tab : results : real ] , just that column * time ( s ) * simply provides the computation time ( in seconds ) at which the best solution was found .analyzing the results we can observe that the application of cplex to quickly becomes unpractical with growing input string size .for example , the first valid solution for the instance with string length 1400 was found after 20616 seconds . 
concerning the largest problem instance ,no valid solution was found within 12 cpu hours .as shown at the end of the previous section , the application of cplex to reaches its limits starting from an input string size of about 1200 .however , if it were possible to considerably reduce the size of the set of common blocks ( ) , mathematical programming might still be an option to obtain good ( heuristic ) solutions .with this idea in mind we studied the distribution of the lengths of the strings of the common blocks in for all 45 problem instances .this distribution is shown averaged over the instances of each of the four instance sets in figure [ fig : idea : heuristic ] . analyzing these distributionsit can be observed , first of all , that the distribution does not seem to depend on instance size .however , the important aspect to observe is that around of all the common blocks contain strings of length .moreover , only a very small portion of these common blocks will form part of an optimal solution . in comparison , it is reasonable to assume that a much larger percentage of the blocks corresponding to large strings will form part of an optimal solution .these observations gave rise to the heuristic which is outlined in the following .the proposed heuristic works in two phases . in the first phase , a subset of ( the complete set of common blocks ) must be chosen .for this purpose , let ( where ) denote the subset of that contains all common blocks from with , that is , all blocks whose corresponding string is longer or equal than . note , in this context , that .moreover , note that .let be the smallest value for such that .observe that only contains the common blocks with the longest strings .having chosen a specific value for from ] .in fact , we applied the heuristic to each of the 45 problem instances from sets group1 , group2 , group3 , and real , with all possible values for . in order not to spend too much computation time the following stopping criterionwas used for each call to cplex concerning any of the two involved ilp models .cplex was stopped ( 1 ) in case a provenly optimal solution was obtained or ( 2 ) in case at least 50 cpu seconds were spent and the first valid solution was obtained .the overall result of the heuristic for a specific problem instance is the value of the best solution found for any value of .moreover , as computation time we provide the sum of the computation times spend for all applications for different value of .+ the results are shown in table [ tab : results : heuristic ] , which contains one subtable for each of the four instance sets .each subtable has the following format .the first column provides the instance identifier .the second column contains the value of the best solution found in the literature .finally , the last two table columns present the results of our heuristic .the first one of these columns contains the value of the best solution generated by the heuristic , while the second column provides the total computation time ( in seconds ) .the last row of each subtable presents averages over the whole subtable .moreover , the best result for each instance is marked by a grey background , and those cases in which the result of applying cplex to could be matched are marked by a `` + '' symbol .the results allow to make the following observations .first , our heuristic is able to improve the best - known result from the literature in 37 out of 46 cases . 
in further six casesthe best - known results from the literature are matched .finally , in two remaining cases the heuristic is not able to produce a solution that is at least as good as the best - known solution known from the literature .overall , the heuristic improves by ( on average ) over the best known results from the literature . on the downside ,the heuristic is only able to match the results of applying cplex to model in three out of 45 cases .however , this changes with growing instance size , as we will show later in section [ sec : large ] . with the aim of gaining more insight into the behavior of the heuristic with respect to the choice of a value for parameter , the following information is presented in graphical form in figure [ fig : results : heuristic ] .two graphics are shown for each of the four chosen problem instances .more precisely , we chose to present information for the largest problem instances from each of the four instance sets ( see subfigures ( a ) to ( d ) of figure [ fig : results : heuristic ] ) .the left graphic of each subfigure has to be read as follows .the -axis ranges over the possible values for , while the -axis indicates the size of the set of common blocks that is used for solving models and .the graphic shows two curves . the one with a black line concerns solving model in phase 1 of the heuristic , while the other one ( shown by means of a grey line ) concerns solving model in phase two of the heuristic .the dots indicate for each value of the size of the set of common blocks used by the corresponding models .moreover , in case the interior of a dot is light - grey ( yellow in the online version ) this means that the corresponding model could not be solved to optimality within 50 cpu seconds , while a black interior of a dot indicates that the corresponding model was solved to optimality . finally , the bars in the background of the graphic present the values of the solutions that were generated with different values of .the graphics on the right hand side present the corresponding computation times required by solving the different models .+ the following observations can be made . when the value of is close to the lower or the upper bound that is , either close to 2 or close to of the two involved sets of common blocks is quite large , and , therefore , the computation time needed for solving the corresponding ilp may be large , in particular when the input instance is rather large . on the contrary , for intermediate values of ,the size of both involved sets of common blocks is moderate , and , therefore , cplex is rather fast in providing solutions , even if the optimal solution is not found ( or can not be proven ) within 50 cpu seconds .moreover , the best results are usually obtained for intermediate values of .this is with the exception of instance 10 of group1 , which might be an anomaly caused by the rather small size of the problem instance .based on the findings of the previous subsection the heuristic was applied with an intermediate value of to all problem instances from the set of larger instances described at the end of section [ sec : mip : results ] .the results are shown in table [ tab : results : heuristic : large ] .the first table column provides the length of the input strings of the corresponding random instance .the second column indicates the result of applying cplex with a computation time limit of 3600 cpu seconds to .were described in detail in section [ sec : mip : results ] . 
]the remaining five columns contain the results of heuristic .the first one of these columns provides the value of the solution generated by the heuristic , while the second column shows the corresponding computation time .the next two columns provide the size of the sets of common blocks used in phase 1 , respectively phase 2 , of the heuristic .finally , the last column gives information about the number of common blocks considered by the heuristic in comparison to the size of the complete set of common blocks ( which can be found in table [ tab : results : mip : large ] ) .in particular , summing the common block set sizes from phases 1 and 2 of the heuristic and comparing this number with the size of the complete set of common blocks , the percentage of the common blocks considered by the heuristic can easily be calculated .this percentage is given in the last table column .as in all tables of this paper , the best result per table row is marked by a grey background .the following observations can be made .first , apart from the smallest problem instance , the heuristic outperforms the application of cplex to model .moreover , this is achieved in a fraction of the time needed by cplex .finally , it is reasonable to assume that the success of the heuristic is due to an important reduction of the common blocks that are considered ( see last table column ) . in general, the heuristic only considers between and of all common blocks . this is why the computation times are rather low in comparison to cplex .in this paper we considered a problem with applications in bioinformatics known as the minimum common string partition problem .first , we introduced an integer linear programming model for this problem . by applying the ibm ilog cplex solver to this model we were able to improve all best - known solutions from the literature for a problem instance set consisting of 45 instances of different sizes .the smallest ones of these problem instances could even be solved to optimality in very short computation time .the second contribution of the paper concerned a 2-phase heuristic which is strongly based on the developed integer linear programming model .the results have shown that , first , the heuristic outperforms competitor algorithms from the literature , and second , that it is applicable to larger problem instances .concerning future work , we aim at studying the incorporation of mathematical programming strategies based on the introduced integer linear programming model into metaheuristic techniques such as grasp and iterated greedy algorithms .moreover , we aim at identifying other string - based optimization problems for which a 2-phase strategy such as the one introduced in this paper might work well . c. blum was supported by project tin2012 - 37930 of the spanish government .in addition , support is acknowledged from ikerbasque ( basque foundation for science ) .j. a. lozano was partially supported by the saiotek and it609 - 13 programs ( basque government ) , tin2010 - 14931 ( spanish ministry of science and innovation ) , combiomed network in computational bio - medicine ( carlos iii health institute ) c. blum , j. a. lozano , and p. pinacho davidson .iterative probabilistic tree search for the minimum common string partition problem . in m.j. blesa , c. blum , and s. voss , editors , _ proceedings of hm 20104 9th international workshop on hybrid metaheuristics_ , lecture notes in computer science .springer verlag , berlin , germany , 2014 . in press .x. chen , j. zheng , z. 
fu , p. nan , y. zhong , s. lonardi , and t. jiang .computing the assignment of orthologous genes via genome rearrangement . in _ proceedings of the asia pacific bioinformatics conference 2005 _ , pages 363378 , 2005 .m. chrobak , p. kolman , and j. sgall .the greedy algorithm for the minimum common string partition problem .in k. jansen , s. khanna , j. d. p. rolim , and d ron , editors , _ proceedings of approx 2004 7th international workshop on approximation algorithms for combinatorial optimization problems _ , volume 3122 of _ lecture notes in computer science _ , pages 8495 .springer berlin heidelberg , 2004 .p. damaschke . minimum common string partition parameterized . in k.a. crandall and j. lagergren , editors , _ proceedings of wabi 2008 8th international workshop on algorithms in bioinformatics _ , volume 5251 of _ lecture notes in computer science _ , pages 8798 .springer berlin heidelberg , 2008 .s. m. ferdous and m. s. rahman .solving the minimum common string partition problem with the help of ants . in y. tan ,y. shi , and h. mo , editors , _ proceedings of icsi 2013 4th international conference on advances in swarm intelligence _ , volume 7928 of _ lecture notes in computer science _ , pages 306313 .springer berlin heidelberg , 2013 .b. fu , h. jiang , b. yang , and b. zhu .exponential and polynomial time algorithms for the minimum common string partition problem . in w.wang , x. zhu , and d .- z .du , editors , _ proceedings of cocoa 2011 5th international conference on combinatorial optimization and applications _ , volume 6831 of _ lecture notes in computer science _ , pages 299310 .springer berlin heidelberg , 2011 .a. goldstein , p. kolman , and j. zheng .minimum common string partition problem : hardness and approximations . in r.fleischer and g. trippen , editors , _ proceedings of isaac 2004 15th international symposium on algorithms and computation _ , volume 3341 of _ lecture notes in computer science _ , pages 484495 .springer berlin heidelberg , 2005 .i. goldstein and m. lewenstein .quick greedy computation for minimum common string partitions . in r.giancarlo and g. manzini , editors , _ proceedings of cpm 2011 22nd annual symposium on combinatorial pattern matching _ , volume 6661 of _ lecture notes in computer science _ , pages 273284 .springer berlin heidelberg , 2011 .a novel greedy algorithm for the minimum common string partition problem . in i.mandoiu and a. zelikovsky , editors , _ proceedings of isbra 2007 third international symposium on bioinformatics research and applications _ , volume 4463 of _ lecture notes in computer science _ , pages 441452 .springer berlin heidelberg , 2007 .p. kolman .approximating reversal distance for strings with bounded number of duplicates .in j. jedrzejowicz and a. szepietowski , editors , _ proceedings of mfcs 2005 30th international symposium on mathematical foundations of computer science _ ,volume 3618 of _ lecture notes in computer science _ , pages 580590 .springer berlin heidelberg , 2005 .p. kolman and t. wale .reversal distance for strings with duplicates : linear time approximation using hitting set . in t.erlebach and c. kaklamanis , editors , _ proceedings of waoa 2007 4th international workshop on approximation and online algorithms _ , volume 4368 of _ lecture notes in computer science _ , pages 279289 .springer berlin heidelberg , 2007 .d. shapira and j. a. storer .edit distance with move operations . in a.apostolico and m. 
takeda , editors , _ proceedings of cpm 2002 13th annual symposium on combinatorial pattern matching _, volume 2373 of _ lecture notes in computer science _ , pages 8598 .springer berlin heidelberg , 2002 .
|
the minimum common string partition problem is an np - hard combinatorial optimization problem with applications in computational biology . in this work we propose the first integer linear programming model for solving this problem . moreover , on the basis of the integer linear programming model we develop a deterministic 2-phase heuristic which is applicable to larger problem instances . the results show that provenly optimal solutions can be obtained for problem instances of small and medium size from the literature by solving the proposed integer linear programming model with cplex . furthermore , new best - known solutions are obtained for all considered problem instances from the literature . concerning the heuristic , we were able to show that it outperforms heuristic competitors from the related literature .
|
migration has entered the puzzling scenario of planetary formation as the favorite mechanism advocated to explain the extremely short orbital period of many extrasolar planets .it is known that any planet - like body is forced to adjust its distance from the central star because of gravitational interactions with the circumstellar material . however , the dispute about how fast migration proceeds is far from being over .numerical methods have been employed to evaluate gravitational torques exerted on embedded planets .we have performed a series of simulations modeling both two and three dimensional disks , varying the mass of the protoplanet in the range from 1 earth - mass to 1 jupiter - mass .the physics of the problem demands that the flow in the protoplanet s neighborhood should be accurately resolved . in order to achieve sufficient resolution ,even for very small planetary masses , we use a _ nested - grid _ technique ( dangelo , henning , & kley 2002a ) .this paper addresses the issues of flow circulation around protoplanets , orbital migration , and mass accretion .we assume the protostellar disk to be a viscous fluid ( viscosity ) and describe it through the navier - stokes equations ( kley , dangelo , & henning 2001 ; dangelo et al .the set of equations is integrated over a grid hierarchy , as shown in figure 1 .the planet is supposed to move on a circular orbit at around a solar - mass star .the disk has an aspect ratio and the mass within the simulated region is .a circumplanetary disk forms inside the roche lobe of massive as well as low - mass protoplanets .such structures are characterized by a two - arm spiral wave perturbation .they are detached from the circumstellar disk spirals , which arise outside of the planet s roche lobe .indicating with the distance from the planet normalized to , for a wide range of planetary masses the spiral pattern can be approximated to ,\ ] ] where and .the ratio represents the mach number of the circumplanetary flow .figure 2 ( left panel ) demonstrates how equation ( 1 ) fits to the spiral perturbation around a uranus - mass planet .even an earth - mass planet induces two weak spirals , which wrap around the star for , but no circumplanetary disk is observed ( figure 2 , right panel ) .= 0.428 = 0.428= 0.45 = 0.45 = 0.45 = 0.45 a more complete description of the flow near protoplanets is provided by 3d computations ( dangelo , kley , & henning 2002b ) . the major differences between 2d and 3d modeling arise in the vicinity of the planet because the latter can account for the vertical circulation in the circumplanetary disk . instead, the two geometries decently agree on length scales larger than the disk scale height .two examples of our simulations are illustrated in figure 3 .the images represent the logarithm of the density close to a saturn - mass planet ( left ) and neptune - mass planet ( right ) , in two orthogonal planes ( see figure caption for details ) .the velocity field is overplotted to display the flow features . from the top panels it is clear that the spiral perturbations are weaker and more open than in 2d simulations .the bottom panels indicate the presence of vertical shock fronts , which are generally located outside the hill sphere of the protoplanet .in general , torques exerted by disk material cause a protoplanet to migrate toward the star ( ward 1997 ) . 
yet , nearby matter can be very efficient at slowing down its inward motion ( tanaka , takeuchi , & ward 2002 ; dangelo et al .the migration time scale can be defined as , where the migration drift is directly proportional to the total torque acting on the planet .since we also measure the rate at which the planet accretes matter from its surroundings , an accretion time scale can be introduced as well : .some of our 2d and 3d outcomes for both time scales are shown in figure 4 .circumplanetary disk forms around protoplanets .the spiral wave pattern which marks such disks is less accentuated when the full 3d structure is simulated .vertical shock fronts develop outside the hill sphere of the planet .the estimated values of in 3d are longer than those predicted by analytical linear theories because of non - linearity effects .when , both and are longer in 3d computations then they are in 2d ones .dangelo , g. , henning , th . , & kley , w. 2002a , , 385 , 647 dangelo , g. , kley , w. , & henning , th .2002b , , submitted kley , w. , dangelo , g. , & henning , th .2001 , , 547 , 457 tanaka , h. , takeuchi , t. , & ward , w. 2002 , , 565 , 1257 ward , w. 1997 , icarus , 126 , 261
|
planet evolution is tightly connected to the dynamics of both distant and close disk material . hence , an appropriate description of disk - planet interaction requires global and high resolution computations , which we accomplish by applying a nested - grid method . through simulations in two and three dimensions , we investigate how migration and accretion are affected by long and short range interactions . for small mass objects , 3d models provide longer growth and migration time scales than 2d ones do , whereas time lengths are comparable for large mass planets . # 1_#1 _ # 1_#1 _ = # 1 1.25 in .125 in .25 in
|
the potential of emerging technologies such as fuel cells ( fcs ) and photovoltaics for environmentally - benign power generation has sparked renewed interest in the development of novel materials for high - density energy storage . for mobile applications such as in the transportation sector , the demands placed upon energy storage media are especially stringent , as the leading candidates to replace fossil - fuel - powered internal combustion engines ( ices ) , namely proton exchange membrane fcs and hydrogen - powered ices ( h - ices ) , rely on h as a fuel . although h has about three times the energy density of gasoline by weight , its volumetric density , even when pressurized to 10,000 psi , is roughly six times less than that of gasoline . consequently , safe and efficient storage of h has been identified as one of the key scientific obstacles to realizing a transition to h-powered vehicles . perhaps the most promising approach to achieving the high h densities needed for mobile applications is via absorption in solids. metal hydrides such as lani have long been known to reversibly store hydrogen at volumetric densities surpassing that of liquid h , but their considerable weight results in gravimetric densities that are too low for lightweight applications. accordingly , recent efforts have increasingly focused on low - z complex hydrides , such as metal borohydrides , (bh) , where represents a metallic cation , as borohydrides have the potential to store large quantities of hydrogen ( up to 18.5 wt.% in libh ) . nevertheless , the thermodynamics of h-desorption from known borohydrides are generally not compatible with the temperature - pressure conditions of fc operation : for example , in libh strong hydrogen - host bonds result in desorption temperatures in excess of 300. thus the suitability of libh and other stable hydrides as practical h-storage media will depend upon the development of effective destabilization schemes . building on earlier work by reilly and wiswall, vajo _ et al . _ recently demonstrated that libh can be destabilized by mixing with mgh . in isolation , the decomposition of these compounds proceeds according to : [ pure_rxns ] yielding 13.6 and 7.6 wt.% h , respectively , at temperatures above 300 . the high desorption temperatures are consistent with the relatively high enthalpies of desorption : 67 ( libh ) and ( mgh ) kj/(mol h). by mixing libh with mgh , for the combined reaction can be decreased below those of the isolated compounds due to the exothermic formation enthalpy of mgb : that is , formation of the mgb product _ stabilizes _ the dehydrogenated state in eq . [ destab ] relative to that of eq . [ pure_rxns ] , thereby _ destabilizing _ both libh and mgh . by adopting this strategy , measured isotherms for the libh + mixture over 315 - 400 exhibited a 25 kj / mol h decrease in relative to libh alone , with an approximately tenfold increase in equilibrium h. in addition , the hydride mixture was shown to be reversible with a density of 8 - 10 wt.% h.
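the gravimetric capacities quoted above follow directly from molar masses and reaction stoichiometry ; as a quick sanity check , the short python sketch below evaluates them for the two single - compound reactions and for the 2:1 libh/mgh mixture of eq . [ destab ] ( standard atomic weights are assumed , so the results can differ slightly from the quoted figures depending on the convention used ) .

```python
# Rough check of theoretical gravimetric H2 densities from molar masses alone.
M = {"H": 1.008, "Li": 6.941, "B": 10.811, "Mg": 24.305}

def molar_mass(formula):
    return sum(M[el] * cnt for el, cnt in formula.items())

LiBH4 = {"Li": 1, "B": 1, "H": 4}
MgH2  = {"Mg": 1, "H": 2}
H2    = {"H": 2}

# LiBH4 -> LiH + B + 3/2 H2
wt_LiBH4 = 1.5 * molar_mass(H2) / molar_mass(LiBH4) * 100
# MgH2 -> Mg + H2
wt_MgH2 = molar_mass(H2) / molar_mass(MgH2) * 100
# 2 LiBH4 + MgH2 -> MgB2 + 2 LiH + 4 H2   (the 2:1 destabilized mixture)
wt_mix = 4 * molar_mass(H2) / (2 * molar_mass(LiBH4) + molar_mass(MgH2)) * 100

print(round(wt_LiBH4, 1), round(wt_MgH2, 1), round(wt_mix, 1))  # about 13.9, 7.7, 11.5
```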
nevertheless , the extrapolated temperature of at which bar is still too high for mobile applications , and suggests that _ additional _ destabilization is necessary . the concept of thermodynamic destabilization appears to offer new opportunities for accessing the high h content of strongly - bound hydrides . however , the large number of known hydrides suggests that experimentally testing all possible combinations of known compounds would be impractical ; thus a means for rapidly screening for high - density h-storage reactions with appropriate thermodynamics would be of great value . towards these ends , here we employ first - principles calculations to identify new h-storage reactions with favorable temperature - pressure characteristics based on destabilizing libh and ca(bh) by mixing with selected metal hydrides . our goal is to determine whether additional destabilization of libh and ca(bh) , beyond that demonstrated with libh/mgh , is possible by exploiting the exothermic formation enthalpies of the metal borides . we focus specifically on thermodynamic issues since appropriate thermodynamics is a necessary condition for any viable storage material , and thermodynamic properties are not easily altered . while kinetics must also be considered , catalysts and novel synthesis routes have been shown to be effective at improving reversibility and the rates of h uptake / release. by screening through distinct reactions , we identify four new destabilized mixtures having favorable gibbs free energies of desorption in conjunction with high gravimetric ( 5 - 9 wt.% ) and volumetric ( 85 - 100 g h/l ) storage densities . the predicted reactions present new avenues for experimental investigation , and illustrate that compounds with low gravimetric densities ( i.e. , transition metal hydrides ) may yield viable h-storage solutions when mixed with lightweight borohydrides . an advantage of the present approach is that it relies only on known compounds with established synthesis routes , in contrast to other recent studies which have proposed h-storage reactions based on materials which have yet to be synthesized. an additional distinguishing feature of this study is the development of a set of thermodynamic guidelines aimed at facilitating more robust predictions of hydrogen storage reactions . the guidelines are used to vet the present set of candidate reactions , and to illustrate how other reactions recently reported in the literature are thermodynamically unrealistic . in total , this exercise reveals some of the common pitfalls that may arise when attempting to simply `` guess '' at reaction mechanisms . our first - principles calculations were performed using a planewave - projector augmented wave method ( vasp ) based on the generalized gradient approximation to density functional theory . all calculations employed a planewave cutoff energy of 400 ev , and k - point sampling was performed on a dense grid with an energy convergence of better than 1 mev per supercell . internal atomic positions and external cell shape / volume were optimized to a tolerance of better than 0.01 ev / . thermodynamic functions were evaluated within the harmonic approximation , and normal - mode vibrational frequencies were evaluated using the so - called direct method on expanded supercells. further information regarding the details and experimental validation of our calculations can be found elsewhere. our search for high - density h-storage reactions is based on a series of candidate reactions that are analogous to eq .
[ destab ] : where = li or ca [ = 1 ( 2 ) for li ( ca ) ] , represents a metallic element , and the coefficients and are selected based on the stoichiometries of known hydrides and borides . to maximize gravimetric density we limit to relatively light - weight elements near the top of the periodic table . in the case of = li , the enthalpy of eq .[ gen_eqn ] per mol h can be expressed as : \ ] ] where are the desorption ( formation ) enthalpies of the respective hydrides ( borides ) per mol h ( ) .thus for the destabilized libh reaction is simply an average of the hydride desorption enthalpies , less the enthalpy of boride formation .[ cols= " < , < , > , > , > , > , > , > " , ] table [ table ] lists theoretical h densities , and calculated dehydrogenation enthalpies and entropies for several potential h-storage reactions .reactions 122 enumerate the candidate new reactions , while reactions 2327 are included in order to validate the accuracy of our predictions by comparing with experimentally - measured enthalpies and previous first - principles results ( shown in parentheses ) . turning first to the reactions from experiment ( 2427 ) ,it is clear that the calculated k enthalpies are generally in good agreement with the measured data .as mentioned above , reaction 24 was studied by vajo and co - workers ( see eq .[ destab ] ) .our calculated enthalpy of 50.4 kj / mol h the experimental value by kj / mol .however , since the experimental measurements were made at temperatures ( 315400 ) above the libh melting point ( 268), ) and our calculations are with respect to the ground state _ pnma _crystal structure, we expect due to the higher enthalpy of the liquid state .we begin our discussion of the candidate reactions by commenting on the vibrational contributions ( ) of the solid state phases to the total dehydrogenation entropy , .based on the notion that is largely due to the entropy of h [ /(mol k ) at 300 k ] , a dehydrogenation enthalpy in the approximate range of 2050 kj / mol h would yield desorption pressures / temperatures that are consistent with the operating conditions of a fc. however , as shown in the last column of table [ table ] , the calculated are not negligible ( up to 21% ) in comparison to , calling into question the assumption and the guideline = 2050 kj / mol h .this suggests that a precise determination of the pressure - temperature characteristics of a given desorption reaction requires evaluating the change in gibbs free energy [ , accounting explicitly for the effects of temperature and , as done below .a key concern when attempting to predict favorable hydrogen storage reactions is to ensure that the thermodynamically preferred reaction pathway has been identified .this is a non - trivial task , and our experience has shown that intuition alone is not sufficient to correctly identify realistic reactions involving multicomponent systems. in this regard , several of the reactions in table [ table ] ( denoted by ) are noteworthy as they illustrate the difficulties that may arise when `` guessing '' at reactions .for example , all of the candidate reactions are written as simple , single - step reactions .while this may seem reasonable given the mechanism proposed in ref .( eq . 
[ destab ] ) and its generalization in eq .[ gen_eqn ] , as we discuss below , some of these reactions should proceed via multiple step pathways , with each step having thermodynamic properties that are distinct from the presumed single - step pathway .we group the examples of how chemical intuition might fail into three categories , and for each class , give a general guideline describing the thermodynamic restriction : _ ( 1 ) reactant mixtures involving `` weakly - bound '' compounds : _ we refer here to systems where the enthalpy to decompose one ( or more ) of the reactant phases is less than the enthalpy of the proposed destabilized reaction ; thus , the weakly - bound phase(s ) will decompose before ( i.e. , at a temperature below that which ) the destabilized reaction can proceed .two examples of this behavior can be found in table [ table ] .the first case pertains to reactions 1316 , which , based on their larger enthalpies relative to reaction 12 , would appear to `` stabilize '' ca(bh) . in reality, ca(bh) will decompose before ( with bar at = 88 ) any of the higher temperature reactions 1316 will occur ( 110 ) , indicating that it is impossible to stabilize a reaction in this manner .additional examples of this scenario occur in reactions 1 , 8 , 17 , and 21 , which involve the metastable alh and crh phases . in the case of reaction 1 , alh decompose first ( yielding al and ) , followed by reaction of al with libh ( reaction 2 ) .the consequences of this behavior are significant , since although the intended reaction 1 has an enthalpy ( kj / mol h ) in the targeted range , in reality the reaction will consist of two steps , the first of which has an enthalpy below the targeted range ( alh decomposition ) , while the second ( reaction 2 ) has an enthalpy above this range ._ guideline 1 : the enthalpy of the proposed destabilized reaction must be less than the decomposition enthalpies of the individual reactant phases . _ _ ( 2 ) unstable combinations of product or reactant phases : _ reaction 4 illustrates how the seemingly straightforward process of identifying stable reactant and product phases can become unexpectedly complex . here , the starting mixture of libh and mg is unstable and will undergo the exothermic transformation : which will consume the available mg and form mgh , which will itself react endothermically with the remaining libh according to reaction 24 . the exothermic nature of eq .( [ eq : exolibh ] ) can be understood by noting that the enthalpy of reaction 4 ( 46.4 kj / mol h ) is lower than the decomposition enthalpy of mgh , given by reaction 27 ( 62.3 kj / mol h ) .therefore , the total energy can be lowered by transferring hydrogen to the more strongly bound mgh compound ._ guideline 2 : if the proposed reaction involves a reactant that can absorb hydrogen ( such as an elemental metal ) , the formation enthalpy of the corresponding hydride can not be greater in magnitude than the enthalpy of the destabilized reaction . _ _ ( 3 ) lower - energy reaction pathways : _ reaction 3 , involving a 4:1 mixture of libh:mgh , as well as the related reaction involving a 7:1 stoichiometry , 7libh + mgh mgb + 7lih + 11.5h , were recently suggested in ref . 
, which considered only a single - step mechanism resulting in the formation of mgb and mgb , respectively .here we demonstrate that these reactions will not proceed as suggested there due to the presence of intermediate stages with lower energies .in fact , both hypothetical reactions have larger enthalpies [ = 69 ( 4:1 ) and 74 ( 7:1 ) kj / mol h ] than the 2:1 mixture ( reaction 24 ) , suggesting that , upon increasing temperature , the 4:1 and 7:1 mixtures will follow a pathway whose initial reaction step is the 2:1 reaction ( reaction 24 ) , which will consume all available mgh .subsequent reactions between unreacted libh and newly - formed mgb will become thermodynamically feasible at temperatures above that of reaction 24 , since their enthalpies exceed 50 kj / mol h .[ similar behavior is expected for reactions 9 & 10 , as the 1:1 mixture of libh:fe ( reaction 9 ) will initially react in a 1:2 ratio ( reaction 10 ) , which has a lower enthalpy . ] _ guideline 3 : in general , it is not possible to tune the thermodynamics of destabilized reactions by adjusting the molar fractions of the reactants .there is only one stoichiometry corresponding to a single - step reaction with the lowest possible enthalpy ; all other stoichiometries will release h in multi - step reactions , where the initial reaction is given by the lowest - enthalpy reaction_. .the region within the dashed box corresponds to desirable temperatures and pressures for on - board hydrogen storage : = 1700 bar , = -40100 . ] in total , the preceding examples reveal that great care must be taken in predicting hydrogen storage reactions .having ruled out the specious reactions , we now discuss the thermodynamics of the remaining reactions . using the calculated thermodynamic data ( table [ table ] ) as input to the vant hoff equation , , where bar , fig .[ fig1 ] plots the equilibrium h desorption pressures of these reactions as a function of temperature. structural transition at , which should reduce the slope of the data in fig .[ fig1 ] for . ] included in the plot is a rectangle delineating desirable temperature and pressure ranges for h storage : -40100 , and 1700 bar .as expected , our vant hoff plot confirms that the experimental reactions having large dehydrogenation enthalpies ( reactions 2427 ) yield pressures bar , even at elevated temperatures . 
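the mapping from the tabulated and values to the pressures plotted in fig . [ fig1 ] is the standard vant hoff relation , \ln ( p / p_0 ) = -\Delta H / ( R T ) + \Delta S / R with p_0 = 1 bar , and is simple to reproduce numerically ; the sketch below uses placeholder values of \Delta H and \Delta S in the ranges discussed in the text rather than the actual tabulated data .

```python
import numpy as np

R = 8.314462  # J/(mol K)

def p_eq(T, dH_kJ, dS):
    """Equilibrium H2 pressure in bar: ln(p/p0) = -dH/(R T) + dS/R with p0 = 1 bar."""
    return np.exp(-dH_kJ * 1e3 / (R * T) + dS / R)

T = np.array([233.0, 298.0, 373.0])   # -40 C, 25 C, 100 C
dS = 130.0                            # J/(mol K), roughly the standard entropy of H2 gas
for dH in (30.0, 50.0):               # hypothetical desorption enthalpies, kJ/mol H2
    print(dH, np.round(p_eq(T, dH, dS), 3))
```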
on the other hand , some of the candidate reactions , for example 5 and 19 ,readily evolve h at very low temperatures ( consistent with their low enthalpies ) and are therefore too weakly bound for practical , reversible on - board storage .however , the candidate reactions involving mixtures with sch ( reactions 7 and 18 ) and cr ( reactions 11 and 22 ) desorb h in - regimes that strongly intersect the window of desirable operating conditions .these reactions have room - temperature enthalpies in the range of 2733 kj / mol h , relatively high h densities ( 58.9 wt.% h and 85 - 100 g h/l ) , and achieve bar at moderate temperatures ranging from 26 and .thus , via a first - principles approach of rapid screening through a large number of candidate reactions , and the careful use of thermodynamic considerations to eliminate unstable or multi - step reactions , we predict here several reactions with attributes that surpass the state - of - the - art for reversible , low - temperature storage materials .in conclusion , using first - principles free energy calculations we have demonstrated that further significant destabilization of the strongly - bound libh and ca(bh) borohydrides is possible , and we identify several high h-density reactions having thermodynamics compatible with the operating conditions of mobile h-storage applications .unlike other recent predictions , the proposed reactions utilize only known compounds with established synthesis routes , and can therefore be subjected to immediate experimental testing . in addition , we provide guidance to subsequent efforts aimed at predicting new h storage materials by illustrating common pitfalls that arise when attempting to `` guess '' at reaction mechanisms , and by suggesting a set of thermodynamic guidelines to facilitate more robust predictions .
|
we propose a set of thermodynamic guidelines aimed at facilitating more robust screening of hydrogen storage reactions . the utility of the guidelines is illustrated by reassessing the validity of reactions recently proposed in the literature , and through vetting a list of more than 20 candidate reactions based on destabilized libh and ca(bh) borohydrides . our analysis reveals several new reactions having both favorable thermodynamics and relatively high hydrogen densities ( ranging from 5 - 9 wt.% h & 85 - 100 g h/l ) , and demonstrates that chemical intuition alone is not sufficient to identify valid reaction pathways .
|
multiple - input multiple - output ( mimo ) systems refer to systems with multiple antennas implemented at the transceiver nodes .they exploit spatial diversity to provide high data rate and link reliability . in conventional mimo systems ,the number of antennas is usually moderate ( e.g. , the lte standard allows for up to 8 antenna ports ) .recently , large - scale mimo systems or massive mimo , where hundreds of antennas are implemented at the transceiver nodes , attract a lot of attention .it has been shown that , due to the large scale , the antennas can form sharp beams toward desired terminals , thus providing high spectral and energy efficiencies . besides , the effects of small - scale fading and interference can be significantly reduced with linear signal processing , such as maximal - ratio - combining ( mrc ) , maximal - ratio - transmission ( mrt ) , and zero - forcing ( zf ) .the performance of massive mimo systems have been widely studied in the literature . in , for the uplink of massive mimo systems with mrc or zf , the deterministic equivalence of the achievable sum - rate is derived by using the law of large numbers .the following power scaling laws are shown . with perfect channel state information ( csi ) , the user and/or relay power can be scaled down linearly with the number of antennas while maintaining the same signal - to - interference - plus - noise - ratio ( sinr ) ; when there is csi error ( where minimum mean - squared error ( mmse ) estimation is used ) and the training power equals the data transmit power , the power can only be scaled down by the square root of the number of antennas .another work on the energy efficiency and power efficiency of a single - cell multi - user massive mimo network is reported in , where a bayesian approach is used to obtain the capacity lower bounds for both mrt and zf precodings in the downlink .it is shown that that for high spectral efficiency and low energy efficiency , zf outperforms mrt , while at low spectral efficiency and high energy efficiency the opposite holds . while the channel models used in are rayleigh fading , ricean fading channel is considered in in massive mimo uplink , where the csi is also obtained with mmse estimator .sum - rate approximations on the mrc and zf receivers are obtained using the mean values of the components in the sinr formula .the derived power scaling law is that when the csi is perfect or ricean factor is non - zero , the user transmit power can be scaled down inversely proportional with the number of antennas while maintaining the same sinr level .otherwise , the transmit power can only be scaled down inversely proportional to the square root of the antenna number . while the aforementioned work analyses the sum - rate and power scaling law , there are also some work on the sinr distribution and outage probability . 
in , the sinr probability density function ( pdf ) of mrt precoding is derived in closed - form in the downlink of a single - cell multi - user massive mimo network .besides , the asymptotic sinr performance is analysed when the number of users remains constant or scales linearly with the number of antennas .for the same network , in , the outage probability of mrt precoding is derived in closed - form .the authors first obtain the distribution of the interference power , based on which the outage probability is derived in closed - form .while only small - scale fading is considered in , both small - scale ( rayleigh ) fading and large - scale ( log - normal ) fading are considered in . in this work ,the pdf of the sinr of mrc receiver is approximated by log - normal distribution , and the outage probability is derived in closed - form .the analysis shows that the shadowing effect can not be eliminated by the use of a large number of antennas .current results on massive mimo show fantastic advantages of utilizing a large number of antennas in communications. a natural expansion of the single - hop massive mimo systems is the two - hop massive mimo relay networks , where the relay station is equipped with a large number of transmit and receive antennas to help the communications of multiple source - destination pairs .relaying technology has been integrated to various wireless communication standards ( e.g. , lte - advanced and wimax release 2 ) as it can improve the coverage and throughput of wireless communications .early studies focus on single - user relay networks and various relaying schemes , such as amplify - and - forward ( af ) and decode - and - forward ( df ) , have been proposed . with ever - increasing demands for higher performance , recently , multi - user relay networks have gained considerable attention . an important issue in multi - user relaying is how to deal with inter - user interference . by utilizing massive mimo ,the interference is expected to be significantly reduced and the network performance will be significantly improved .research activities on massive mimo relay networks are increasing in recent years . in , for a single - user massive mimo relay network with co - channel interferences at the relay , the ergodic capacity and outage probability of mrc / mrt and zf relaying schemes are derived in closed - forms .the more general multiple - user massive mimo relay networks are analysed in .depending on the structure of the network model , the works can be divided to the following two categories . in , a network with multiple single - antenna users , one massive mimo relay station and one massive mimo destination is considered .this model applies to the relay - assisted uplink multiple - access network . 
in , it is shown that with perfect csi , and infinite relay and destination antennas , the relay or user transmit power can scale inversely proportional to the number of antennas without affecting the performance .when there is csi error , the user or relay power can only scale down with the square root of the number of antennas , given that the training power equals the transmit power .the same network is also considered in while the co - channel interference and pilot contamination are considered in , and channel aging effect is considered in .the effects of these factors on the power scaling are shown therein .another type of network is the relay - assisted multi - pair transmission network , where multiple single - antenna sources communicate with their own destinations with the help of a massive mimo relay . in ,the sum - rates of multi - pair massive mimo relay network with mrc / mrt and zf relaying under perfect csi are analysed for one - way and two - way relaying respectively . in both work , with the deterministic equivalence analysis, it is shown that the sum - rate can remain constant when the transmit power of each source and/or relay scales inversely proportional to the number of relay antennas . in , the same network model as considered for mrc / mrt relaying where the number of relay antennas is assumed to be large but finite .the analysis shows that , when the transmit powers of the relay and sources are much larger than the noise power , the achievable rate per source - destination pair is proportional to the logarithm of the number of relay antennas , and is also proportional to the logarithm of the reciprocal of the interferer number . in ,the full - duplex model is considered for one - way mrc / mrt relaying and a sum - rate lower bound is derived with jensen s inequality .while the above work assume perfect csi at the relay , recent study has turned to networks with csi error , which is more practical and challenging to analyse . in , a one - way massive mimo relay network model is considered , where mmse estimation is used to obtain the csi . while uses zf relaying and assumes that the csi error exists in both hops , uses mrc / mrt relaying and assumes that the csi error only exists in the relay - destination hop . in both work ,the power scalings of the sources and relay for non - vanishing sinr are discussed under the assumption that the training power equals the data transmission power . compared with previous power scaling law results ,the analysis in are more comprehensive by allowing the power scaling to be anywhere between constant and linearly increasing with the number of relay antennas . is on a two - way mrc / mrt relaying network with csi error . with deterministic equivalence analysis, it is shown that when the source or relay power scales inversely proportional to the number of relay antennas , the effects of small - scale fading , self - interference , and noise caused by csi error all diminish . in this work, the performance of mrc / mrt relaying in a one - way massive mimo relay network with csi error is investigated .our major differences from existing work are summarized as blow . *our system model is different from all the aforementioned existing work in relaying scheme , csi assumption , or communication protocol .the work with the closest model is , where the csi error is assumed to exist in the relay - destinations hop only .we use a more general model where csi error exists in both hops . 
* in our scaling law analysis , a general model for network parameters ,including the number of source - destination pairs , the csi quality parameter , the transmit powers of the source and the relay , is proposed . in this model ,the scale exponent with respect to the relay antenna number can take continuous values from 0 to 1. in most existing work , only a few discrete values for the power scaling , e.g. , , are allowed .although allow continuous exponent values , they constrains the number of sources as constant and the training power equals to the transmit power . *while in existing work , the asymptotically deterministic equivalence analysis is based on the law of large numbers , we use the quantized measure , squared coefficient of variation ( scv ) , to examine this property .as law of large numbers only applies to the summation of independent and identical distributed random variables , by using the scv , we can discuss the asymptotically deterministic property of random variables with more complex structures .based on these features that distinguish our work from existing ones , our unique contributions are listed as below . 1. firstly , by deriving a lower bound on the sum - rate , we investigate the performance scaling law with respect to the relay antenna number for a general setting on the scalings of the network parameters .the law provides comprehensive insights and reveals quantitatively the tradeoff among different system parameters .deterministic equivalence is an important framework for performance analysis of massive mimo systems .we derive a sufficient condition on the parameter scales for the sinr to be asymptotically deterministic .compared with existing work , where only specific asymptotic cases are discussed , our derived sufficient condition is more comprehensive .it covers all cases in existing works , and shows more asymptotically deterministic sinr scenarios .besides , for the sinr to be asymptotically deterministic , the tradeoff between different parameter scales is also discussed .3 . through the scaling law results ,we show that for practical network scenarios , the average sinr is at the maximum linearly increasing with the number of relay antennas .we prove that the sufficient and necessary condition for it is that all other network parameters remain constant .furthermore , our work shows that in this case the interference power does not diminish and it dominates the statistical performance of the sinr . by deriving the pdf of the interference power in closed - form, expressions for outage probability and average bit error rate ( aber ) are obtained .while existing work mainly focus on the constant sinr case , this linearly increasing sinr case , suitable for high quality - of - service applications , has not been studied .the remaining of the paper is organized as follows . in the next section , the system model including both the channel estimation and data transmission under mrc / mrt relaying is introduced .then the performance scaling law is analyzed in section [ sec : scaling ] . 
in section [ sec : deter ] , the asymptotically deterministic sinr case is discussed .the linearly increasing sinr case is investigated in section [ sec : linear ] .section [ sec : simu ] shows the simulation results and section [ sec : con ] contains the conclusion .we consider a multi - pair relay network with single - antenna sources ( ) , each transmitting to its own destination .that is , sends information to destination , .we assume that the sources are far away from the destinations so that no direct connections exist . to help the communications , a relay stationis deployed .the number of antennas at the relay station , , is assumed to be large , e.g. , a few hundreds .in addition , we assume because under this condition , simple linear relay processing , e.g. , mrc / mrt , can have near optimal performance in massive mimo systems .denote the and channel matrices of the source - relay and relay - destination links as and , respectively .the channels are assumed to be independent and identically distributed ( i.i.d . ) rayleigh fading , i.e. , entries of and are mutually independent following the circular symmetric complex gaussian ( cscg ) distribution with zero - mean and unit - variance , denoted as .the assumption that the channels are mutually independent is valid when the relay antennas are well separated .the information of and is called the channel state information ( csi ) , which is essential for the relay network . in practice, the csi is obtained through channel training . due to the existence of noises and interference , the channel estimation can not be perfect but always contains error .the csi error is an important issue for massive mimo systems . in what follows, we will first describe the channel estimation model , then the data transmission and mrc / mrt relaying scheme will be introduced . to combine the received signals from the sources andprecode the signals for the destinations , the relay must acquire csi . , the uplink channel from the sources to the relay , can be estimated by letting the sources send pilots to the relay . in small - scale mimo systems , can be estimated by sending pilots from the relay to the destinations and the destinations will feedback the csi to the relay . however, this strategy is not viable for massive mimo systems , as the training time length grows linearly with the number of relay antennas , which may exceed the channel coherence interval .consequently , to estimate , we assume a time - division - duplexing ( tdd ) system with channel reciprocity .so pilots are sent from the destinations and the relay - destination channels can be estimated at the relay station . without loss of generality ,we elaborate the estimation of , and the estimation of is similar .since the channel estimation is the same as that in the single - hop mimo system , we will briefly review it and more details can be found in and references therein .denote the length of the pilot sequences as .for effective estimation , is no less than the number of sources .assume that all nodes use the same transmit power for training , which is denoted as .therefore , the pilot sequences from all sources can be represented by a matrix , which satisfies .the received pilot matrix at the relay is where is the noise matrix with i.i.d . elements .the mmse channel estimation is considered , which is widely used in the channel estimation of massive mimo networks . the mmse estimation of given is where , which has i.i.d . 
elements .similarly , the mmse estimation of is define and which are the estimation error matrices . due to the feature of mmse estimation , and , and are mutual independent .elements of and are distributed as .elements of and are distributed as .define so is total energy spent in training . is the power of the estimated channel element , representing the quality of the estimated csi , while is the power of the csi error .it is straightforward to see that . when , the channel estimation is nearly perfect .when , the quality of the channel estimation is very poor .note that , different combinations of and can result in the same . for the majority of this paper , will be used in the performance analysis instead of and .this allows us to isolate the training designs and focus on the effects of csi error on the system performance .when we consider special cases with popular training settings , e.g. , and the same training and data transmission power , and will be used instead of in modelling the csi error . with the estimated csi ,the next step is the data transmission .various relay technologies have been proposed . for massive mimo systems ,the mrc / mrt relaying is a popular one due to its computational simplicity , robustness , and high asymptotic performance . in the rest of this section, the data transmission with mrc / mrt relaying will be introduced .denote the data symbol of as and the vector of symbols from all sources as . with the normalization , we have , where represents the hermitian of a matrix or a vector .let be the average transmit power of each source .the received signal vector at the relay is where is the noise vector at the relay with i.i.d .entries each following . with mrc / mrt relaying , the retransmitted signal vector from the relayis , where is to normalize the average transmit power of the relay to be . with straightforward calculations, we have where the approximation is made by ignoring the lower order terms of . denote , , and as the columns of , and respectively ; , and as the rows of , and respectively .the received signal at can be written as follows . where is the noise at the destination following . equation ( [ eq : recei_sig_e ] ) shows that the received signal is composed of 5 parts : the desired signal , the multi - user interference , the forwarded relay noise , the csi error term , and the noise at .define from ( [ eq : recei_sig_e ] ) , we know that and are the normalized powers of the signal , the interference , the forwarded relay noise , and the noise due to csi error respectively . with these definitions ,the sinr of the source - destination pair can be written as the achievable rate for the source - destination pair is this paper is on the performance behaviour and asymptotic performance scaling law of the massive mimo relay network .it is assumed throughout the paper that the number of relay antennas is very large and the scaling law is obtained by studying the highest - order term with respect to . due to the complexity of the network , it is impossible to rigorously obtain insightful forms for the sinr and the achievable rate for the general case .instead , we find the asymptotic performance properties for very large with the help of lindebergy - l central limit theorem ( clt ) .the clt states that , for two length- independent column vectors and , whose elements are i.i.d .zero - mean random variables with variances and , {d}\mathcal{cn}(0,\sigma_1 ^ 2\sigma_2 ^ 2),\ ] ] where {d} ] , the random variables , , , , , all have bounded means . 
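before turning to the asymptotic arguments , it is convenient to have a direct numerical reference for the data - transmission model just described . the following monte - carlo sketch is illustrative only : it adopts the conventions stated above ( channel entries distributed as cn(0,1) , mmse estimates with per - entry power p_c and independent errors of power 1 - p_c , unit - variance noise everywhere ) , computes the instantaneous receive sinr directly from the true channels rather than from the signal decomposition used in the analysis , and all function and parameter names are ours .

```python
import numpy as np

def mrc_mrt_pair_sinr(M=128, K=5, P=1.0, Pr=1.0, p_c=0.9, trials=500, seed=0):
    """Monte-Carlo sketch of the instantaneous per-pair receive SINR for
    multi-pair MRC/MRT relaying with imperfect CSI.  Assumed conventions:
    channel entries are CN(0,1); the MMSE estimate of each entry has power
    p_c and the independent estimation error has power 1 - p_c; all noise
    terms have unit variance; P and Pr are the per-source and relay powers."""
    rng = np.random.default_rng(seed)
    cn = lambda shape, var: np.sqrt(var / 2.0) * (
        rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
    sinr = np.zeros((trials, K))
    for t in range(trials):
        G1h, E1 = cn((M, K), p_c), cn((M, K), 1.0 - p_c)   # source -> relay
        G2h, E2 = cn((M, K), p_c), cn((M, K), 1.0 - p_c)   # relay -> destination
        G1, G2 = G1h + E1, G2h + E2                        # true channels
        F0 = np.conj(G2h) @ G1h.conj().T                   # un-normalised MRC/MRT matrix
        # scale the relay gain so that, conditioned on the channels, the relay
        # transmit power averaged over symbols and relay noise equals Pr
        alpha = np.sqrt(Pr / (P * np.linalg.norm(F0 @ G1, 'fro') ** 2
                              + np.linalg.norm(F0, 'fro') ** 2))
        F = alpha * F0
        A = G2.T @ F @ G1        # K x K end-to-end channel (signal and interference)
        B = G2.T @ F             # forwards the relay noise to the destinations
        for i in range(K):
            sig = P * np.abs(A[i, i]) ** 2
            intf = P * (np.sum(np.abs(A[i, :]) ** 2) - np.abs(A[i, i]) ** 2)
            relay_noise = np.sum(np.abs(B[i, :]) ** 2)
            sinr[t, i] = sig / (intf + relay_noise + 1.0)
    return sinr

# illustrative use: average achievable rate per pair for two array sizes
for M in (64, 256):
    print(M, np.mean(np.log2(1.0 + mrc_mrt_pair_sinr(M=M, trials=100))))
```

note that the per - realization normalisation of the relay gain in this sketch is a simplification ; the analysis in the text normalises the average relay transmit power instead .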
from ( [ eq : scv1 ] ) , we know that is asymptotically deterministic since its scv approaches to 0 as . furthermore , the decreasing rate of its scv is linear in , showing a fast convergence rate .thus , for large , we can approximate it with its mean value . while for the rest components in the sinr , their scvs depend on the scalings of network parameters ( such as and ) , which do not necessarily converge to .we can not assume they are asymptotically deterministic so far .with the aforementioned approximation , the sinr expression becomes with this simplification , the following result on the sum - rate can be obtained .the achievable rate of source in the massive mimo relay network has the following lower bound : where [ lemma - rate ] as is a convex function of , according to jensen s inequality , we have by applying the sinr approximation in ( [ eq : sinr_e_app ] ) , we have +\frac{1}{mpp_c}+\frac{k}{m^2pp_c^2}+\frac{k}{m^2}(\frac{1}{p_c}-1)^2+\frac{2(1-p_c)}{mp_c}+\frac{k(1+\frac{k}{mp_c}+\frac{1}{pp_cm})}{mp_cq}},\nonumber\\ & \approx & \frac{1}{\frac{2k}{mp_c}+\frac{k^2}{m^2p_c^2}+\frac{1}{mpp_c}+\frac{k}{m^2pp_c^2}+\frac{k}{mp_cq}+\frac{k^2}{m^2p_c^2q}+\frac{k}{m^2pp_c^2q } } = \widetilde{\rm sinr}_{i } , \end{aligned}\ ] ] where the approximation is made by ignoring the lower order terms of when .thus the lower bound in ( [ rate - lb ] ) is obtained . from ( [ rate - lb ] ) and ( [ eq : sinr_exp ] ), we can see that the achievable rate lower bound increases logarithmically with and .but its increasing rates with , , are slower than logarithmic increase .note that , by using the method in lemma 1 of , the sum - rate expression in ( [ rate - lb ] ) can also be obtained .but with the method in , the derived expression is an approximation , while our derivations show that it is a lower bound for large . on the other hand , from lemma 1 of , we know that the lower bound becomes tighter when the number of relay antennas or the number of sources increases .the parameter has the physical meaning of asymptotic effective sinr corresponding to the achievable rate lower bound . due to the monotonic relationship in ( [ rate - lb ] ) , to understand the scaling law of the achievable rate is equivalent to understanding the scaling law of .now , the scaling law of the asymptotic effective sinr , , will be analysed to show how the system performance is affected by the size of the relay antenna array and other network parameters . to have a comprehensive coverage of network setups and applications , for all system parameters including the number of source - destination pairs , the source transmit power , the relay transmit power , and the csi quality parameter , a general scaling model with respect to is used .assume that where the notation means that when , and have the same scaling with respect to .in other words , there exists positive constants and natural number , such that for all .thus the exponents , , , and represents the relative scales of , , , and with respect to . for practical ranges of the system parameters , we assume that .the reasons are given in the following .* the scale of . following typical applications of massive mimo , the number of users should increase or keep constant with the number of relay antennas .thus . on the other hand , the number of users can not exceed since the maximum multiplexing gain provided by the relay antennas is .thus , . * the scale of and . 
following the high energy efficiency and low power consumption requirements of massive mimo , the source and relay transmit power should not increase with the number of relay antennas .but they can decrease as the number of relay antennas increases with the condition that their decreasing rates do not exceed the increasing rate of the antenna number .this is because that the maximum array gain achievable from antennas is .a higher - than - linear decrease will for sure make the receive sinr a decreasing function of , which contradicts the promise of massive mimo communications .thus . *the scale of . from the definition of in ( [ eq : pc ] ), we have , thus .this is consistent with the understanding that the csi quality will not improve as the number of relay antennas increases , as the training process can not get benefits from extra antennas .on the other hand , since similar to the data transmission , the total training energy should not has lower scaling than , we conclude that should not have a higher scaling than .thus . in our parametermodelling , the exponents can take any value in the continuous range ] .in our asymptotically deterministic sinr analysis , the scale of the sinr is no larger than . while , it can be seen from ( [ snr - scaling ] ) that the maximum scale of the sinr with respect to the number of relay antennas , is , i.e. , linearly increasing with .this is a very attractive scenario for massive mimo relay networks , in the sense that when significant improvement in the network throughput and communication quality can be achieved .possible applications for such scenario are networks with high reliability and throughput requirement such as industrial wireless networks and high - definition video . in this section ,we study networks with linearly increasing sinr .first , the condition on the parameter scaling for the sinr to be linearly increasing is investigated .then we show that in this case the interference power is not asymptotically deterministic , but with a non - diminishing scv as .thus deterministic equivalence analysis does not apply and the small - scale effect needs to be considered in analyzing the performance .we first derive a closed - form pdf of the interference power , then obtain expressions for the outage probability and aber .their scalings with network parameters are revealed .[ pro : linearsinr ] when , the sufficient and necessary condition for the average sinr to scale as is , i.e. , the csi quality , the source transmit power , the relay power , and the number of users all remain constant . in this case , the sinr can be approximated as where . please see appendix c. proposition [ pro : linearsinr ] shows that for linearly - increasing sinr , the interference power is not asymptotically deterministic and does not diminish as increases .in addition , the randomness of the interference power is the dominant contributor to the random behaviour of the sinr . with this result , to analyse the outage probability and aber performance , the distribution of the interference needs to be derived . [pro : pdf ] define when , the pdf of has the following approximation : where is the pdf of gamma distribution with shape parameter and scale .it can also be rewritten into the following closed - form expression : .\hspace{1cm}\ ] ] please see appendix d. from ( [ equ4 ] ) , it can be seen that the interference power has a mixture of infinite gamma distributions with the same scale parameter which is but different shape parameters . 
butas ( [ equ4 ] ) is in the form of an infinite summation , it is manipulated into ( [ cf - pdf - e ] ) for further analysis . besides , when the csi quality is high , i.e. , , we have and thus and can be simplified by ignoring the term .compared with the perfect csi case where , the csi error makes smaller .outage probability is the probability that the sinr falls below a certain threshold . due to the complexity of relay communications ,the user - interference , and the large scale , the outage probability analysis of multi - user massive mimo relay networks is not available in the literature .the derived approximate pdf for the interference power in ( [ cf - pdf - e ] ) and the simplified sinr approximation in ( [ eq : sinr_app_high ] ) for linearly increasing sinr case allow the following outage probability derivation .let be the sinr threshold and define the outage probability of user can be approximated as when , from ( [ cf - pdf - e ] ) , we have where is the upper incomplete gamma function .this outage probability expression is too complex for useful insights .a simplified one is derived in the following proposition for systems with high csi quality .[ pro : outage_app ] define when and , we have by the definitions of and in ( [ eq : pc ] ) , when , we have . thus . further define when , we have and therefore then , from we know that with this approximation , the outage probability expression in ( [ outageprob ] ) can be reformulated as \ ] ] notice that as , . thus the second term in the bracket of the previous formula can be ignored , and the approximation in ( [ outate_app ] ) is obtained .we can see that the outage probability approximation in ( [ outate_app ] ) is tight when the number of relay antennas is much larger than the number of source - destination pairs and the training power and transmit powers are high .these conditions will result in a high received sinr .thus , the approximation in ( [ outate_app ] ) applies to the high sinr case .note that ( [ outate_app ] ) can also be obtained by deleting the second summation term in the pdf formula in ( [ cf - pdf - e ] ) and then integrating with the approximated pdf .this is because that , for the high sinr case , the outage probability is determined by the sinr distribution in the small sinr region , which is equivalently the high interference power region , corresponding to the tail of the pdf of the interference power .it can be seen from the pdf in ( [ cf - pdf - e ] ) that , the first term has a heavier tail , thus dominates the outage probability .now , we explore insights from ( [ outate_app ] ) . as are independent with or , the outage probability scales as with and scales as with .firstly , it shows the natural phenomenon that increasing or will decrease the outage probability .also , we can see that the outage probability curve with respect to has a sharper slope than that with .for example , let , doubling alone will shrink the outage probability by a factor of , while doubling alone will shrink the outage probability by a factor of , which is powers of the shrinkage of the doubling- case .furthermore , the outage probability will not diminish to zero as the user and relay transmit power increase .an error floor exists due to the user - interference . 
on the other hand ,increasing the number of relay antennas to infinity leads to faster decrease in the outage probability and makes it approach zero .note that in our analysis , we assume but does not go to infinity .so terms with are not treated as asymptotically small and thus are not ignored . if and , the terms can be seen as and we will have however , this asymptotic analysis is not practical because the number of massive mimo antennas is usually a few hundreds in practice , so that may not be much larger than other parameters such as .aber is anther important performance metric . due to the complexity of the sinr distribution , aber analysis of the massive mimo relay networkis not available in the literature . for the linearly increasing sinr case, the aber can be analyzed as below .denote the aber as .it is given by where is the conditional error probability and is the pdf of the sinr . for channels with additive white gaussian noise , for several gray bit - mapped constellations employed in practical systems , where is the complementary error function , and are constants depended on the modulation .for example , for bpsk , , . for the linearly increasing sinr case , with the pdf of the interference power in ( [ cf - pdf - e ] ) and the sinr approximation in ( [ eq : sinr_app_high ] ) ,the pdf of the sinr can be derived as below . by using ( [ pdf_sinr ] ) in ( [ eq : defpb ] ) , an approximation on the aber is derived in the following proposition .[ pro : ber ] when and , the aber can be approximated as the pdf of the sinr in ( [ pdf_sinr ] ) can be rewritten as .\label{pdf_sinr_1}\end{aligned}\ ] ] as the aber is determined by the pdf when is small , we consider the range . with and ,similarly as the proof of proposition [ pro : outage_app ] , we can show that , and thus this term can be ignored .the aber can be derived by solving . as the aber is determined by the region when is small, we replace the integration region with for a tractable approximation . by using , the integration formula , and ,the aber approximation in ( [ eq : ber_2 ] ) is obtained .we can see from ( [ eq : ber_2 ] ) that increasing will make the aber decrease and approach zero .besides , for very large the aber behaves as .as is known , the aber of traditional mimo system with transmit antennas and 1 receive antenna under rayleigh fading is .this shows different aber behaviour in the massive mimo relay network , where the aber decreases exponentially with respect to .if the diversity gain definition of traditional mimo system is used , the massive relay network will have infinite diversity gain .comparing ( [ eq : ber_2 ] ) with ( [ outate_app ] ) , we see that the aber and the outage probability has the same scaling with and respectively .thus , scaling analysis for the outage probability also applies to the aber .in addition , if the threshold is set as , the aber equals times the outage probability .thus , there is a simple transformation between the two metrics .in this section , simulation results are shown to verify the analytical results ..the network settings for figure 1 [ cols="^,^,^,^,^,^",options="header " , ] [ table ] for different network scenarios.,width=480 ] in fig .[ fig : scaling ] , the simulated average sinr with respect to the number of relay antennas is shown for the five network settings given in table [ table ] to verify the sinr scaling result in theorem [ thm-1 ] . in the table , is the floor function . 
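alongside the tabulated settings , the analytical quantities behind these figures can be reproduced numerically . the snippet below evaluates the closed - form effective sinr ( and hence the rate lower bound ) from the expression derived in section [ sec : scaling ] , and cross - checks the outage probability and aber against the monte - carlo sketch given in the system - model section . the mapping of the symbols p and q to the per - source and relay powers , the bpsk constants a = 1/2 and b = 1 , and all numerical values are assumptions made for illustration only .

```python
import numpy as np
from scipy.special import erfc

def effective_sinr(M, K, P, Q, p_c):
    """Closed-form asymptotic effective SINR (the lower-bound expression in
    the text); P is read here as the per-source power and Q as the relay
    power, which is an assumed mapping of the symbols."""
    return 1.0 / (2 * K / (M * p_c) + K ** 2 / (M ** 2 * p_c ** 2)
                  + 1.0 / (M * P * p_c) + K / (M ** 2 * P * p_c ** 2)
                  + K / (M * p_c * Q) + K ** 2 / (M ** 2 * p_c ** 2 * Q)
                  + K / (M ** 2 * P * p_c ** 2 * Q))

def outage_and_aber(M=256, K=4, P=1.0, Q=1.0, p_c=0.95,
                    gamma_th=10.0, a=0.5, b=1.0, trials=2000):
    """Monte-Carlo outage probability and ABER, reusing the mrc_mrt_pair_sinr
    sketch defined earlier in the text; a = 1/2 and b = 1 are the usual
    Gray-mapped BPSK constants."""
    s = mrc_mrt_pair_sinr(M=M, K=K, P=P, Pr=Q, p_c=p_c, trials=trials).ravel()
    return np.mean(s < gamma_th), np.mean(a * erfc(np.sqrt(b * s)))

# rate lower bound per pair and its growth with the relay antenna number
for M in (64, 128, 256, 512):
    print(M, np.log2(1.0 + effective_sinr(M, K=4, P=1.0, Q=1.0, p_c=0.95)))
```

the settings listed in table [ table ] can be passed to these helpers directly .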
for different settings of network parameters , their sinr scalings ( values )are calculated based on the sinr scaling law in ( [ snr - scaling ] ) and shown in the table .the first three cases have constant scaling . in case 4 and case 5 ,the average sinr scale as and .the figure verifies these scaling law results . and , db , .,width=480 ] in fig . [ fig - rate ] ,the average achievable rate per source - destination pair is simulated for different number of sources with or relay antennas .the source and the relay powers are set to be db .the csi quality is set as .we can see that the lower bound in ( [ rate - lb ] ) is very tight .with given number of relay antennas , the achievable rate per source - destination pair decreases as there are more pairs . or , , .,width=480 ] in fig .[ pdf_inter ] , for a relay network with or source - destination pairs and relay antennas , the simulated pdf of is shown .the csi quality parameter is set as .the analytical expression in ( [ cf - pdf - e ] ) is compared with the simulated values .we can see from fig .[ pdf_inter ] that the pdf approximation is tight for the whole parameter range .especially , the approximation matches tightly at the tail when the interference power is large , which is the dominate range of outage and aber .db , , .,width=480 ] fig .4 shows the outage probability for different number of relay antennas .the analytical expressions in ( [ outageprob ] ) and ( [ outate_app ] ) are compared with the simulated values .the transmit powers of the users and the relay are set as db . the csi quality parameter is set as . the number of sources is or and the sinr threshold is db .we can see that our analytical result in ( [ outageprob ] ) and the further approximation in ( [ outate_app ] ) are both tight for all the simulated parameter ranges .besides , the approximations becomes tighter as the relay antennas number increases . . db , .,width=480 ] in fig .[ fig : aber_m_ksmall ] , the aber for bpsk is simulated for different number of relay antennas with or , db and . the analytical approximation in ( [ eq : ber_2 ] ) is compared with the simulated values . from the figure, we can see that the analytical result in ( [ eq : ber_2 ] ) is tight for the simulated values , and is tighter when the number of source - destination pairs is smaller .in this work , we analysed the performance of a massive mimo relay network with multiple source - destination pairs under mrc / mrt relaying with imperfect csi .firstly , the performance scaling law is analysed which shows that the scale of the sinr is decided by the summation of the scales of the csi quality plus the larger of the per - source transmission power of the two hops . with this result ,typical scenarios and trade - off between parameters are shown .our scaling law is comprehensive as it takes into considerations many network parameters , including the number of relay antennas , the number of source - destination pairs , the source transmit power and the relay transmit power .then , a sufficient condition for asymptotically deterministic sinr is derived , based on which new network scenarios for systems with the asymptotically deterministic property are found and tradeoff between the parameters is analysed . at last , we specify the necessary and sufficient condition for networks whose sinr increases linearly with the number of relay antennas . 
in addition , our work show that for this case the interference power does not become asymptotically deterministic and derived the pdf of the interference power in closed - form . then the outage probability and aber expressions for the relay network are obtained and their behaviour with respect to network parameters are analysed .simulations show that the analytical results are tight .in the first term of ( [ appen_1 ] ) , as entries of and are i.i.d . whose distribution follows , and have a gamma distribution with shape parameter and scale parameter .thus , where the approximation is by ignoring lower order terms of when . for the remaining terms , where is the entry of , and is the entry of . thus can be seen the summation of terms of i.i.d .random variables , each with mean , variance . according to clt, the distribution of converges to when .then has a gamma distribution with shape parameter and scale parameter .thus , we can obtain as , we have .thus the mean of is .the received sinr is asymptotically deterministic when its scv approaches zero as .however , due to the complex structure of the sinr expression , it is highly challenging to obtain its scv directly . alternatively , as is shown in section iii , is asymptotically deterministic , thus for the sinr to be asymptotically deterministic , the sufficient and necessary condition is that the denominator of the formula in ( [ eq : asym ] ) is asymptotically deterministic .one sufficient condition is that the scv of the denominator denoted as , is no larger than for some constant , given any positive number , .but for practical applications of the deterministic equivalence analysis in large but finite - dimension systems , we consider the scenario that the scv decrease linearly with the number of antennas or faster .the derived condition is thus sufficient but not necessary . ] .this can be expressed as from ( [ eq : asym ] ) , we have and since , we have thus ( [ eq : scv ] ) is equivalent to that for some constant . [ lemma : var ]a sufficient condition for ( [ eq : var_scv ] ) is that the variance of each term in ( [ eq : var_scv ] ) scales no larger than , i.e. , the maximum scale order of , , , , and is no larger than .the variance of is the summation of two parts : the variances of each term , and the covariance of every two terms .now , we will prove that if the variances of each term scales no larger than , their covariance also scales no larger than .to make it general and clear , we define , where is a finite integer and s are random variables . without loss of generality , we assume that has the highest scale among all s and , where .the variance of is by the definition of covariance , takes the maximum value when s are linearly correlated , i.e. , . in this case, we can obtain that where we have defined .as has the highest scale , we have scales no higher than , that is , there exists constants s such that .thus , and consequently scales no higher than .given lemma [ lemma : var ] , we only need to find the condition for the variances of , , , , and to scale no larger than . using the results on the variances of sinr components ,the variances of the terms can be obtained as where the scaling behaviour at the end of each line is obtained from the definitions of the scaling exponents in ( [ exponents - def ] ) and considering the constraints in ( [ cond - scaling ] ) .then , we can see that the condition for the scaling order of each term to be no higher than is that both following constrains are satisfied . 
combining ( [ cond - scaling ] ) and ( [ eq : cond_2 ] ) , we get the sufficient condition for the sinr to be deterministic in ( [ eq : suff - con ] ) .linearly increasing sinr means that the sinr scaling exponent is 1 , i.e. , .thus the sinr can be formulated as from the sinr scaling law in ( [ snr - scaling ] ) , we can see that the sufficient and necessary condition for is ( note that $ ] ) . with the parameter values, we can calculate that the scvs of and scales of .therefore , they are asymptotically deterministic and can be approximated with their mean values .on the other hand , the scvs of , , , and are constant .we analyze their behaviour next . also , and have the same distribution . as we mainly consider the non - trivial case that , we have , especially when the csi quality is high .besides , the mean of scales as , and its variance scales as .thus the variance of this term is also much smaller than .therefore , dominates the random behaviour of the sinr and other terms can be approximated with their mean values .thus the sinr approximation in ( [ eq : sinr_app_high ] ) is obtained , where only dominant terms of are kept .when , then , using clt , has an exponential distribution with parameter .then , the pdf can be approximated as , which is the same as ( [ cf - pdf - e ] ) for .now , we work on the more complicated case of .firstly , with the help of clt , as , is approximately distributed as , and is approximately distributed as .we can further show that the covariances between , , and are zero , thus they are uncorrelated . for tractable analysis , we assume independence as they are gaussian distributed .now we conclude that has a gamma distribution with shape parameter and scale parameter , which is also defined as . using clt , the covariance between and ( ) can be derived as where the proof is omitted due to save space .the correlation coefficient between the two is subsequently it equals based on the definition in ( [ eq : miu ] ) .thus is a summation of correlated random variables following the same gamma distribution . from corollary 1 of ,the pdf of is where are the ordered eigenvalues of the matrix , whose diagonal entries are and off - diagonal entries are , and s are defined iteratively as \delta_{j+1-m}.\label{deltai+1}\end{aligned}\ ] ] as is a circulant matrix whose off - diagonal entries are the same , its eigenvalues can be calculated as then we can show that substituting and into , we can get pdf of as in ( [ equ4 ] ) in proposition [ pro : pdf ] .notice that by straightforward calculations , we can obtain the closed - form pdf of in ( [ cf - pdf - e ] ) .f. rusek , d. persson , b. k. lau , e. g. larsson , t. l. marzetta , o. edfors , and f. tufvesson , `` scaling up mimo : opportunities and challenges with very large arrays , '' _ ieee signal process . mag ._ , vol . 30 , no .1 , pp . 4060 , 2013 .q. zhang , s. jin , k. k. wong , h. zhu and m. matthaiou , `` power scaling of uplink massive mimo systems with arbitrary - rank channel means , '' _ ieee j. sel .topics in sig . process ._ , vol . 8 , no .966 - 981 , oct . 2014 .q. cao , h. v. zhao , and y. jing , `` power allocation and pricing in multiuser relay networks using stackelberg and bargaining games , '' _ ieee trans . veh .61 , no . 7 , pp . 31773190 , sept .2012 .g. zhu , c. zhong , h. a. suraweera , z. zhang , c. yuen , and r. 
yin , `` ergodic capacity comparison of different relay precoding schemes in dual - hop af systems with co - channel interference , '' _ ieee trans .wireless commun .2314 - 2328 , july 2014 .g. zhu , c. zhong , h. a. suraweera , z. zhang , and c. yuen , `` outage probability of dual - hop multiple antenna af systems with linear processing in the presence of co - channel interference , '' _ ieee trans .wireless commun .4 , pp . 2308 - 2321 , april 2014 .h. a. suraweera , h. q. ngo , t. q. duong , c. yuen , and e. g. larsson , `` multi - pair amplify - and - forward relaying with very large antenna arrays , '' in _ proc ._ , budapest , june 2013 , pp . 4635 - 4640. x. jia , p. deng , l. yang and h. zhu , `` spectrum and energy efficiencies for multiuser pairs massive mimo systems with full - duplex amplify - and - forward relay , '' _ ieee access _1907 - 1918 , 2015 .t. v. t. le and y. h. kim , `` power and spectral efficiency of multi - pair massive antenna relaying systems with zero - forcing relay beamforming , '' _ ieee commun . letters _ , vol .243 - 246 , feb .y. wang , s. li , c. li , y. huang and l. yang , `` ergodic rate analysis for massive mimo relay systems with multi - pair users under imperfect csi , '' in _ 2015 ieee global conf .on sig . and info. process .( globalsip ) _ , orlando , fl , 2015 , pp .33 - 37 .h. wang , j. ding , j. yang , x. gao and z. ding , `` spectral and energy efficiency for multi - pair massive mimo two - way relaying networks with imperfect csi , '' in _ proc .2015 ieee 82nd veh .( vtc fall ) _ , boston , ma , sept .2015 , pp . 1 - 6 .m. s. alouini , a. abdi , and m. kaveh , `` sum of gamma variates and performance of wireless communication systems over nakagami - fading channels , '' ieee trans .50 , pp .1471 - 1480 , nov . 2001 .b. m. hochwald , t. l. marzetta and v. tarokh , `` multiple - antenna channel hardening and its implications for rate feedback and scheduling , '' _ ieee trans .info . theory _ ,1893 - 1909 , sept . 2004
|
this work provides a comprehensive scaling law and performance analysis for multi - user massive mimo relay networks , where the relay is equipped with massive antennas and uses mrc / mrt for low - complexity processing . csi error is considered . first , a sum - rate lower bound is derived which manifests the effect of system parameters including the numbers of relay antennas and users , the csi quality , and the transmit powers of the sources and the relay . via a general scaling model on the system parameters with respect to the relay antenna number , the asymptotic scaling law of the sinr as a function of the parameter scalings is obtained , which shows quantitatively the tradeoff between the network parameters and their effect on the network performance . in addition , a sufficient condition on the parameter scalings for the sinr to be asymptotically deterministic is given , which covers existing studies on such analysis as special cases . then , the scenario where the sinr increases linearly with the relay antenna number is studied . the sufficient and necessary condition on the parameter scaling for this scenario is proved . it is shown that in this case , the interference power is not asymptotically deterministic , then its distribution is derived , based on which the outage probability and average bit error rate of the relay network are analysed . * index terms : * massive mimo , relay networks , mrc / mrt , scaling law , deterministic equivalence analysis , performance analysis , outage probability , bit error rate .
|
complex dynamical systems can be adequately represented by networks with a diversity of structural and dynamical characteristics , , and .often such networks appear to have multiscale structure with subgraphs of different sizes and topological consistency .some well known examples include gene modules on genetic networks , social community structures , topological clusters or dynamical aggregation on the internet , to mention only a few .it has been understood that in the evolving networks some functional units may have emerged as modules or communities , that can be topologically recognized by better or tighter connections .finding such substructures is therefore of great importance primarily for understanding network s evolution and function . inrecent years great attention has been devoted to the problem of community structure in social and other networks , where the community is topologically defined as a subgraph of nodes with better connection among the members compared with the connections between the subgraphs , and .variety of algorithms have been developed and tested , a comparative analysis of many such algorithms can be found in . mostly such algorithmsare based on the theorem of maximal - flow minimal - cut , where naturally , maximum topological flow falls on the links between the communities .recently a new approach was proposed based on the _ maximum - likelihood _ method , . in maximization likelihood method an assumed _ mixture model _is fit to a given data set . assuming that the network nodes can be split into groups , where group memberships are unknown , then the expectation - maximization algorithm is used in order to find maximum of the likelihood that suites the model . as a resulta set of probabilities that a node belongs to a certain group are obtained .the probabilities corresponding to global maximum of the likelihood are expected to give the best split of the network into given number of groups . in complex dynamical networks ,however , other types of substructures may occur , that are not necessarily related to `` better connectivity '' measure . generally , the substructures may be differentiable with respect certain functional ( dynamical ) constraints , such as limited path length ( or cost ) , weighted subgraphs , or subgraphs that are synchronizable at a given time scale .search for such types of substructures may require new algorithms adapted to the respective dynamical constraints .in this work we adapt the maximum - likelihood methods to study subgraphs with weighted links in real and computer - generated networks .we first introduce a new model to generate a network - of - networks with a controlled subgraph structure and implement the algorithm , , to test its limits and ability to find the _ a priori _ known substructures .we then generalize the algorithm to incorporate the weights on the links and apply it to find the weighted subgraphs on a almost fully connected random graph with known weighted subgraphs and on a real - data network of yeast gene - expression correlations .we introduce an algorithm for network growth with a controlled modularity . as a basis ,we use the model with preferential attachment and preferential rewiring , , which captures the statistical features of the world wide web . 
two parameters and control the emergent structure of the webgraph when the average number of links per node is fixed .for instance , for : when the emergent structure is a scale - free clustered and correlated network , in particular the case corresponds to the properties measured in the www ; when a scale - free tree structure emerges with the exponents depending on the parameter . herewe generalize the model in a nontrivial manner to permit development of distinct subnetworks or modules .the number of different groups of nodes is controlled by additional parameter .each subgroup evolves according to the rules of webgraph . at each time step add a new node and new links . with probability a new group is started .the added node is assigned current group index .( first node belong to the first group . )the group index plays a crucial role in linking the node to the rest of the network .the links are created by attaching the added node inside the group , with probability , or else rewiring within the entire network .the target node _ k _ is selected preferentially with respect to the current situation in the group , which determines the linking probability .similarly , the node which rewires the link _ n _ is selected according to its current number of outgoing links , which determines the probability : where and are in- or out - degrees of respective nodes at time step , is number of links in whole network , while is number of link between nodes in a group of node .it is assumed that = . suggested rules of linking insure existence of modules in the network .each group has a central node , hub , in terms of in - degree connectivity , and a set of nodes along which it is connected with the other groups .the number of groups in the network depends on number of nodes , _n _ , and the parameter _ _ as .some emergent modular structures are shown in fig .[ fig - graphs - clusters ] . for the purpose of this workwe only mention that the networks grown using the above rules are scale - free with both in - coming and out - going links distributed as ( =``in '' or `` out '' ) : the scaling exponents and vary with the parameters and _ . in fig . [ fig - pq ]we show cumulative distribution of in- and out - links in the case n=25000 nodes , m=5 , and number of groups .the slopes are and . [ cols="^,^ " , ]we have extended the maximum - likelihood - method of community analysis to incorporate multigraphs ( wmlm ) and analysed several types of networks with mesoscopic inhomogeneity .our results show that the extended wmlm can be efficiently applied to search for variety of subgraphs from a clear topological inhomogeneity with network - of - networks structure , on one end , to hidden subgraphs of node with the same strength on the other .+ + * acknowledgments : * research supported in part by national projects p1 - 0044 ( slovenia ) and oi141035 ( serbia ) , bilateral project bi - rs/08 - 09 - 047 and cost - stsm - p10 - 02987 mission .the numerical results were obtained on the aegis e - infrastructure , supported in part by eu fp6 projects egee - ii , see - grid-2 , and cx - cmcs .tadi , b. , rodgers , g.j . , thurner , s. : transport on complex networks : flow , jamming & optimization , int .j. bifurcation and chaos , * 17 * , 2363 - 2385 ( 2007 ) .ravasz , e. , somera , a. , mongru , d.a . ,oltvai , z.n . ,barabsi , a.l . : hierarchical organization of modularity in metabolic networks , science * 297 * , 1551 ( 2002 ) .tadi , b. 
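for readers who wish to reproduce such test networks , a minimal sketch of the growth rule is given below . it is not the exact model : the normalisations of the attachment and rewiring probabilities quoted above are replaced by simple ' degree + 1 ' preferential kernels , the rewired link is assumed to run between two preferentially chosen existing nodes , and the parameter names ( p_new_group for the group - creation probability , alpha for in - group attachment , m for links per node ) are ours .

```python
import random

def grow_modular_network(N=1000, M=5, p_new_group=0.01, alpha=0.7, seed=0):
    """Sketch of the modular webgraph growth rule: at each step a new node
    joins the current group (or starts a new one with probability
    p_new_group) and M directed links are created, inside the group with
    probability alpha and by preferential rewiring over the whole network
    otherwise.  Simple 'degree + 1' kernels stand in for the exact
    attachment probabilities of the original model."""
    rng = random.Random(seed)
    edges = []                    # directed links (source, target)
    group_of = [0]                # node 0 starts group 0
    groups = {0: [0]}
    in_deg, out_deg = [0], [0]
    for new in range(1, N):
        if rng.random() < p_new_group:
            g = len(groups)       # start a new group
            groups[g] = []
        else:
            g = group_of[-1]      # stay in the current group
        group_of.append(g)
        groups[g].append(new)
        in_deg.append(0)
        out_deg.append(0)
        for _ in range(M):
            members = [v for v in groups[g] if v != new]
            if members and rng.random() < alpha:
                # attach inside the group, preferentially by in-degree
                k = rng.choices(members, weights=[in_deg[v] + 1 for v in members])[0]
                edges.append((new, k))
                out_deg[new] += 1
                in_deg[k] += 1
            else:
                # rewiring: source chosen by out-degree, target by in-degree,
                # both over all existing nodes
                old = list(range(new))
                n = rng.choices(old, weights=[out_deg[v] + 1 for v in old])[0]
                k = rng.choices(old, weights=[in_deg[v] + 1 for v in old])[0]
                edges.append((n, k))
                out_deg[n] += 1
                in_deg[k] += 1
    return edges, group_of

# example: edges, labels = grow_modular_network(N=2000, M=5, p_new_group=0.005)
```

the group labels returned with the edge list make it easy to check how well a community - detection method recovers the planted modules .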
: dynamics of directed graphs : the world - wide web .physica a * 293 * ( 2001 ) 273 - 284 cho , r.j .al . _ : a genome - wide transcriptional analysis of the mitotic cell cycle , molecular cell * 2 * 65 - 73 ( 1998 ) ; _ http://arep.med.harvard.edu/cgi-bin/expressdbyeas _ ivkovic , j. , tadi , b. , wick , n , thurner , s. : statistical indicators of collective behavior and functional clusters in gene expression network of yeast , european physical journal b * 50 * , 255 ( 2006 ) .
|
real - data networks often appear to have strong modularity , or network - of - networks structure , in which subgraphs of various size and consistency occur . finding the respective subgraph structure is of great importance , in particular for understanding the dynamics on these networks . here we study modular networks using a generalized method of maximum likelihood . we first demonstrate how the method works on computer - generated networks with subgraphs of controlled connection strengths and clustering . we then implement the algorithm based on the weights of links and show its efficiency in finding weighted subgraphs on a fully connected graph and on a real - data network of yeast . keywords : networks , subgraphs , maximum likelihood method . published : _ lecture notes in computer science _ , vol . 5102 , pages 551 , 2008
|
advanced materials utilizing carbon nanotubes ( cnts ) are emerging . recently, we reported a macroscopic fiber composed of tightly packed and well - aligned cnts , which combines specific strength , stiffness , and thermal conductivity of carbon fibers with the specific electrical conductivity of metals ( `` specific '' : normalized by the linear mass density). these macroscopic cnt fibers hold the promise to replace traditional metals for many applications including making stronger and lighter power transmission cables or electronic interconnections, as well as durable field emission or thermionic emission sources. these applications require the fiber to operate under high current , which leads to natural questions about the fiber s ability to carry such a current without being damaged .traditionally , current carrying capacity ( ccc ) , or often called ampacity , is used to quantify this ability .ccc is defined as the maximum amount of current a cable ( including any insulating layer ) can carry before sustaining immediate or progressive damages ; sometimes , it is more convenient to use the current density , especially when making comparisons among different types of cables .also , for weight - critical applications , for instance , in the aerospace industry , specific ccc ( ccc normalized by the linear mass density ) is usually considered .owing to the strong c - c bond , the ccc of individual cnts can exceed 10 a / m without damage by electromigration, which is 2 to 3 orders of magnitude greater than the electromigration limit of copper. however , such superb ccc ( limited by intrinsic optical phonon emission ) becomes unapproachable when many cnts are packed together to form a macroscopic cnt fiber or bundle .the unavoidable inter - tube transport significantly increases the resistivity , and the resultant joule heating at high current densities raises the temperature , inducing damages and ultimately breaking the fiber .thus , the competition between current - induced joule heating and cooling by thermal environments becomes the determinant of the ccc , as in metal cables ; this competition scales with the volume - to - surface ratio , which increases with increasing cable diameter , making joule heating progressively more problematic for larger diameter cables .so far , the most widely studied case for cnt networks is their immediate breakdown ( usually in seconds or less ) when carrying high current .the damage usually initiates around the hottest spot , particularly if associated with defects , kinks , or impurities. the corresponding current limit can be defined as the failure current density ( fcd ) , similar to the fuse current limit for metal cables . on the other hand , to be used as a power cable , cnt wires must operate below a regulations - specified temperature called the `` operating temperature '' ( ) to avoid damaging its own insulation layer or other nearby accessories .the corresponding current limit is defined as the continuous current rating ( ccr). since can not be high enough to cause any damages , ccr is always much lower than the failure current .in contrast to metal power cables , whose ccr is well studied and regulated, so far no systematic study of these quantities for cnt wires is available . 
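to make the competition between joule heating and cooling concrete , consider a back - of - the - envelope steady - state balance for a round wire of diameter d : the heating per unit length , j^2 * rho * a , equals the surface cooling , h * p * dT ( with a the cross - section and p the perimeter ) , which gives j = sqrt(4 * h * dT / ( rho * d ) ) . the sketch below only illustrates the resulting inverse - square - root dependence on the diameter ; the heat - transfer coefficient and the allowed temperature rise used as inputs are placeholders , not measured values .

```python
import math

def temperature_limited_j(rho, h, d, dT):
    """Rough steady-state current-density limit for a round wire.
    Balance per unit length: J**2 * rho * (pi * d**2 / 4) = h * (pi * d) * dT,
    hence J = sqrt(4 * h * dT / (rho * d)).
    rho: resistivity (ohm m), h: surface heat-transfer coefficient
    (W m^-2 K^-1), d: diameter (m), dT: allowed temperature rise (K)."""
    return math.sqrt(4.0 * h * dT / (rho * d))

# placeholder inputs only: a ~20 um wire with rho ~ 4e-7 ohm m,
# h ~ 1e3 W m^-2 K^-1 and dT ~ 300 K gives a few times 1e8 A/m^2
print(temperature_limited_j(4e-7, 1.0e3, 20e-6, 300.0))
# doubling the diameter lowers the limit by a factor of sqrt(2)
print(temperature_limited_j(4e-7, 1.0e3, 40e-6, 300.0))
```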
here , we determined both fcd and ccr for cnt fibers under various test conditions .we first measured the fcds of those fibers .we monitored how the resistivity of the fiber under test evolved as a function of current density and found four distinct regimes .the measured fcds varied from 10 to 10 a / m , depending on the dimensions of those fibers and test conditions . in particular , the measured fcd in vacuum was much lower than in gases due to a lack of heat exchange by gases , while the measured fcd in air was smaller than in the other tested gases because of oxidation .we then analyzed the heat exchange between cnt fibers and each type of gas and extracted the thermal conductance ( ) between them .we proved that the heat exchange is governed by natural convection .in addition , we showed that due to tight packing and good alignment , gas molecules do not penetrate the body of the cnt fibers .when is known , in principle , any thermally determined ccc can be deduced if the corresponding temperature limit is given , and vice versa . as an example , we determined ccr for infinitely long cnt fibers with an operating temperature of 363 k. based on these measurements and heat - exchange analysis , we were able to make a comprehensive comparison of ccc with other cables .we showed that the fcd of our fibers is higher than previously reported carbon fibers and cnt fibers .we then compared these two parameters with a pure copper wire .both the fcd and ccr of copper were still higher than cnt fibers mainly due to copper s lower resistivity .however , when normalized by the mass density , both specific fc ( sfc ) and ccr of copper wire were lower than those of these lightweight cnt fibers .considering the fact that commercial transmission cables usually require extra reinforcement by a steel core because of copper s heavier weight and lower tensile strength , the combination of higher specific ccc and stronger mechanical strength of cnt fibers makes them promising candidates for transmission cables .cnt fibers were produced by wet spinning. purified cnts were dissolved in chlorosulfonic acid at a concentration of 3 wt and filtered to form a spinnable liquid crystal dope. the dope was then extruded through a spinneret ( 65130 m in diameter for different diameter fibers ) into a coagulant ( acetone or water ) to remove the acid .the forming filament was collected onto a winding drum with a linear velocity higher than the extrusion velocity to enhance the alignment .the produced fibers were further washed in water and dried in an oven at 115 .such fiber is called an acid - doped fiber .tga shows that there still remains about 7 wt of acid residuals in the acid - doped fiber. on the other hand , if the produced fibers were first dried in an oven at 115 and then washed in water , there would be even more acid residuals in the fiber .such fiber is called a heavily acid - doped fiber . the 99.99% pure copper wire with 0.001inch diameter was purchased from espicorp , inc . a copper substrate with a wide river bed and two narrow river bankswas used to hang the cable ( either a cnt fiber or copper wire ) as shown in fig .the depth of the bed was mm and the width mm .the cable was bent into a z - turn with its mid - section suspended over the river bed .each arm was placed on a thin ( 100 m ) electrically - insulating quartz slide , which was itself placed on the river bank .several silver epoxy electrodes ( .5 mm wide ) were placed on the fiber for resistivity measurements . 
in particular , two electrodes ( denoted by a and b in fig .[ fig4 ] ) were placed at the ends of the suspended portion of the cable .electrode c was placed 2 mm away from electrode a and b , and bc serves as a local probe to monitor the resistivity change at the end of the suspended fiber .the whole device was assembled on a vacuum - sealed heating - cooling stage in which the temperature of the device and the gas environment could be adjusted .the experimental procedure is shown in the flow chart in fig .[ fig5](a ) .first , the sample is uniformly heated up and cooled down by the heating - cooling stage , while the resistivity of of the ab section is measured as a function of temperature [ see fig .[ fig1](a ) ] .then a current sweep is carried out in high vacuum ( 10 torr ) , in one of four dry gases ( nitrogen , helium , argon , and air ) at atmospheric pressure , or on an intrinsic silicon substrate in air . in this step ,the current is gradually swept up to reach the desired current density , held for 30 seconds , and then gradually swept down to 1 ma [ see [ fig5](a ) ] .the experiment is halted if the fiber breaks .if the electrical properties of the fiber are unchanged by current sweeping , the i - v curves from all sweeping cycles show no hysteresis ( fig .[ fig5](a ) , red curve ) , but if they are changed , the i - v curve will initially follow the sweeping - down curve of the previous circle and then start to deviate ( fig .[ fig5](b ) , blue curve ) .since the fiber might not be homogeneously joule heated , the extracted resistivity is an average value given by where , , and are the current density , voltage , and length of the cable with = ab and bc representing the measured section ( see fig .[ fig4 ] ) .figure [ fig1](a ) shows the resistivity versus temperature for the three highly - conductive cnt fibers we studied , together with that for the reference copper wire with a diameter of 25.4 m .the fibers we tested were electrically p - doped by the presence of sulfur and chlorine inside the fiber , which is a residuum from the chlorosulfonic acid solvent used in the fabrication of the fibers .the heavily acid - doped fiber contained more acid , making them less resistive .the room temperature resistivity for the heavily acid - doped fiber was about 2.57 10 m while that of the 10.5 m diameter ( 20.5 m diameter ) acid - doped fiber was 4.12 ( 3.98 ) 10 m .on the other hand , the mass density of the heavily - doped sample was 1.5 10 kg / m , as compared to 1.2 10 kg / m of the other acid - doped fibers .another issue with fiber doping is its stability .we notice that in several cases , excessive acid doping makes the room temperature resistivity of several fibers as low as 1.7 10 m , but at the same time , after annealing at 373 k , this value quickly returned back to about 2.5 10 m .thus , the temperature range in which a fiber can be operated without any irreversible property change must be considered .the room temperature resistivity of copper is approximately one order of magnitude less than that of fibers .its accepted value is 1.725 10 m while the measured value here was 1.74 10 m . in all cases ,the resistivity ( ) linearly increased with temperature ( ) , i.e. 
, where is the ambient temperature , is the temperature measured from , and is a positive constant .this equation provides us with a convenient means for monitoring the temperature rise as a result of current - induced heating .figure [ fig1](b ) shows the resistivity as a function of current density for the 20- - diameter acid - doped fiber in vacuum for = 303 k. the resistivity is normalized to the initial value , = 3.98 10 m , before the fiber is heated . as the current density increases, the temperature increases through joule heating , which in turn increases the resistivity through fig .[ eq_linear ] . starting from the lowest red curve , after a number of current - sweeping cycles , the resistivity versus current density curve undergoes irreversible changes with uneven paces . to better visualize this process , fig .[ fig1](c ) plots the highest applied current density against the resistivity measured at a very low current density after each sweeping cycle [ along the blue dashed line in fig .[ fig1](b ) ] .interestingly , fig .[ fig1](c ) reveals four distinct regimes as the current density is gradually increased from zero toward the ultimate value at which the fiber eventually breaks . in regime 1 , the i - v curve is reversible , and thus , the resistivity does not change after each sweeping cycle . in regime 2 ,a drastic irreversible process takes place , and the resistivity permanently increases by about 4 times . in regime 3 , the _ i - v curve becomes reversible again _ , showing stable properties of a new , current - annealed fiber ; i.e. , the acid , which is an effective dopant , is removed by heating .finally , in regime 4 , the resistivity starts increasing very rapidly until the fiber breaks .given sufficient time , any current - density value in regime 4 ultimately leads to fiber breaking .we define the current - density value that corresponds to the boundary between regimes 3 and 4 as the failure current density , or fcd , of the fiber , which is different from the maximum current density before breaking ( mcdbb ) .see fig .[ fig1](c ) for the difference between fcd and mcdbb .the value of mcdbb is ill - defined and can have a large uncertainty , depending on such experimental details as the sweeping speed , step size , sweeping method ( current or voltage ) , and sweeping pattern .one the other hand , fcd is determined by the characteristic temperature limits beyond which the quality of the fiber is altered .these temperature limits can be considered intrinsic because they reflect such fiber properties as the defect density , impurities , alignment , etc .therefore , fcd is a better quantity for characterizing the ccc of wires .unfortunately , all previously reported ccc for aligned buckypapers, carbon fibers, and cnt fibers are mcdbb values .the fcd value for the particular case shown in fig .[ fig1](c ) is 1.03 10 a / m , while the mcdbb is .4 10 a / m . the data in fig .[ fig1](c ) provides significant insight into the mechanism by which the fiber leads to a catastrophic failure at high current densities . in regime 2 ,a drastic irreversible process occurs , and the resistivity becomes about four times larger than the original value . 
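As a concrete illustration of how the linear resistivity-temperature relation and the regime boundaries above can be used, the following minimal sketch (our own, with placeholder numbers rather than measured data) inverts the linear relation to estimate the average fiber temperature from a measured resistivity, and picks out an FCD-like boundary from a series of sweep cycles shaped like fig. [fig1](c). The temperature coefficient `alpha`, the tolerance, and the example arrays are illustrative assumptions, not values from this work, and the routine is not the analysis code actually used here.

```python
def temperature_from_resistivity(rho, rho0, alpha, T0):
    """Invert rho(T) = rho0 * (1 + alpha * (T - T0)) for the average fiber temperature."""
    return T0 + (rho / rho0 - 1.0) / alpha


def estimate_fcd(peak_j, rho_low, tol=0.02):
    """Rough FCD estimate from a current-sweep series.

    peak_j[i]  -- highest current density applied in sweep cycle i
    rho_low[i] -- low-current resistivity measured after cycle i
    The routine locates the large irreversible jump (regime 2) and then returns
    the last peak current density of the subsequent stable plateau (regime 3),
    i.e. the boundary to regime 4.
    """
    rel = [(rho_low[i + 1] - rho_low[i]) / rho_low[i] for i in range(len(rho_low) - 1)]
    jump = max(range(len(rel)), key=lambda i: rel[i])   # end of the regime-2 jump
    for i in range(jump + 1, len(rel)):
        if rel[i] > tol:                                # resistivity rising again
            return peak_j[i]
    return peak_j[-1]                                   # regime 4 not reached yet


# Placeholder data shaped like fig. [fig1](c): stable, ~4x jump, stable plateau, runaway.
peak_j  = [1e8, 2e8, 3e8, 4e8, 5e8, 6e8, 7e8, 8e8]
rho_low = [1.0, 1.0, 4.0, 4.0, 4.0, 4.0, 4.4, 5.5]      # normalized resistivity
print(temperature_from_resistivity(rho=4.4e-7, rho0=4.0e-7, alpha=1e-3, T0=303.0))
print(estimate_fcd(peak_j, rho_low))                    # -> 6e8 for this example series
```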
based on our modeling ( see appendix for detail ) , the maximum temperature of the fiber at the boundary between regimes 1 and 2 is about 470 k , which is higher than the boiling point of chlorosulfonic acid ( 423 k ) .therefore , we attribute the increase of resistivity in regime 2 to an irreversible reduction of charge carriers through removal of chlorosulfonic acid , a p - type dopant .even though regime 2 is fairly narrow , the increase of resistivity significantly increases joule heating . as a result , the finishing temperature ( at the boundary between regimes 2 and 3 ) is estimated to be k. notably , the quality of the fiber is not degraded during the heating process in regime 2 , as confirmed by the maintained g / d ratio in raman spectra ( shown in appendix ) .the fact that the i - v curve is reversible again in regime 3 indicates that no chemical changes happen in this regime and the fiber is stable even though it is heated by high currents .thus , we are essentially dealing with an annealed fiber in this regime , whose properties are different from those of the original , acid - doped fiber in regime 1 .finally , as the average temperature exceeds k , the fiber enters regime 4 , and the resistivity increases very rapidly , until it breaks .a close examination of curves in regime 4 provides further insight into the final moments when the fiber is breaking apart .note that a typical curve in regime 4 , e.g. , the red dashed curve in fig .[ fig1](b ) , shows a qualitatively different trend than those in the other three regimes ; that is , the _resistivity initially decreases and then increases with increasing current density_. this unusual trend can be explained only if we consider the general temperature dependence of resistivity of these fibers in a wider temperature range .namely , with increasing temperature ( from , e.g. , 4.2 k ) , the resistivity initially decreases due to thermally - driven hopping transport and then increases due to intra - tube phonon - carrier scattering. the crossover temperature ( ) , where the resistivity is minimal , is lower than the ambient temperature in the fibers we study here .this is why fig .[ eq_linear ] holds in the 300 - 360 k range .however , when the fiber starts breaking , the curve itself starts changing irreversibly and dynamically .specifically , as the fiber starts structurally deteriorating , the hopping transport contribution becomes more and more important in determining the resistivity , which pushes higher and higher during the breaking process .the fact that we see an initial decrease in resistivity in regime 4 ( e.g. , the green curve ) is evidence that has already become higher than .this is a self - intensified process because the initial damage forces a higher current through the remaining conductive paths , accelerating the breaking process .the inset of fig .[ fig1](c ) shows that the type of gas surrounding the fiber critically affects the boundaries of different current regimes . in an argon or nitrogen gas environment , our cnt fibers exhibit qualitatively the same behavior as in vacuum , but the boundaries of regimesare shifted to much higher values .this is understandable because the gas contributes convective cooling , whereas in vacuum black - body radiation is essentially the only thermal path , except through the end contacts .on the other hand , fibers break more easily in air , usually breaking already in regime 2 ; i.e. 
, the fcd value corresponds to the boundary between regimes 1 and 2 .we attribute this reduced fcd in air to the oxidation of carbon nanotubes , which can happen at temperatures between 773 and 873 k. the fcd values measured in different environments are summarized in table [ table_fcd ] ..failure current density ( fcd ) , maximum current density before breaking ( mcdbb ) , and specific failure current ( sfc ) values determined for an acid - doped carbon nanotube fiber with a diameter of 20.5 m , through measurements in vacuum , argon , nitrogen , and air .the values in parentheses are the corresponding values estimated for copper ( see appendix for details on the estimations ) . [ cols="<,^,^,^,^",options="header " , ] both the fcd and mcdbb depend on the dimensions of the cable as well as the surrounding thermal media . in table 1 of ref . , the values of the breakdown current density ( the same as the mcdbb in this paper ) for several suspended buckypapers in air and vacuum are listed .the largest value corresponds to the `` a - bp '' sample , which is 20 m thick and 20 cm long ; these dimensions are comparable to those of our fibers here ( 20 m in diameter and 30 cm in length ) .the mcdbb for this sample is 1.1 10 a / m in air and 3.1 10 a / m in vacuum , both of which are much smaller than the values for our acid - doped fiber ( 2.11 10 a / m in air and 1.36 10 a / m in vacuum ) . in table 1 of ref ., the maximum current values of several cnt fibers as well as a carbon fiber , all laid on substrates , are listed .the lengths of those samples were only about 500 m , and therefore , heat dissipation is not only through the substrate but also through the electrodes ( heat dissipation through the air can be ignored ) .since it is very hard to make identical solid - to - solid interfaces between a fiber and those two media in each test , the measured mcdbb values would contain large test - to - test fluctuations .nevertheless , we tested several acid - doped fibers with 10 m diameter and mm length on substrates ; the maximum current varied from 100 to 125 ma , corresponding to 1.15 10 a / m to 1.44 10 a / m in mcdbb , more than 2 times the value of the fiber with either 9.1 or 13.3 m diameter in ref . .it is also larger than the mcdbb of a pan carbon fiber with 6.1 m diameter ( 0.82 10 a / m ) in ref . .it even exceeds the largest mcdbb ( 1.03 10 a / m ) in ref ., which was for the smallest fibril with 5.6 m diameter .reducing the diameter of the fiber is expected to improve the fcd and mcdbb values because of the enlarged surface - to - volume ratio .namely , the amount of joule heating is , which is proportional to the volume , while heat dissipation is , which is proportional to the surface area .therefore , the maximum current density , which is determined by the balance between the two , should be proportional to ( assuming that the other parameters do not vary with the diameter ) . based on this expectation , we can project a factor of 1.35 improvement in mcdbb when the diameter of the acid - doped fiber decreases from 10.5 to 5.6 m .similarly , we can estimate the mcdbb of acid - doped fibers with diameters ranging from 10.5 to 4.2 m to compare with the fiber reported in the supplementary information of ref .( although the length of the fiber was not specified ) . the projected mcdbb varies from 1.62 10 to 2.25 10 a / m , which are better than the value of 1.62 10 a / m reported in ref . 
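The diameter scaling invoked above reduces to a one-line calculation. A minimal sketch, assuming only that the maximum current density scales as the inverse square root of the diameter and using an arbitrary reference value for the projection:

```python
def project_current_density(j_ref, d_ref, d_new):
    """Project a maximum current density from diameter d_ref to d_new,
    assuming J_max ~ d**(-1/2) with all other parameters unchanged."""
    return j_ref * (d_ref / d_new) ** 0.5


# Improvement factor for a diameter reduction from 10.5 um to 5.6 um
# (compare with the ~1.35 factor quoted in the text):
print((10.5 / 5.6) ** 0.5)                      # ~1.37
# Illustrative projection from an assumed reference measurement:
print(project_current_density(j_ref=1.2e9, d_ref=10.5e-6, d_new=4.2e-6))
```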
.to demonstrate that the quality of the fiber is not degraded during the irreversible current - heating process in regime 2 , we performed raman spectroscopy before and after going through the heating process .figure [ figs3 ] shows that the g / d ratio is very well maintained at a small value ( 4 - 5 10 ) .
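The continuous current rating discussed in the main text follows from the steady-state balance between Joule heating and heat exchange with the surrounding gas. The sketch below solves that balance per unit length of a long fiber whose resistivity follows the linear temperature relation above; the thermal conductance, resistivity, temperature coefficient, and diameter in the example are placeholder values for illustration, not the measured parameters of this work.

```python
import math


def ccr_from_heat_balance(g_per_len, rho0, alpha, diameter, T0, T_limit):
    """Continuous current rating from the per-unit-length balance
       I**2 * rho(T_limit) / A = g_per_len * (T_limit - T0).

    g_per_len -- thermal conductance to the surroundings per unit length [W/(m K)]
    rho0      -- resistivity at ambient temperature T0 [ohm m]
    alpha     -- linear temperature coefficient of the resistivity [1/K]
    diameter  -- fiber diameter [m]
    Returns (current in A, current density in A/m**2).
    """
    area = math.pi * (diameter / 2.0) ** 2
    rho_limit = rho0 * (1.0 + alpha * (T_limit - T0))
    current = math.sqrt(g_per_len * (T_limit - T0) * area / rho_limit)
    return current, current / area


# Illustrative numbers only (operating limit of 363 K as in the text):
I, J = ccr_from_heat_balance(g_per_len=0.1, rho0=4.0e-7, alpha=1.0e-3,
                             diameter=20.5e-6, T0=303.0, T_limit=363.0)
print(f"CCR ~ {I * 1e3:.1f} mA, i.e. ~ {J:.2e} A/m^2")
```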
|
we characterize the current - carrying capacity ( ccc ) , or ampacity , of highly - conductive , light , and strong carbon nanotube ( cnt ) fibers by measuring their failure current density ( fcd ) and continuous current rating ( ccr ) values . we show , both experimentally and theoretically , that the ccc of these fibers is determined by the balance between current - induced joule heating and heat exchange with the surroundings . the measured fcd values of the fibers range from 10 to 10 a / m and are generally higher than the previously reported values for aligned buckypapers , carbon fibers , and cnt fibers . to our knowledge , this is the first time the ccr for a cnt fiber has been reported . we demonstrate that the specific ccc values ( i.e. , normalized by the linear mass density ) of our cnt fibers are higher than those of copper .
|
the classical multi - armed bandit problem provides an elegant model to study the tradeoff between collecting rewards in the present based on the current state of knowledge ( exploitation ) versus deferring rewards to the future in favor of gaining more knowledge ( exploration ) .specifically , in this model a user has a choice of bandit - arms to play , and at each time step it must decide which arm to play .the expected reward from playing a bandit - arm depends on the state of the bandit - arm where the state represents a `` prior '' belief on the bandit - arm . each time a bandit - arm is played , this priorgets updated according to some transition matrix defined on the state space .for instance , a typical assumption on the bandit - arms is that they have -priors : the success probability of an -bandit - arm is ; in case of a success a reward of 1 is obtained and gets incremented , whereas in case of a failure no reward is obtained and gets incremented .the user wishes to maximize the total expected discounted reward over time .this simple setting effectively models many applications .a canonical example is exploring the effectiveness of different treatments in clinical trials while maximizing the benefit received by patients .the discount factor in a multi - armed bandit problem may be viewed as modulating the horizon over which the strategy explores to identify the bandit - arm with maximum expected reward , before switching to exploitation .this facet of the multi - armed bandit problem is explicitly captured by the _ budgeted learning problem _ , recently studied by guha and munagala .the input to the budgeted learning problem is the same as for the multi - armed bandit problem , except the discount factor is replaced by a horizon .the goal is to identify the bandit - arm with maximum expected reward using at most steps of exploration .the work of gives a constant factor approximation for the budgeted learning problem via a linear programming based approach that determines the allocation of exploration and exploitation budgets across the various arms .the budgeted learning problem is the main object of study in this paper .the multi - armed bandit problem admits an elegant solution : compute a score for each bandit - arm using only the current state of the bandit - arm and the discount factor , _ independent _ of all other bandit - arms in the system , and then play the bandit - arm with the highest score .this score is known as the _ gittins index _ , and many proofs are known to show this is an optimal strategy ( e.g. , see ) .the optimality of this `` index - based '' strategy implies that this problem exhibits a `` separability '' property whereby the optimal decision at each step is obtained by computations performed _ separately _ for each bandit - arm .this structural insight translates into efficient decision making algorithms .in fact , for commonly used prior update rules and discount rates , extensive collections of pre - computed gittins indices exist , enabling in principle , a simple lookup - based approach for optimal decision - making .there are multiple definitions of what it means for a problem to have an `` index '' .we will use the term index in its strongest form , i.e. , where the index of an arm depends _ only on the state of that arm_. this is also sometimes called a decomposable index ( eg . 
) .the inherent appeal and efficiency of index - based policies is the unifying theme underlying our work .we show that many interesting and non - trivial variations of the multi - armed bandit problem , including the budgeted learning problem and the finite horizon problem , can all be well - approximated by index - based policies. moreover , our approach gives decision strategies that are _ oblivious _ to parameters such as the underlying horizon or the discount factor while being constant - factor competitive to optimal strategies that are fully aware of these parameters .we will study this problem when the state space of each arm satisfies the `` martingale property '' , i.e. , if we play an arm multiple times , the sequence of expected rewards is a martingale .this is a natural assumption for multi - armed bandit and related problems , e.g. the commonly used priors satisfy this property .[ [ an - index - for - budgeted - learning - problems ] ] an index for budgeted learning problems : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our first result is that the budgeted learning problem admits an approximate index , which we call the _ ratio index_. informally speaking , given a single bandit - arm and an exploration budget of steps , the ratio index for that arm is the maximum expected exploitation reward per unit of the exploration and exploitation budget utilized .the ratio index suggests the following natural algorithm : at each step , play the arm with the highest ratio index .we show that this simple greedy algorithm gives a constant factor approximation to the budgeted learning problem .an -approximation algorithm for this problem is already known . however , the algorithm of is based on solving a _ coupled _ lp over all the arms , whereas the ratio index can be computed for each arm in isolation , much like the gittins index .the ratio index has many other interesting properties .for example : + + ( 1 ) we show that the gittins index with discount factor and the ratio index over horizon are within a constant factor of each other .this gives the following surprising result : [ thm : introgittins ] given an exploration budget , playing at each step the arm with the highest gittins index , with discount factor , yields a constant factor approximation to the budgeted learning problem .the proof relies on comparing the `` decision - trees '' of the ratio index and gittins index strategies .even in retrospect , it is not clear to us how such a result could be derived using an lp - based formulation such as the one used by guha and munagala .interestingly , the policy described in theorem [ thm : introgittins ] is known to often work well in practice . nonetheless , before the work of guha and munagala , we do not know of any provable guarantees for polynomial time algorithms in this setting . 
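To make the index-based policy concrete, here is a minimal sketch for Bernoulli arms with Beta priors. The `ratio_index` below follows one straightforward reading of the definition above (best expected exploitation reward per unit of exploration-plus-exploitation budget over single-arm policies), computed by a small Dinkelbach-style binary search over an exhaustive recursion rather than by the strongly polynomial profit-curve algorithm developed later in the paper; recomputing the index with the remaining budget at every step is our simplification, and the priors and budget in the example are arbitrary. The binary-search upper bound of 1 relies on Bernoulli rewards being at most 1.

```python
import random
from functools import lru_cache


@lru_cache(maxsize=None)
def dp_value(a, b, t, lam):
    """Best E[exploitation reward] - lam * E[budget used] for one Beta(a, b)
    Bernoulli arm with t exploration steps left, where each state may
    abandon, exploit once, or explore."""
    mean = a / (a + b)
    best = max(0.0, mean - lam)                   # abandon, or exploit now
    if t > 0:                                     # explore one step
        best = max(best, -lam + mean * dp_value(a + 1, b, t - 1, lam)
                              + (1 - mean) * dp_value(a, b + 1, t - 1, lam))
    return best


def ratio_index(a, b, budget, iters=40):
    """Largest lam for which some nontrivial single-arm policy has
    reward/cost >= lam (binary search on dp_value)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dp_value(a, b, budget, mid) > 1e-12 else (lo, mid)
    return lo


def greedy_budgeted_learning(priors, budget, rng):
    """At each step explore the arm with the highest ratio index for the
    remaining budget, then declare the arm with the best posterior mean."""
    arms = [list(p) for p in priors]
    for step in range(budget):
        remaining = budget - step
        k = max(range(len(arms)),
                key=lambda i: ratio_index(arms[i][0], arms[i][1], remaining))
        a, b = arms[k]
        if rng.random() < a / (a + b):            # simulated outcome drawn from the prior
            arms[k][0] += 1
        else:
            arms[k][1] += 1
    return max(range(len(arms)), key=lambda i: arms[i][0] / (arms[i][0] + arms[i][1]))


print(greedy_budgeted_learning([(1, 1), (2, 3), (5, 1)], budget=6, rng=random.Random(0)))
```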
and until now , we do nt know of any formal guarantees that relate the exponential discounting approach ( which yields the gittins index ) and the budgeted learning approach .+ ( 2 ) the ratio index can be computed in time which is strongly polynomial in the size of the state space ( independent of ) of each arm if the state space is acyclic , and strongly polynomial in the size of the state space and if the state space is general .our proof of this fact involves recursively analyzing the basic feasible solutions of an underlying lp for computing optimum single arm strategies and using the structure of the basic feasible solutions to prove that these strategies have a simple form .[ [ finite - horizon - and - discount - oblivious - multi - armed - bandits ] ] finite horizon and discount - oblivious multi - armed bandits : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we next study an important and natural variation of the budgeted learning problem , called the _finite horizon _ multi - armed bandit problem .we are given a finite horizon , and the goal is to maximize the expected reward collected during the horizon .thus , in contrast to the budgeted learning problem , the horizon is being used for both exploration and exploitation , and no payoffs are obtained after time .we show the following result using the ratio index : [ thm : finite_horizon ] there is an index - based policy that gives a constant factor approximation to the finite horizon multi - armed bandit problem .finally , we study the role of the discount factor in the design of an optimal strategy for the exploration - exploitation tradeoff . small variations in discount factors can alter the choice of bandit - arm played at any step , highlighting the sensitivity of the gittins index to the discount rate .we study the `` discount - oblivious '' multi - armed bandit problem where the underlying discount factor is not known , and in fact , may even vary from one time step to the next .a finite horizon problem can be viewed as a special case of this general setting where the discount factor is for the first steps and is for all subsequent steps .there is a useful relationship between the finite horizon and discount oblivious versions of the multi - armed problem : a strategy is -approximate for the discount - oblivious multi - armed bandit problem iff it is -approximate ( _ simultaneously _ ) for all finite horizons . using this connection , and building on theorem [ thm : finite_horizon ] , we show the following result : [ thm : intro1 ] there is an index - based policy that gives a constant factor approximation for the multi - armed bandit problem with respect to all possible discount factors simultaneously .our proof of both of these results is based on the following easy consequence of the ratio index approach to the budgeted learning problem . for any constant ,the expected profit of the optimal -horizon strategy is an -fraction of the expected profit of an optimal -horizon strategy . using this result, we design an algorithm that alternates between budgeted exploration and exploitation , using geometrically increasing horizons ; each increasing horizon competing against a lower discount rate on future rewards .it is worth noting that this result can also be shown using the lp - based proof of guha and munagala . 
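A minimal sketch of the phase-based construction just described is given below. It assumes externally supplied `explore_step` and `exploit_step` callbacks (for instance, the greedy ratio-index rule and playing the current best arm) and splits each geometrically growing phase evenly between exploration and exploitation; this budget bookkeeping inside each phase is a simplification of ours, not the precise scheme analyzed in the paper. The point it illustrates is that the strategy never consults the horizon or the discount factor.

```python
def discount_oblivious_play(total_steps, explore_step, exploit_step):
    """Alternate exploration and exploitation in phases of length 1, 2, 4, 8, ...
    The first half of each phase explores, the second half exploits; the
    strategy never needs to know the horizon or the discount factor."""
    t, phase_len = 0, 1
    while t < total_steps:
        explore_budget = max(1, phase_len // 2)
        for _ in range(min(explore_budget, total_steps - t)):
            explore_step(t)
            t += 1
        for _ in range(min(phase_len - explore_budget, total_steps - t)):
            exploit_step(t)
            t += 1
        phase_len *= 2
    return t


# Toy usage: record which steps were exploration and which were exploitation.
log = []
discount_oblivious_play(12,
                        explore_step=lambda t: log.append(("explore", t)),
                        exploit_step=lambda t: log.append(("exploit", t)))
print(log)
```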
however , the following corollary is a consequence of our index - based approach and the relation between ratio and gittins indices .[ cor : gittins ] the strategy that alternates between exploring the arm with the highest gittins index , and exploiting the arm with the highest reward , in phases of geometrically increasing length ( and discount factor during a phase of length ) provides a constant factor approximation to the multi - armed bandit problem _ simultaneously _ for all finite horizons and for all discount factors .we first show that there exists a single strategy that is an -approximation to the optimum multi - armed bandit strategy , simultaneously for all discount factors .our existence proof is effective , also resulting in an efficient algorithm for solving this problem .the proof involves analyzing finite horizon strategies , which optimize the payoff from a fixed number of steps without a discount factor .in particular , we first show that , for any constant , the expected profit of the optimum -horizon strategy is at least a constant fraction of the expected profit of the optimum -horizon strategy .our proof of this lemma involves first scaling and then rounding a recent lp proposed by guha and munagala for budgeted learning problems , which in turn builds on dean , goemans , and vondrak .we believe this result offers fundamental insight into the multi - armed bandit problem ; we find it surprising that the discount factor makes only a constant factor difference .our result also holds if the discount factor is not constant but varies from step to step .this further supports the intuition mentioned earlier : perhaps the frustration probabilities , discount factors etc are not integral to this problem , at least from an approximations point of view .optimizing our running time and the constant factors in our approximations remain important open problems . however , we believe that it is even more important to gain further insight into the list ranking problems and their relation to each other. in particular , it would be very interesting to obtain an approximate index such that if items are always displayed in decreasing order of this index , we are guaranteed a constant factor approximation regardless of the type of user ; such an index may render the question of efficiency moot .there are many sources for the canonical work on gittins indices , particularly with reference to bandits and bernoulli bandit processes .glazebrook and others have studied approximation algorithms for other extensions to multi - armed bandit problems .their approach builds upon the concept of achievable regions and general conservation laws and a related linear programming approach built by tsoucas , bertsimas , nino - mora , and others .relaxed linear programming based approaches to extensions of the multi - armed bandit problem have also been developed , e.g. for restless bandits .our work on the ratio index builds on the insights obtained from the lp relaxation based approach of guha and munagala as well as related work in model - driven optimization and stochastic packing .additionally , related lp formulations have been developed for multi - stage stochastic optimization .in the theoretical computer science community , multi - armed bandits have primarily been studied in an adversarial setting , with the goal being to minimize the regret ( see for a nice overview ) . 
a typical guarantee in these settingsis that the _ total regret _after steps grows as where is the number of alternatives , assuming the partial information model ( i.e. only the reward for the alternative that is actually played is revealed ) , which corresponds well to our setting .these results assume no prior beliefs , unlike our decision theoretic framework .however , the regret based bounds in the adversarial setting are meaningless unless .the decision theoretic framework which has a rich history ( starting perhaps with wald s work in 1947 ) is more suited to the situation where the number of exploration steps is drastically limited , as is often the case .a typical setting , for example , is one where an advertiser that can advertise on 100,000 possible phrases and is willing to pay for 100 clicks to decide which keyword attracts visitors that convert into paid customers .so a traditional regret based bound may not be very meaningful in this setting .notably , glazebrook and garbe show that if you assume the added constraint that the transition matrix for each arm is irreducible , then choosing arms according to the gittins index in this environment is within an additive of optimal . in section [ sec : ratio ]we define the budgeted learning problem and the ratio index , and prove that the ratio index is a constant factor approximation to the budgeted learning problem .section [ sec : relate ] establishes that the gittins and ratio indices are constant factor approximations of each other .we also show here that playing the arm with the largest gittins index ( with a suitable discount factor ) , gives a constant factor approximation to the budgeted learning problem .section [ sec : multi - armed ] presents index - based policies for finite horizon and discount oblivious versions of the multi - armed bandit problem . in section [ sec : compute ] , we present a strongly polynomial algorithm to compute the ratio index as well as several useful insights into its structural properties ._ we are given arms .arm has state space , with initial state . experimenting on an arm in state results in the arm entering state with known probability .the payoff of state is given as .given an experimentation budget , we are interested in finding the optimal policy , , so that ] . 
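Several of the symbols in the problem statement above were lost in typesetting, so it may help to fix a concrete representation. The sketch below encodes one arm's state space the way the paper assumes it: finitely many states, each with an expected exploitation payoff and transition probabilities for the outcome of one experiment, with the martingale property requiring every non-terminal payoff to equal the expected payoff of its successors. The class names and the small Beta(1,1) example are our own illustration, not notation from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class ArmState:
    payoff: float                                      # expected exploitation reward here
    transitions: List[Tuple[str, float]] = field(default_factory=list)  # (next state, prob)


@dataclass
class Arm:
    states: Dict[str, ArmState]
    initial: str

    def is_martingale(self, tol=1e-9):
        """Every non-terminal payoff equals the expected payoff of its successors."""
        return all(
            abs(sum(p * self.states[n].payoff for n, p in s.transitions) - s.payoff) <= tol
            for s in self.states.values() if s.transitions)


# A Beta(1,1) Bernoulli arm explored once: success -> Beta(2,1), failure -> Beta(1,2).
arm = Arm(states={"1,1": ArmState(1 / 2, [("2,1", 1 / 2), ("1,2", 1 / 2)]),
                  "2,1": ArmState(2 / 3),
                  "1,2": ArmState(1 / 3)},
          initial="1,1")
print(arm.is_martingale())      # True: 1/2 = 1/2 * 2/3 + 1/2 * 1/3
```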
using the above lemma, we can show the following : [ lem : obliviousr ] for any positive integer , the expected reward of the discount oblivious strategy in the first steps is .[ lem : obliviousg ] for any positive integer , the expected reward of the discount oblivious strategy in the first steps is .invoking lemma [ lem : equivalent - oblivious ] now gives us : [ thm : oblivious ] strategies and both give a constant factor approximation to the multi - armed bandit problem simultaneously for all discount factor sequences .we will now provide the proofs from section [ sec : multi - armed ] .lemma [ lem : fvsr ] consider the fixed horizon strategy ( for horizon ) that first solves the budgeted learning problem with budget and then exploits the winner from this budgeted learning problem for the remaining time steps .this strategy has expected pay off at least and hence , .now consider the budgeted learning strategy ( with budget ) that emulates the optimum fixed horizon strategy ( for horizon ) but only for steps , where is an integer chosen uniformly at random from the set , and then declares the arm that the optimum fixed horizon strategy was about to play in step as the winner .this budgeted learning strategy has expected payoff exactly and hence , theorem [ thm : switchr ] assume , for simplicity , that is even ; the odd case is very similar .now , }\\ & = & \omega(f^*(h ) ) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mbox{[from lemma~\ref{lem : fvsr } ] } \end{aligned}\ ] ] lemma [ lem : equivalent - oblivious ] the `` only if '' part is trivial since a fixed horizon problem can be modeled using a discount factor sequence , as explained earlier .for the `` if '' part , consider a strategy that offers expected reward at time .then the expected discounted reward , of given a discount factor can be written as .observe that by the definition of a discount factor sequence .if we simultaneously -approximate each of the finite horizon rewards we also simultaneously -approximate any non - negative linear combination , and hence the optimum expected discounted reward for any discount factor sequence .lemma [ lem : obliviousr ] if , simply plays the arm with the highest expected profit , and hence obtains profit . if , it is easy to see that and hence the lemma holds .we will now assume that .let be the largest power of 2 which is no larger than ; hence .since , we have .the strategy is guaranteed to execute the strategy sometime during the first steps ( in fact during steps ) . using lemma [ lem : restart ], we can repeat the steps in the proof of theorem [ thm : switchr ] to claim that = \omega(f^*(t,\mathbf{s_0 } ) ) = \omega(f^*(h,\mathbf{s_0}))$ ] .the proof of lemma [ lem : obliviousg ] is similar to the above proof .we will now sketch how the ratio index can be computed . in the process, we will also get several useful insights into its structural properties . given a single bandit - arm , an initial state for , an exploration budget of , and a state space truncated to depth , we view as a layered dag of depth , which is to say that for any arm - state , , in layer , if , then must be in layer . as explained in section [ sec : compalg ] , this is without loss of generality .we let be the number of nodes in the layered dag .additionally , for any state in , we use to denote the sub - dag of with root ; thus . for the purposes of this section, we require the use of _ randomized single arm policies_. 
whereas a _ deterministic _ single arm policy ( corresponding to arm - state ) will always either explore , exploit , or abandon with probability 1 , a randomized policy , , selects where represents the probability explores in this state , represents the probability exploits in this state , and represents the probability abandons in this state .the vectors and are defined for randomized policies as for deterministic policies , as are the profit and cost of the policy . our approach below will calculate the ratio index for all as well as the entire _ profit curve _ for all where where the is over all randomized single arm policies with initial state and .we show that there exists a deterministic policy that induces the maximum over all ( and in fact our algorithm will find such a policy ) .thus , the value is the ratio index for given , i.e. , .our algorithm relies heavily upon the following theorem on the structure of the profit curve .[ thm : profitcurvesegments ] the profit curve , , for any given state is concave and piecewise linear with at most segments where represents the number of states in . the proof of this theorem involves several steps and is deferred to appendix [ append : calc ] . towards proving the theorem ,we show that as the budget increases along the profit curve for , a _monotonicity property _ holds that for every state , both and are non - decreasing . [lem : mono ] for any , with , there exist optimal solutions and to and respectively such that and for all in .we further characterize the intersection of line segments of the profit curve as `` corner '' solutions and show that at these points and for _ all _ states in .thus , these points of the curve are induced by deterministic policies .thus , the policy which induces the `` corner '' solution at the end of the first segment of the profit curve is a deterministic ratio index policy .the algorithm for computing the profit curve ( and hence the ratio index ) involves recursively calculating the profit curve for a state given the profit curves for all of its successor states .we begin by constructing an _ exploration profit curve _ for , , which denotes the optimal profit for any given cost conditioned on the fact that we are exploring at ( i.e. 
) .we then take the concave envelope over this curve combined with the abandonment policy and the exploitation policy .figure [ fig : curves ] in appendix [ append : calc ] shows a typical example of the relationship between these two curves .superficially , it might seem that the number of segments of the profit curves could increase exponentially as we perform this process up the dag .however , theorem [ thm : profitcurvesegments ] guarantees that the number of segments remains bounded and the entire curve for can be computed in time given the successor curves , where represents the maximum number of immediate descendants for any node .thus , this algorithm is strongly polynomial ( in ) for computing the entire profit curve of a state in the layered dag , and hence , the ratio index .if the underlying state space of the bandit - arm is an unlayered dag , we can make it layered by multiplying the number of states by at most , so the algorithm is still strongly polynomial in .if the underlying state space is not a dag , we can convert it into a layered dag by multiplying the number of states by at most .details of the algorithm and the analysis are in appendix [ append : calc ] .the authors would like to thank rajat bhattacharjee and sudipto guha for helpful discussions , as well as anonymous referees for pointing us to the citations .99 p. bhattacharya , l. georgiadis , and p. tsoucas .extended polymatroids , properties , and optimization . _ integer programming and combinatorial optimization , ipco2 _ , ed .e. bala , g cornnejols , and r. kannan , carnegie - mellon university 298 - 315 , 1992 .r. bellman . a problem in the sequential design of experiments ._ sankhia _ , 16:221 - 229 , 1956 .d. bertsimas and j. nino - mora .conservation laws , extended polymatroids and multi - armed bandit problems . _ mathematics of operations research _ , 21:257 - 306 , 1996 .d. bertsimas and j. nino - mora .restless bandits , linear programming relaxations , and a primal - dual index heuristic . _ operations research _ , 48:80 - 90 , 2000 .a. blum and y. mansour .learning , regret minimization , and equilibria ._ algorithmic game theory _n. nisan , t. roughgarden , e. tardos , and v. vazirani .79 - 102 , 2007 .m. charikar , c. chekuri , and m. pal . sampling bounds for stochastic optimization ._ approx - random _ 257 - 269 , 2005 .b. c. dean , m. x. goemans , and j. vondrak .approximation the stochastic knapsack problem : the benefit of adaptivity . _ focs 04 : proceedings of the 45th annual ieee symposium on foundations of computer science _ , 208 - 217 , 2004 .b. c. dean , m. x. goemans , and j. vondrak .adaptivity and approximation for stochastic packing problems .16th acm - siam symp . on discrete algorithms _ , 395 - 404 , 2005 .e. frostig and g. weiss .four proofs of gittins multiarmed bandit theorem ._ applied probability trust _ ,november 10 , 1999 .gittins and d. m. jones . a dynamic allocation index for the sequential design of experiments ._ progress in statistics : european meeting of statisticians , budapest , 1972 _ , ed .j. ganu , k. sarkadi , and i. vince .241 - 266 , 1974 .j. c. gittins .bandit processes and dynamic allocation indices ._ j royal statistical societe series b _ , 14:148 - 167 , 1979 .j. c. gittins . _ multiarmed bandits allocation indices _ , wiley , new york , 1989 .k. d. glazebrook and r. garbe .almost optimal policies for stochastic systems which almost satisfy conservation laws . _annals of operations research _ , 92:19 - 43 , 1999 . k. d. glazebrook and d. j. 
wilkinson .index - based policies for discounted multi - armed bandits on parallel machines . _the annals of applied probability _, 10:3:877 - 896 , 2000 .a. goel , s. guha , and k. munagala .asking the right questions : model - driven optimization using prbes .acm symp . on principles of database systems _ , 2006 .a. goel and p. indyk .stochastic load balancing and related problems ._ proc . symp . on foundations of computer science _ , 1999 .s. guha and k. munagala .model driven optimization using adaptive probes .acm - siam symp . on discrete algorithms _ , 2007 .s. guha and k. munagala .approximation algorithms for budgeted learning problems ._ stoc _ , 2007 .j. kleinberg , y. rabini , and e. tardos .allocation bandwidth for bursty connections ._ siam j. comput _30(1 ) , 2000 . d. luenberger . _ linear and nonlinear programming _ , reading , ma , 1984 .o. madani , d. lizotte , and r. greiner . active model selection ._ proceedings of the 20th conference on uncertainty in artificial intelligence _ , 357 - 365 , 2004 .p. rusmevichientong and d. williamson .an adaptive algorithm for selecting profitable keywords for search - based advertising services ._ proceedings of the 7th acm conference on electronic commerce , _ 260 - 269 , 2006 .j. schneider and a. moore .active learning in discrete input spaces ._ proceedings of the 34th interface symposium _ , 2002 .d. shmoys and c. swamy .stochastic optimization is ( almost ) as easy as discrete optimization .45th symp . on foundations of computer science _ 228 - 237 , 2004 .p. tsoucas .the region of achievable performance in a model of klimov .research report rc16543 , ibm t.j .watson research center , yorktown heights , new york , 1991 ._ sequential analysis _, j. wiley & sons , new york , 212p , 1947 .p. whittle .restless bandits : activity allocation in a changing world ._ a celebration of applied probability .j. applied probability _, 25a:287 - 298 , 1988 .we will now provide the missing proofs from section [ sec : ratio ] : theorem [ thm : noindexbudget ] consider three arms with priors : , , .assume the horizon is just 1 .first consider the scenario where arms and are the only ones present .observe that .so if we play arm once and the trial is a failure , arm will still be more profitable than arm .hence , playing arm gives an expected profit of since arm will be chosen as the winner regardless of the outcome .also , so if we play arm once and the trial is a success , arm becomes more profitable than arm .hence , exploring first gives an expected profit of .therefore , if there exists an index for the budgeted learning problem , the index of arm must be higher than that of arm .now consider the scenario where arms are all present . playing arm first gives the same expected profit as before : .if we play arm , and the trial is a failure , arm will be chosen as the winner , giving an expected profit of .hence , arm must have a higher index than arm , which is a contradiction .lemma [ lem : submodular ] without loss of generality , assume that the state space is a tree .define as follows : makes the same choices as for all arm - states which are either explored or exploited by .this ensures that condition 1 in the lemma is satisfied .further , the total cost of on these states is merely the total cost of . 
for an arm - state that is abandoned by , does the following : 1 .if any ancestor of in gets abandoned by , then abandons .if any ancestor of gets exploited by , then exploits .else , makes the same choice as on and all the descendants of . in order to prove condition 2 in the lemma, we have to bound ( by charging to ) the cost incurred ( by ) in exploring / exploiting those arm - states ( and their descendants ) that are abandoned by . in case ( a ) above , is abandoned and no extra cost is incurred . in case( b ) , let be the ancestor arm - state of that was exploited by .the cost of exploiting is the cost of exploiting times the probability of reaching conditioned on reaching .since the total probability that abandons a descendant of conditioned on reaching is at most 1 , the incremental cost for for all descendants of can be no more than and thus there can be no overcharging . in case ( c ) , the charging is quite straight - forward since just mimics . in order to prove condition 3 in the lemma , consider any arm - state that is exploited by .if that state is exploited by , then it is also exploited by .if that state is explored by , then eventually , must either abandon or exploit a descendant along every path in the state space starting from .descendants that get abandoned by will get exploited by by property ( b ) . by the martingale property, the total profit obtained by from all the descendant states of is the same as the profit obtained by from .if the arm - state is abandoned or not reached by , then mimics according to property . hence , gets at least as much profit as .corollary [ cor : incremental ] let denote the optimum policy . define a new policy that we call the restriction of to arm , denoted , as follows : follows when it explores / exploits arm , and simulates ( without really playing it ) on other states .a simple coupling argument shows that the total expected cost of all these single arm policies is equal to the expected cost of , and the total expected profit of all these single arm policies is equal to the expected profit of .similarly , let denote the greedy algorithm over the first stages , and let denote the restriction of to arm .the following is now immediate : and .hence , there exists some such that .applying lemma [ lem : submodular ] with serving the role of and serving the role of proves the corollary .corollary [ cor : scale ] if we take the integral in the proof of the above theorem with going from to instead of from to , we get an approximation factor of . hence , the optimum reward from the budgeted learning problem with budget is at least times the optimum reward with budget .we will now present the full details of the proof of theorem [ thm : profitcurvesegments ] as well as the full algorithm to compute the profit curves and ratio indices for all states up to depth for a given bandit - arm .below , we prove a series of claims that together imply theorem [ thm : profitcurvesegments ] .we begin by considering two methods of calculating that will be used in our discussions .the first is a recursive equation that can be used to calculate for a given state and budget provided that we have the entire profit curves of all successor states .this equation is is the set of immediate descendants of .the decision variables and represent the probability of exploiting and exploring in respectively ( as in the definition of a randomized policy ) .the vector represents the budgets we would allocate to each of the immediate descendants of should we visit them . 
recall that is the probability of transitioning to state given we are experimenting in state .we assume if .this is a linear program where each decision variable , , selects some fraction of the segment of the profit curve of . represents the collection of all such variables .as the profit curve is concave( by claim [ app : concaveprofitcurve ] ) , .thus , there exists an optimal solution which only assigns if for any .further , through inspection we can see that the optimal solution for any is to select the segments in order of decreasing slope ( where the slope of the segment of is ) until all budget is exhausted . by ordering segments thus, we can easily construct .the algorithm below orders the segments of the elements of and builds , storing the costs ( ) and budgets allocated to all descendants ( ) for each `` corner '' solution of .+ 1 ) for ll + /*compute profit , cost , and slope for each line segment of */ + set + set + set + 2 ) sort the elements of the form from largest to smallest + let index the node for the largest element in the list + let indicate which segment of this is + /* slope of the line segment in the ordered list.*/ + 3 ) set /*fixed cost*/ , /*initial profit*/ , + /*initial budgets for descendants*/ + 4 ) for = to + /*add next segment to the curve*/ + /*compute current total profit ( and cost ( )*/ + let + let + /*compute budgets allocated to descendants ( ) at the end*/ + /*of each segment ( needed to represent the policy)*/ + let + let + 5 ) set , + 6 ) for = to + /*find changes in slope on the exploration profit curve*/ + if = + + + 7 ) for = to + /*merge together segments of the exploration profit*/ + /*curve with the same slope*/ + , + 8) algorithm computeexplorationprofitcurve represents the exploration profit curve for state by returning the number of line segments of the curve ( not including the zero slope segment from cost of 0 to ) , , as well as the cost ( ) , profit ( ) , and vector of budgets to allocate to all immediate descendants ( ) corresponding to the endpoint of each segment .given that the profit curves of all descendants are concave , the sorting of line segments in step 2 ) equates to simply interleaving the segments of the different states and can be performed in time using a simple min - heap , where is the maximum number of immediate descendants for a node . after sorting these segments , steps 3 ) and 4 )then determine the cost and profit associated with adding each segment to the exploration profit curve .finally , as there may and likely will be duplicates in the sorted list of slopes , steps 5 ) through 7 ) merge all segments of the curve with the same slope . given the exploration profit curve , it is much easier to calculate the profit curve for a state . from lemma [ app : concaveprofitcurve ] , we know that .further , this must be the only `` corner '' solution corresponding to exploiting at ( ) .all other corner solutions must thus correspond to exploring at ( ) and thus must correspond to points on . as we know is concave , we can simply take the concave envelope of the points , , and .the algorithm below does this .+ 1 ) find where + 2a ) if = /*the ratio index policy exploits immediately*/ + set + 2b ) else = /*the ratio index policy explores*/ + set + set . +while = + /*greater marginal return to explore than exploit*/ + + + + + set .algorithm computeprofitcurve runs in time .step 1 ) computes the value for each segment of , which represents to slope of the line segment from the origin to the point . 
in the event that , the ratio index policy is to exploit immediately and we are done determining the profit curve for ( step 2a ) .otherwise , once we have found the ratio index policy , we continue to look at higher budget exploration policies to determine subsequent segments of the profit curve ( step 2b ) .the quantity represents the budgets allocated to each of the immediate descendants at the end of the segment of the profit curve .these values are only required to represent the actual policy , not calculate the ratio index or profit curve of any state .the quantity is the marginal ratio of transitioning from to exploitation at .once the slopes of the segments of are no larger than this , it is optimal to transition to exploitation at .
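The workhorse step in the two appendix routines above is merging the descendants' concave, piecewise-linear profit curves: because every curve is concave, the optimal use of an exploration budget simply takes segments in order of decreasing slope. Below is a minimal sketch of that interleaving with our own segment representation (each curve given as cost/profit increments already sorted by decreasing slope, with any transition-probability weighting assumed to be folded into the increments beforehand); it is an illustration of the idea, not the paper's pseudocode.

```python
import heapq


def merge_profit_curves(curves):
    """Interleave segments of concave piecewise-linear profit curves by
    decreasing slope.  Each curve is a list of (cost, profit) increments in
    decreasing-slope order; returns cumulative (cost, profit) corner points."""
    heap = []
    for ci, curve in enumerate(curves):
        if curve:
            cost, profit = curve[0]
            heapq.heappush(heap, (-profit / cost, ci, 0, cost, profit))
    corners, tot_cost, tot_profit = [(0.0, 0.0)], 0.0, 0.0
    while heap:
        _, ci, si, cost, profit = heapq.heappop(heap)
        tot_cost += cost
        tot_profit += profit
        corners.append((tot_cost, tot_profit))
        if si + 1 < len(curves[ci]):
            ncost, nprofit = curves[ci][si + 1]
            heapq.heappush(heap, (-nprofit / ncost, ci, si + 1, ncost, nprofit))
    return corners


# Two illustrative descendant curves (increment slopes 0.9, 0.3 and 0.5, 0.2):
print(merge_profit_curves([[(1.0, 0.9), (2.0, 0.6)], [(1.0, 0.5), (1.0, 0.2)]]))
```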
|
in the budgeted learning problem , we are allowed to experiment on a set of alternatives ( given a fixed experimentation budget ) with the goal of picking a single alternative with the largest possible expected payoff . approximation algorithms for this problem were developed by guha and munagala by rounding a linear program that couples the various alternatives together . in this paper we present an index for this problem , which we call the ratio index , which also guarantees a constant factor approximation . index - based policies have the advantage that a single number ( i.e. the index ) can be computed for each alternative irrespective of all other alternatives , and the alternative with the highest index is experimented upon . this is analogous to the famous gittins index for the discounted multi - armed bandit problem . the ratio index has several interesting structural properties . first , we show that it can be computed in strongly polynomial time . second , we show that with the appropriate discount factor , the gittins index and our ratio index are constant factor approximations of each other , and hence the gittins index also gives a constant factor approximation to the budgeted learning problem . finally , we show that the ratio index can be used to create an index - based policy that achieves an -approximation for the finite horizon version of the multi - armed bandit problem . moreover , the policy does not require any knowledge of the horizon ( whereas we compare its performance against an optimal strategy that is aware of the horizon ) . this yields the following surprising result : there is an index - based policy that achieves an -approximation for the multi - armed bandit problem , oblivious to the underlying discount factor .
|
one of the main motives behind the recent development of the uppsala quantum chemistry package , uquantchem) , has been to complement the broad selection of quantum chemistry codes available with an `` easy to use '' , open source , development friendly and yet versatile computational framework .the other motive , which has perhaps been the most important driving force , is to provide a pedagogical platform for students and scientists active in the computational chemistry community that are harboring intermediate to basic programming skills , but nevertheless are interested in learning how to implement new computational tools in quantum chemistry .the didactical design of the code has been achieved by limiting the level of optimization , not to obscure the connection between the different quantum chemical methods implemented in the package , and the actual text - book algorithms , upon which the construction of the code rests .the user - friendliness of the uquantchem package has been ascertained by a large set of default values for the computational parameters , in order for the inexperienced user not to get overwhelmed by technical details .furthermore , thanks to the limited number of pre - installed computational libraries required prior to the installation of the uquantchem code ( only the linear algebra package ( lapack) and the basic linear algebra subprograms ( blas) are required ) the package is also very simple to install .the uquatchem code has been written completely in fortran90 and comes in three versions ; a serial version , an openmp version and a mpi version . in the case of the serial and the openmp version ,more or less generic make files are provided for the ifortran and gfortran compilers .the mpi version of the uquantchem code comes with pre - constructed make files for five of the largest computer clusters in sweden , the lindgren cluster , the matter cluster , the triolith cluster , the abisko cluster and the kalkyl cluster .these makefiles can be used as templates to create makefiles for a broad selection of clusters .the wide range of capabilities of the uquantchem package is perhaps best illustrated by the different levels of chemical theory in which the electron correlation can be treated by uquantchem , ranging from hartree - fock and mller plesset second order perturbation theory ( mp2) to configuration interaction , density functional theory ( dft) and diffusion quantum monte carlo ( dqmc) .the uquantchem package provides a platform on which further development can easily be made , since the implementation of the different electronic structure techniques in uquantchem has , to a large extent , been made almost in one to one correspondence with the text books of _ szabo and ostlund _ and _ cook _ , i.e the code has been transparently written and well commented in reference to these texts .the developer friendliness is further enhanced by the explicit calculation of all relevant data structures such as kinetic energy integrals , potential energy integrals and their gradients with respect to electron and nuclear coordinates .furthermore , since the uquantchem is constructed from a very limited number of subroutines and modules , an overview of the data structure and design of the code is easilly achieved , simplifying any future modification of the program .the uquantchem code is a versatile computational package with a number of features useful to any computational chemist .the main ingredient in any quantum chemical calculation is the level of theory in which 
the correlation of the electrons are treated , here the uquantchem package is no exception .the least computational demanding level of theory explored by the uquantchem code is the hartree - fock level of theory , where the electron correlation is completely ignored . in the context of hartree - fock total energy calculations it is also possible to calculate analytical inter - atomic forces , enabling the user to either relax the molecular structure with respect to the hartree - fock total energy , or perform molecular dynamics ( md ) calculations . herethe user can either choose to run a born - oppenheimer molecular dynamics calculation ( bomd ) , or an extended lagrangian molecular dynamics calculation ( xl - bomd ) , where the density matrix of the next time - step is propagated from the previous time step by means of an auxiliary recursion relation .the advantage of the xl - bomd methodology over the bomd approach is that in the case of xl - bomd , there is no need of a thermostat and an accompanying rescaling of the nuclear velocities in order to suppress any energy drift .o molecule , where the density matrix from the previous time step , is used as the initial guess to the scf optimization with the energy converged to , blue dotted line .the total energy of a extended lagrangian born - oppenheimer molecular dynamics ( xl - bomd) calculation of a h molecule with 5 scf iterations per time step , full black line . ] in figure [ fig : xlbomd ] the results obtained with the uquantchem code for a h molecule using the xl - bomd and bomd schemes without thermostat are shown . herethe superiority of the xl - bomd approach over the bomd scheme is manifested by the lack of energy drift in the former s total energy . on the intermediate level of electron correlation theory implemented in the package one finds mp2 , cisd and dft .when using the dft level of electron correlation it is also possible to calculate analytical interatomic forces , and therefore also perform structural relaxation and molecular dynamics calculations . herethe dft forces are analytical to the extent that the gradients with respect to nuclear coordinates of the exchange correlation energy are calculated as analytical gradients of the quadrature expression used to calculate the exchange correlation energy . the highest level of electron correlation theory possible to utilize within the uquantchem package is dqmc .here it is possible , within the fixed node approximation , to calculate total ground state energies of medium sized molecules taking into account of the correlation energy . in figure[ fig : dqmc ] the estimated charge density of a h molecule is shown , here calculated with dqmc as implemented in uquantchem .time steps with a time step of au , and a cusp corrected cc - pvtz basis .the resulting ground state energy was a.u .the rendering of the charge density map was obtained by using the uquantchem output file ` chargedens.dat ` as input to the matlab script ` chargdens2dim.m ` , which is a script provided with the uquantchem package . ]apart from the different methods involved in dealing with electron correlation implemented in the uquantchem package , a number of other capabilities can also be found within the package . 
amongst these capabilitiesis the ability to provide graphical information about the highest occupied and the lowest unoccupied orbitals ( homo and lumo ) , deal with charged systems as well as calculating mulliken charges , plot the hartree - fock and kohn - sham orbitals as well as the corresponding charge density , calculate the velocity auto - correlation function and relax the molecular structure with respect to interatomic forces by utilizing a conjugate gradient scheme .however , it should be stressed that the code can only deal with finite systems , thus excluding any calculation of periodic systems .the code utilizes a localized atomic basis set , where each basis function , , is constructed from a contraction of primitive cubic gaussian orbitals here , are the contraction coefficients , are the atomic coordinates at which the basis function is centered , are the primitive gaussian exponents and and integer numbers determining the angular momentum , , of the corresponding basis function . in what followswe will suppress the spin part of the basis functions and assume that the spin degrees of freedom are treated implicitly , i.e have been integrated out .thanks to the use of primitive gaussians almost all the matrices involved in the different implementations , such as the overlap matrix , , the kinetic energy matrix , and the nuclear attraction matrix , defined by have been calculated analytically . here, denotes the atomic numbers of the nuclei .the implementation of the analytic evaluation of the above integrals follows almost exactly the outline given in d. b. cook s book _ handbook of computational chemistry_. in order to enhance the performance of the code , the electron - electron integrals , have been calculated by rys quadrature , even though they can be calculated analytically as is described in cook s book .the exchange correlation energy , and the corresponding exchange correlation matrix elements , \rho(\mathbf{r } ) , \qquad \qquad \qquad \\v^{xc}_{ij } = \int d^{3}\mathbf{r}\phi_{i}(\mathbf{r})\frac{\delta e_{xc}}{\delta \rho}\phi_{j}(\mathbf{r})= \nonumber \qquad \qquad\\ = \int d^{3}\mathbf{r}\phi_{i}(\mathbf{r})\frac{d ( \epsilon_{xc}\rho)}{d \rho}\phi_{j}(\mathbf{r } ) + \tilde{v}^{xc}_{ij } , \qquad \qquad \\ \tilde{v}^{xc}_{ij } = \int \frac{d^{3}\mathbf{r}}{|\nabla\rho|}\big ( \nabla \phi_{i}\cdot\nabla\rho \phi_{j } + \phi_{i}\nabla\rho\cdot \nabla \phi_{j}\big ) \frac{d(\epsilon_{xc}\rho)}{d |\nabla\rho|}\end{aligned}\ ] ] defined through the exchange correlation energy density , ] into the radial integration interval ] , + \rho \equiv \frac{d ( \epsilon_{xc}\rho)}{d ( \nabla\rho)_{x}} ] , + \rho \equiv \frac{d ( \epsilon_{xc}\rho)}{d ( \nabla\rho)_{z}}$ ] , + which were used in the comp .-version of this article m.f .guest , i. j. bush , h.j.j .van dam , p. sherwood , j.m.h .thomas , j.h .van lenthe , r.w.a havenith , j. kendrick , `` the gamess - uk electronic structure package : algorithms , developments and applications '' , molecular physics , * 103 * , 719 - 747 ( 2005 ) .
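As a small illustration of the analytic matrix-element evaluation described above, the overlap of two unnormalized s-type primitive Gaussians exp(-alpha|r-A|^2) and exp(-beta|r-B|^2) has the closed form S = (pi/(alpha+beta))^{3/2} exp(-alpha*beta*|A-B|^2/(alpha+beta)), and a contracted overlap is the corresponding double sum over primitives. The short sketch below evaluates this textbook formula; it is a standalone Python illustration covering only the s-type (l = 0) case, not an excerpt of the UQUANTCHEM Fortran routines.

```python
import math


def s_primitive_overlap(alpha, A, beta, B):
    """Overlap of two unnormalized s-type primitives exp(-alpha|r-A|^2) and exp(-beta|r-B|^2)."""
    p = alpha + beta
    ab2 = sum((a - b) ** 2 for a, b in zip(A, B))
    return (math.pi / p) ** 1.5 * math.exp(-alpha * beta * ab2 / p)


def contracted_s_overlap(prims1, prims2, A, B):
    """Overlap of two contracted s functions; prims = [(contraction coeff., exponent), ...]."""
    return sum(c1 * c2 * s_primitive_overlap(a1, A, a2, B)
               for c1, a1 in prims1 for c2, a2 in prims2)


# A normalized primitive on a single centre overlaps with itself to 1:
alpha = 0.5
norm = (2 * alpha / math.pi) ** 0.75
print(contracted_s_overlap([(norm, alpha)], [(norm, alpha)], (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
```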
|
in this paper we present the uppsala quantum chemistry package ( uquantchem ) , a new and versatile computational platform with capabilities ranging from simple hartree - fock calculations to state - of - the - art first - principles extended lagrangian born - oppenheimer molecular dynamics ( xl - bomd ) and diffusion quantum monte carlo ( dmc ) . the uquantchem package is distributed under the general public license and can be downloaded directly from the code web - site . together with a presentation of the different capabilities of the uquantchem code and a more technical discussion of how these capabilities have been implemented , we also describe the user - friendly aspects of the package , which rest on its large number of default settings . furthermore , since the code has been parallelized within the framework of the message passing interface ( mpi ) , timings of some benchmark calculations are reported to illustrate how the code scales with the number of computational nodes for different levels of chemical theory .
|
motion coordination of nonlinear multi - agent systems has received an increased interest in the control community due to the potential applications involving groups of robotic systems and autonomous vehicles in general .multi - agent systems control can be formulated as synchronization or consensus problems , where the goal is to drive the networked systems ( or agents ) to a common state using local information exchange .other related problems include flocking , swarming , and formation control of mechanical systems .built around the solutions of the consensus problem of linear multi - agent systems , several coordinated control schemes have been recently developed for second - order nonlinear dynamics , which can describe various mechanical systems , with a particular interest to leaderless synchronization problems , cooperative tracking with full access to the reference trajectory , leader - follower with single leader or multiple leaders , to name only a few .algebraic graph theory , matrix theory , and lyapunov direct method have been shown useful tools to address various problems related to the systems dynamics , such as uncertainties , and the interconnection topology between the team members .in addition , various recent papers address the synchronization problem of nonlinear systems by taking into account delays in the information transfer between agents , which is generally performed using communication channels . in and ,it has been shown that output synchronization of nonlinear passive systems is robust to constant communication delays if the interconnection graph is directed , balanced and strongly connected . a similar property was shown in under unbalanced directed graphs using the contraction theorem . in , a delay - robust control scheme is proposed for relative - degree two nonlinear systems with nonlinear interconnections . with the same assumption on the delays ,adaptive synchronization schemes have been proposed in for networked robotic systems under a directed graph .in addition to constant delays , a virtual systems approach has been suggested in to account for input saturations and to remove the requirements of velocity measurements .control schemes that consider time - varying communication delays have also been proposed for the attitude synchronization of rigid body systems , formation control of unmanned aerial vehicles , and consensus of networked lagrangian systems , yet in the case of undirected interconnection graphs .more recently , a small - gain framework is proposed in for the synchronization of a class of second - order nonlinear systems in the presence of unknown irregular time - varying communication delays under general directed interconnection topologies .one important problem when dealing with second - order nonlinear systems in the presence of communication delays is to achieve position synchronization , _i.e. 
, _ all positions converge to a common value , with some non - zero final velocity .in fact , in most of the above mentioned synchronization laws with communication delays , a static leader or no leader are assumed and position synchronization is achieved with zero final velocity .the only cases where the final velocities match a non - zero value assume a full access to a reference trajectory or to a leader s states ( position and velocity ) .by full access , it is meant that this information is available to all agents without delays .the main challenge in this case resides in the fact that imposing a non - zero final velocity ultimately requires some information on the delays to achieve position synchronization .in fact , a possible solution to this problem might be to explicitly incorporate the delays in the control algorithms as suggested in for linear second - order multi - agents .this , however , comes with the assumptions of full access to the desired velocity and the communication delays are exactly known .another issue that can be observed in all the aforementioned results is the assumption that information is transmitted continuously between agents .in fact , it is not clear if these results still apply in situations where agents are allowed to communicate with their neighbors only during some disconnected intervals ( or at some instants ) of time .this can be induced by environmental constraints , such as communication obstacles , temporary sensor / communication - link failure , or imposed to the communication process to save energy / communication costs in mobile agents . for linear first - order multi - agent systems , the authors in have proposed a consensus algorithm based on the output of a zero - order - hold system , which is updated at instants when the information is received and admits as input the relative positions of interacting agents . in the presence of sufficiently small constant communication delays and bounded packet dropout , the proposed discontinuous algorithm in achieves consensus provided that self - delays are implemented and the non - zero update period of the zero - order - hold system is small .a similar approach has been applied for double integrators in , where asynchronous and synchronous updates of the zero - order - hold systems have been addressed , respectively , without communication delays . here, synchrony means that all agents receive information at the same instants . in ,a switching algorithm has been proposed for second - order multi - agents in cases where communication between agents is lost during small intervals of time , yet without communication delays .the latter result has been extended to multi - agent systems with general linear dynamics and globally lipschitz nonlinear dynamics , where it has been shown that consensus can be achieved under some conditions on the communication rates and interaction topology . in this paper , we consider the synchronization problem of a class of second - order nonlinear systems with intermittent communication in the presence of communication delays and possible packet loss . 
here, it is required that all systems achieve position synchronization with some non - zero desired velocity available to only some systems in the group acting as leaders .based on the small - gain approach , we propose a distributed control algorithm that allows agents to communicate with their neighbors only at some irregular discrete time - intervals and achieve our control objective .a discrete - time consensus algorithm is also used to handle the partial access to the desired velocity . in the case where no desired velocity is assigned to the team ,the proposed synchronization algorithm achieves position synchronization with some velocity agreed upon by all agents . in both cases ,it is proved that , under some sufficient conditions , synchronization is achieved in the presence of unknown irregular communication delays and packet loss provided that the interconnection topology between agents is described by a directed graph that contains a spanning tree .the derived conditions impose a maximum allowable interval of time during which a particular agent does not receive information from some or all of its neighbors .this interval , however , can be specified arbitrarily with a choice of the control gains . to illustrate the applicability of the proposed approach , we derive a solution to the above problems in the case of networked lagrangian systems , and simulation results that show the effectiveness of the proposed approach are given .let be a directed graph , with a set of nodes ( or vertices ) , and a set of ordered edges ( pairs of nodes ) .an edge is represented by a directed link ( arc ) leaving node and directed toward node .a directed graph is said to contain a spanning tree if there exists at least one node that has a `` directed path '' to all the other nodes in the graph ; by a directed path ( of length ) from to is meant a sequence of edges in a directed graph of the form , with , where for the nodes are distinct .node is called a root of if it is the root of a directed spanning tree of ; in this case , is said to be rooted at . given two graphs , with the same vertex set , their composition is the graph with the same vertex set where if and only if and for some . composition of any finite number of graphs is defined by induction . in the case where and contain self - links at all nodes ,the edges of and are also edges of . in this case, the definition above also implies that contains a path from to if and only if contains a path from to and contains a path from to . a finite sequence of directed graphs , , , with the same vertex setis jointly rooted if the composition is rooted. an infinite sequence of graphs , , is said to be repeatedly jointly rooted if there exists such that for any the finite sequence , , is jointly rooted .( see for more details on graph composition ) . a weighted directed graph consists of the triplet , where and are , respectively, the sets of nodes and edges defined as above , and is the weighted adjacency matrix defined such that , if , and if . note that thus defined graph does not contain self - links at any node and will have the same properties as the unweighted graph with the same sets of nodes and edges .the laplacian matrix \in\mathbb{r}^{n\times n} ] .[def_iss ] a system of the form is said to be input - to - state stable ( iss ) if there exist , , and can be found in . also , , , where is zero function , for all . 
] and , , such that the following inequalities hold along the trajectories of the system for any lebesgue measurable uniformly essentially bounded inputs , : * _ uniform boundedness : _ , we have * _ asymptotic gain : _ in the above definition , denotes the euclidean norm of a vector and , , are called the iss gains .it should be pointed out that for a system of the form , the iss implies the input - to - output stability ( ios ) , which means that there exist and , , , such that the inequality holds for all and . in this case , the function , and , is called the ios gain from the input to the output . in the subsequent analysis , we will mostly deal with the case where the ios gains are linear functions of the form , where ; in this case , we will simply say that the system has linear ios gains .the convergence analysis in this paper is based on the following small - gain theorem .[ theorem001a ] consider a system of the form .suppose the system is ios with linear ios gains .suppose also that each input , , is a lebesgue measurable function satisfying : , for , and } \left| y_i(s)\right| + |\delta_j(t)| , \label{commconstraints0010}\ ] ] for almost all , where , all are lebesgue measurable uniformly bounded nonnegative functions of time , and is an uniformly essentially bounded signal .let , where , , , .if , where is the spectral radius of the matrix , then the trajectories of the system ( [ affine001 ] ) are well defined for all and such that all the outputs , , and all the inputs , , are uniformly bounded .if , in addition , at , , then , as for and . theorem [ theorem001a ] is a version of ( * ? ? ? * theorem 1 ) and is also a special case of the result given in ; in particular , its proof follows the same lines as in the proof in ( * ? ? ?* theorem 1 ) , and hence , is omitted .consider not necessarily identical second - order nonlinear systems ( or agents ) governed by the following dynamics where and are the position - like and velocity - like states , respectively , are the inputs , and . the functions are assumed to be locally lipschitz with respect to their arguments .note that equations can be used to describe the full or part of the dynamics of various physical systems .the systems are interconnected in the sense that some information can be transmitted between agents using communication channels .this interconnection is represented by a directed graph , where is the set of all agents , and an edge indicates that the -th agent _ can _ receive information from the -th agent ; in this case , we say that and are neighbors ( even though the link between them is directed ) . while the interconnection graph is fixed , the information exchange between agents is not continuous but discrete in time and is subject to communication constraints as described in the next subsection . 
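since the synchronization conditions derived below hinge on the interconnection graph containing a spanning tree , it is worth noting that this property is straightforward to test numerically . the sketch below is illustrative code ( not tied to any particular library ) , using the convention that adj[i] lists the out - neighbors of node i ; it checks whether some node reaches every other node along directed paths .

    from collections import deque

    def reachable_from(root, adj):
        # nodes reachable from `root` along directed edges
        seen = {root}
        queue = deque([root])
        while queue:
            i = queue.popleft()
            for j in adj[i]:
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
        return seen

    def contains_spanning_tree(adj):
        # true if the directed graph is rooted at some node, i.e. at least
        # one node has a directed path to all the other nodes
        n = len(adj)
        return any(len(reachable_from(r, adj)) == n for r in range(n))

for example , contains_spanning_tree([[1], [2], [0]]) is true for a directed 3 - cycle , which is rooted at every node , whereas a graph whose node set splits into two groups with no directed links between them fails the test .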
in this paper, we consider the case where the communication between agents is intermittent and is subject to time - varying communication delays , information losses , and blackout intervals .specifically , it is assumed that there exists a strictly increasing and unbounded sequence of time instants , , where is a fixed sampling period common for all agents , such that each agent is allowed to send its information to all or some of its neighbors at instants , .in addition , for each pair , suppose that there exist a sequence of communication delays that take values in such that the information sent by agent at instant can be available to agent starting from the instant .in particular , it is possible that for some , which corresponds to a situation where agent has not sent information at instant to neighbor at all , or the corresponding information was never received possibly due to packet loss in the communication channel. the following assumption is imposed on the communication process between neighboring agents .[ assumptiondelay01 ] for each , there exist numbers , , and an infinite strictly increasing sequence satisfying * , and , , * for each .assumption [ assumptiondelay01 ] essentially means that , for each pair , and per any consecutive sampling instants , there exists at least one sampling instant at which agent has sent information to agent , and this information has been successfully delivered with delay less than or equal to .note that is not an imposed upper bound of the delays ; in particular , the possible case where for some is not excluded .assumption [ assumptiondelay01 ] also implies that , for each pair , the maximal interval between two consecutive instants when agent receives information from agent is less than or equal to it is worth pointing out that , , and are considered common to all for simplicity , and can be seen as the maximum of the corresponding parameters defined for each pair .also , the above assumption does not require that all agents broadcast their information to some or all of their prescribed neighbors at the same instants of time .consider multi - agents interconnected according to and the communication between agents satisfies assumption [ assumptiondelay01 ] .suppose also that a constant desired velocity is available for a subset of agents , called leaders .the rest of the systems ( belonging to the complementary subset ) are referred to as followers .our goal is to design synchronization schemes for the nonlinear multi - agent system such that the following objectives are attained .[ objective1 ] in the case , it is required that , , as for all .[ objective2 ] in the case , it is required that and as for all and for some final velocity . to achievethe above objectives , we adopt an approach that takes its roots from the control of robotic systems and has been recently used to address various synchronization problems of mechanical systems ( see , for instance , ) . to explain this approach ,let be a reference velocity for the -th system , for .equations can be rewritten in the following form , , where is the velocity tracking error .equations describe the dynamics of agents in a multi - agent system with being the reference input , while is a perturbation term with dynamics described by .the synchronization problem can now be solved using a two stages approach described as follows . in the first stage ,the input in is designed to guarantee the convergence of the error signals to zero . 
in the second stage , appropriate algorithms for designed using the position - like states of the systems such that the trajectories of the dynamic systems satisfy objectives [ objective1 ] or [ objective2 ] . as mentioned in the introduction ,the problem of designing such algorithms for in the presence of the communication constraints described in section [ section_comm ] is yet unsolved , even in the case where no perturbation term exists , _i.e. _ , . in the presence of nonzero signals , the problem becomes more complicated since there typically exists some coupling between the signals and as they both depend on the states of - .note that , the first step , the design of the control law in that guarantees desirable properties of the error signal , for a given and , can be achieved using various existing approaches to tracking control design for nonlinear systems .this step is not addressed in this work ; instead , the following assumption is made .[ designcond ] for each system in and a given reference velocity signals and , there exists a static or dynamic tracking control law such that the following hold : * the error signal is uniformly bounded ; * if and are globally uniformly bounded , then as . under assumption [ designcond ] , the synchronization problem of the nonlinear multi - agent system is reduced to the design of the reference velocities , , such that is well defined ( available for feedback ) and the trajectories of satisfy objectives [ objective1 ] or [ objective2 ] .in this section , we present a method for the design of the reference velocities , , such that objectives [ objective1 ] and [ objective2 ] are achieved using intermittent communication between the agents in the presence of time - varying communication delays and information losses . for this, we let , for each and each , denote the information that can be transmitted from agent to agent at . specifically , ] is the most recent information of agent that is already delivered to agent at , _ i.e. , _ note that the number can be determined by a simple comparison of the received time stamps .now , for each , the reference velocity is designed in the following form , where is a sufficiently smooth estimate of the desired velocity available for the -th agent , and is a synchronization term designed with the purpose of position synchronization between agents .the design of and are addressed below in detail . in this subsection ,we present a method for the design of in .as explained above , each leader has direct access to the desired velocity .the followers , on the other hand , do not have direct access to the desired velocity ; instead , they estimate it through the following discrete - time consensus algorithm that is updated at instants , for , where in the above algorithm - , , with and denotes the number of elements in . recall from that the vector is the most recent desired velocity estimate ( obtained by agent ) that is already available to agent at instant . 
as such , the set denotes the set of the neighbors of the -th follower such that the most recent data from these neighbors has been received during the interval ] be an arbitrary weighted adjacency matrix ( defined in section [ section_graph ] ) assigned to the graph ; the resulting weighted directed graph is denoted by .let denote the set of neighbors of agent in .also , let and denote the subset of all agents that have at least one incoming link in .consider the following design of the synchronization term in for , where , are strictly positive scalar gains , is defined in - , and the vector , for all , is an estimate of the current position of the -th agent defined using the most recent information available to the -th agent at as with being defined in . note that , due to the intermittent and delayed nature of the communication process , we have considered a dynamic design for the synchronization terms .this guarantees that , which is difficult to realize using static synchronization terms in view of the irregularities of the information received by each agent .in addition , the vectors are designed as in - such that the closed loop system with and - for each agent is ios with arbitrary ios gains . as will be made clear in the next subsection , by employing the small gain theorem , theorem [ theorem001a ], we will show that our control objectives are achieved under some conditions that can be always satisfied .the following theorem describes the conditions under which objectives [ objective1 ] and [ objective2 ] are achieved .[ theorem2 ] consider the network of systems , where the interconnection topology is described by a directed graph and the communication process between the systems satisfies assumption [ assumptiondelay01 ] .suppose that each system is controlled by a control law satisfying assumption [ designcond ] , where the corresponding reference velocity is generated by , with - and - .let the control gains be selected such that where is defined by , and where , are the roots of . then , for arbitrary initial conditions , we have * objective [ objective1 ] is achieved if contains a spanning tree with a root .* objective [ objective2 ] is achieved if contains a spanning tree .first , it should be noted from , - , and - that , and can be obtained from the solution of the dynamic systems and , and is available for feedback .therefore , applying the control law that satisfies assumption [ designcond ] in guarantees that the velocity tracking error is uniformly bounded . for each , let using the relation with , one can write for , with we can verify that each system - with output is ios with respect to the input vectors and .this follows by noticing that the following estimates }\left| \varepsilon_i(\varsigma ) \right| + \sup\limits_{\varsigma\in[t_0 , t]}\left| \phi_i(\varsigma ) \right|,\ ] ] }\left| \tilde{\psi}_{i } ( \varsigma ) \right|\nonumber\\ + & ~\frac{1}{\mu_i } \sup\limits_{\varsigma\in[t_0 , t]}\left| \varepsilon_{i } ( \varsigma ) \right| , \end{aligned}\ ] ] hold for all , where is defined in theorem [ theorem2 ] .more precisely , inequality indicates that system is iss with respect to the inputs and , with unity iss gains .also , implies that - is iss with respect to the inputs and , with iss gains both equal to .since the cascade connection between two iss systems is iss , we conclude that - is iss . 
as a result ,the system - with output is ios with respect to the inputs and with linear ios gains equal to and , respectively .therefore , all the systems - , for , can be regarded as a system with outputs , given by , , and inputs that can be ordered as : , , for , where denotes the number of elements in . from the above analysis, we can conclude that such system is ios , with ios gain matrix given by moreover , using - with - and the relation for , one can write } \left| \eta_j ( \varsigma)\right|+ |\delta_{2i}(t)| , \end{aligned}\ ] ] where we used assumption [ assumptiondelay01 ] and to conclude that , the set is used here due to , and } \left| \bar{v}_{d_j}(\varsigma ) - \hat{v}_{d_j}(k_{ij}(t))\right|\nonumber\\ & + \left| e_i(t ) \right| + \sum_{j \in\mathcal{n}_i } \frac{a_{ij}\cdot h^*}{\kappa_i } \sup\limits_{\varsigma\in[k_{ij}(t)t , t ] } \left| e_j(\varsigma ) \right| .\end{aligned}\ ] ] therefore , one can conclude that the input vectors , , satisfy the conditions of theorem [ theorem001a ] , where the elements of the interconnection matrix are obtained as and where and , for , satisfy - .note that in view of assumption [ designcond ] and the result of proposition [ prop21 ] , we have , are uniformly bounded .therefore , the elements of the closed - loop gain matrix in theorem [ theorem001a ] can be written as taking into account the fact that and the elements of are nonnegative , one can conclude using gersgorin disk theorem that if , for . noting that and , the condition is satisfied by .therefore , all the conditions of theorem [ theorem001a ] are satisfied and one can conclude that , and , for , are uniformly bounded .in addition , the iss property of - guarantees that , and , for , are uniformly bounded . this with the result of proposition [ prop21 ] and lead to the conclusion that and , , are uniformly bounded , and hence assumption [ designcond ] guarantees that as , for . furthermore , using the result of proposition [ prop21 ] and the fact that as , it can be verified from - that : at , if is rooted at , or is rooted in the case .consequently , one can conclude from theorem [ theorem001a ] that , , for as .since as , for , the result of proposition [ prop21 ] implies that as if is rooted at .the same proposition implies that as , for some , if is rooted and . in addition , since system - is iss , we have and as for . using , one gets is uniformly bounded and as for all . this with the fact that for lead to the conclusion that as , where is the laplacian matrix of the interconnection graph , is the identity matrix , is the vector containing all for , and is the kronecker product . 
finally , since implies that if contains a spanning tree , we conclude that as , for all if is rooted .the proof is complete .theorem [ theorem2 ] gives a solution to the synchronization problem of the class of nonlinear systems with relaxed communication requirements .in fact , each agent needs to send its information to its prescribed neighbors only at some instants of time .this information transfer is also subject to constraints inherent to the communication channels such as irregular communication delays and packet loss .an important feature of the above result is that it gives sufficient conditions for synchronization , given in , that are topology - independent and can be easily satisfied with an appropriate choice of the control gains .notice that the constant can be easily estimated in practice , and is simply defined as the maximum blackout interval of time an individual agent does not receive information from each one of its neighbors .then , the control gains , namely and , can be freely selected to satisfy ; in particular , the variable can be made arbitrarily large , which is advantageous in the case where the parameter is roughly or over estimated . on the other hand , condition is equivalent to , which specifies the maximal allowable time interval during which each agent can run its control algorithm without receiving new information from its neighbors .this allowable interval of time does not rely on some centralized information on the interconnection topology between the systems and can be made arbitrarily large .it should be also pointed out that the results of theorem [ theorem2 ] are obtained under mild assumptions on the interconnection graph . in thisregard , note that condition is imposed for all agents , where , in view of the assumptions on , the set contains at most one element .the term in can be selected in different ways using the estimates of the desired velocity of the -th agent .the choices other than include : and in view of proposition [ prop21 ] , any of the choices , - can be used for our purposes . the control scheme in theorem [ theorem2 ]can be applied to the case where the desired velocity is available to all systems , _i.e. , _ . in this case , the observer - is not needed and the following result , which can be shown following similar arguments as the proof of theorem [ theorem2 ] , is valid . 
[ cor1 ]consider the network of -systems , where the interconnection topology is described by a directed graph and the communication process between the systems satisfies assumption [ assumptiondelay01 ] .suppose that each system is controlled by a control law satisfying assumption [ designcond ] , where the corresponding reference velocity is defined in , where , , and is obtained from - with , , and is given in .let the control gains satisfy .then , and as for all and for arbitrary initial conditions if contains a spanning tree .in this section , we apply the proposed approach to the class of fully - actuated heterogeneous euler - lagrange systems .the systems dynamics are given by for , where is the vector of generalized configuration coordinates , is the vector of torques associated with the system , , , and are the inertia matrix , the vector of centrifugal / coriolis forces , and the vector of potential forces , respectively .the inertia matrices are symmetrical and positive definite uniformly with respect to .other common properties of euler - lagrange systems ( [ model_case1 ] ) are as follows : * the matrix is skew symmetric .* there exists such that holds for all .in addition , and are bounded uniformly with respect to . *each system in admits a linear parametrization of the form , where is a known regressor matrix and is the constant vector of the system s parameters .we assume that the systems are subject to model uncertainties ; the parameters in p.3 are unknown .we aim to achieve objectives [ objective1 ] and [ objective2 ] for the euler - lagrange systems under a directed interconnection graph and the communication constraints described in section [ section_comm ] . for this purpose , we consider the following control input in for , where the matrix is symmetric positive definite, is defined in p.3 , is an estimate of the parameters , , and with where , the control gains are defined as in theorem [ theorem2 ] , is defined in , is obtained from - with the discrete - time observer where is given in and is defined after .then , the following result is valid .[ cor_el ] consider the network of euler - lagrange systems interconnected according to and suppose that assumption [ assumptiondelay01 ] holds .for each system , let the control input be given in with - , and suppose condition is satisfied .then , objective 1 and objective 2 are achieved under the conditions on the interconnection graph given in theorem [ theorem2 ] .the proof of this result follows from theorem [ theorem2 ] by noting that and is well defined , and the control law - is the standard adaptive control scheme proposed in for euler - lagrange systems that satisfies assumption [ designcond ] .in fact , using the lyapunov function , with , one can show that and , leading to the first point in assumption [ designcond ] . also , properties p.2 and p.3 guarantee that if .then , invoking barblat lemma , one can conclude that if are uniformly bounded , then as , which is the second point in assumption [ designcond ] .the control scheme in corollary [ cor_el ] extends the relevant literature dealing with euler - lagrange systems with communication constraints ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? 
?* for instance ) to the case where a non - zero final velocity is assigned to the team , the communication between agents is intermittent and subject to varying delays and possible packet loss , and under a directed interconnection graph that contains a spanning tree .in addition , this control scheme extends the work in to the case of intermittent and delayed communication without using a centralized information on the interconnection topology .note that in , objective [ objective2 ] is achieved , in the case of delay - free continuous - time communication between agents , under some topology - dependent conditions .we provide in this section simulation results for the example in section [ app_example ] . specifically , we consider a network of ten lagrangian systems ; , modeled by equations with , , , and where , the variables , , are given as : , , , , and , with , , , and . the parametrization satisfying property p.3 for each systemis given as : \in\mathbb{r}^{2\times 5} ] .this information is then delayed by ~\mathrm{sec}$ ] and considered as received by agent . due to the random choice of ,a simple logic is implemented to avoid sending information at a future with the same .this way , the parameter is estimated to be , the variable , for all , and the set can be easily obtained , the information of agent at some instants are lost ( not submitted ) , and the information received by agent is randomly delayed .the intermittent nature of the communication process as well as varying communication delays and packet dropout are illustrated in fig .[ fig : sine : test ] , which shows the received discrete - time signal when the signal is sent according to the communication process described above .we implement the control scheme developed for euler - lagrange systems in section [ app_example ] .first , we consider the case where , which indicates that the systems labeled and are the only systems having access to the desired velocity given by .the observer - is updated at , and the control gains are selected as : , , , , . note that this choice of the gains satisfies condition with .the weights of the communication links of , which is the same as with assigned weights on its links , are set such that .[ fig : positions : leaders ] and fig .[ fig : velocities : leaders ] illustrate the relative positions and relative velocities defined as , for , and for , where subscript ` ' is used for the desired velocity .it is clear that all agents synchronize their positions and velocities with the desired velocity .the output of the discrete - time observer is given in fig .[ fig : velocity : estimates : leaders ] , with , where it can be seen that the desired velocity estimate of each agent converges to the desired velocity available to the leader agents . .] . ] . ]next , we consider the case where . using the same above control parameters , the obtained results are shown in fig . [fig : positions : leaderless]-[fig : velocity : estimates : leaderless ] where is defined for .these figures show that all systems synchronize their positions , and their velocities converge to the final velocity dictated by the output of the discrete - time velocity estimator . .] . ]we addressed the synchronization problem of second - order nonlinear multi - agent systems interconnected under directed graphs . 
using the small - gain framework , we proposed a distributed control algorithm that achieves position synchronization in the presence of communication constraints .in contrast to the available relevant literature , the proposed approach guarantees that all agents velocities match a desired velocity available to only some leaders ( or a final velocity agreed upon by all agents in the leaderless case ) , while information exchange between neighboring agents is allowed at irregular discrete time - intervals in the presence of irregular time - delays and possible packet loss .in fact , we proved that synchronization is still achieved even if each agent in the team runs its control algorithm without receiving any information from its neighbors during some allowable intervals of time .the conditions for synchronization derived in this paper can be satisfied by an appropriate choice of the control gains .future research will consider the extension of this work to the case of variable desired velocity .consider the consensus algorithm - . the interaction between the agents in the system - is described by a directed graph , which can formally be obtained from the graph by modifying some of the its links , as follows : ( 1 ) removing the incoming arcs to each leader node ( or agent ) , ( 2 ) adding a directed link from any leader node to any other leader node , and ( 3 ) adding a self arc to each node in the graph .it is straightforward to verify that , if the directed graph is rooted at , then is also rooted at . in the case of no leaders ( ) , the above modifications reduce to adding a self arc to each node ; in this case , it is trivial that is rooted if is rooted . in view of the above discussion ,the consensus algorithm - can be formally written as for all , where and is a delay that takes some integer value at and , in view of assumption [ assumptiondelay01 ] and , satisfies for all .note that , and for all if .let , where only if , which defines the set of those agents whose information is used in the update rule of agent at instants .it is clear that is a directed graph with at most one directed link connecting each ordered pair of distinct nodes and with exactly one self arc at each node . according to theorem 2 in ,the states of satisfy exponentially as , , for some , if the sequence of graphs is repeatedly jointly rooted .we claim that the latter condition is satisfied under the assumptions of proposition [ prop21 ] . to show this , pick an arbitrary and consider the composition of graphs . since contains self arcs on each node , for all , the edges of , , are also edges in .consider first the case where and is rooted at .equation implies that contains a directed link from any leader node to any other leader node , for all .in addition , in view of the definition of , assumption [ assumptiondelay01 ] implies that for each and , , the information is successfully delivered to agent at least once per sampling periods .therefore , it can be verified that , for each , if is an edge in , then is also an edge in the composition of graphs .in fact , if , the definition of implies that is an edge in at least one of the graphs , , , .consequently , one can conclude that all the edges of are also edges in the composition of graphs , and therefore , is rooted at since is rooted at . 
as a result, the sequence of graphs is repeatedly jointly rooted .similar arguments can be used to show that is rooted in the case where and contains a spanning tree , and hence rooted .now , since the states of the leaders are fixed , _ i.e. , _ for all and , one can conclude that in the case where .the rest of the proof follows in view of the dynamics , for , which can be rewritten as , , and describe the dynamics of an asymptotically stable system with an exponentially convergent perturbation term .j. mei , w. ren , and g. ma , `` distributed coordinated tracking with a dynamic leader for multiple euler - lagrange systems , '' _ ieee transactions on automatic control _ , vol .56 , no . 6 , pp .14151421 , 2011 .g. chen and f. l. lewis , `` distributed adaptive tracking control for synchronization of unknown networked lagrangian systems , '' _ systems , man , and cybernetics , part b : cybernetics , ieee transactions on _ , vol .41 , no . 3 , pp .805816 , 2011 .z. meng , d. v. dimarogonas , and k. h. johansson , `` leader follower coordinated tracking of multiple heterogeneous lagrange systems using continuous control , '' _ ieee transcations on robotics _30 , no . 3 , pp .739745 , 2014 .j. mei , w. ren , j. chen , and g. ma , `` distributed adaptive coordination for multiple lagrangian systems under a directed graph without using neighbors velocity information , '' _ automatica _ , vol .49 , pp . 17231731 , 2013 .u. mnz , a. papachristodoulou , and f. allgwer , `` robust consensus controller design for nonlinear relative degree two multi - agent systems with communication constraints , '' _ ieee transactions on automatic control _56 , no . 1 ,pp . 145151 , 2011 .e. nuo , r. ortega , l. basaez , and d. hill , `` synchronization of networks of nonidentical euler - lagrange systems with uncertain parameters and communication delays , '' _ ieee transactions on automatic control _56 , no . 4 , pp . 935941 , 2011 .a. abdessameud , a. tayebi , and i. g. polushin , `` attitude synchronization of multiple rigid bodies with communication delays , '' _ ieee transactions on automatic control _ ,57 , no . 9 , pp . 24052411 , 2012 .a. abdessameud , i. g. polushin , and a. tayebi , `` synchronization of lagrangian systems with irregular communication delays , '' _ ieee transactions on automatic control _59 , no . 1 ,pp . 187193 , 2014 .u. mnz , a. papachristodoulou , and f. allgwer , `` delay dependent rendezvous and flocking of large scale multi - agent systems with communication delays , '' in _ proc . of the 47th ieee conference on decision and control , 2008_.1em plus 0.5em minus 0.4emieee , 2008 , pp .20382043 .g. wen , z. duan , w. ren , and g. chen , `` distributed consensus of multi - agent systems with general linear node dynamics and intermittent communications , '' _ international journal of robust and nonlinear control _ , 2013 .g. wen , z. duan , z. li , and g. chen , `` consensus and its -gain performance of multi - agent systems with intermittent information transmissions , '' _ international journal of control _ , vol .85 , no . 4 , pp . 384396 , 2012 .m. cao , a. s. morse , and b. anderson , `` reaching a consensus in a dynamically changing environment : a graphical approach , '' _ siam journal on control and optimization _47 , no . 2 ,pp . 575600 , 2008 .m. cao , a. s. morse , and b. anderson , `` reaching a consensus in a dynamically changing environment : convergence rates , measurement delays , and asynchronous events , '' _ siam journal of control and optimization _ ,47 , no . 
2 ,pp . 601623 , 2008 .
|
this paper studies the synchronization problem of second - order nonlinear multi - agent systems with intermittent communication in the presence of irregular communication delays and possible information loss . the control objective is to steer all systems' positions to a common position with a prescribed desired velocity that is available to only some leaders . based on the small - gain framework , we propose a synchronization scheme relying on an intermittent information exchange protocol in the presence of time delays and possible packet dropout . we show that our control objectives are achieved with a simple selection of the control gains provided that the directed graph describing the interconnection between all systems ( or agents ) contains a spanning tree . the example of euler - lagrange systems is considered to illustrate the application and effectiveness of the proposed approach .
|
in many machine learning ( ml ) applications , one needs to compute the expected value of a quantity . in continuous spaces, this means estimating the value of an integral of the form = { { \hat{f}}}= \int f(x)p(x)\,dx\ ] ] where is the function of interest , and is the probability of the input . often in mlcontexts this integral is not known analytically , and so must be estimated from a finite set of samples of the function which requires an estimator algorithm for mapping that set to an estimate of the integral .when the samples are generated deterministically as in quadrature the quality of the estimator is just its error .when the samples are instead generated randomly as in monte carlo ( mc ) algorithms the quality of the estimator can be quantified as its expected squared error .one way to reduce the error of an estimator based on mc samples is by exploiting some side information .an important example of such information is a `` control variate '' , which is a function with known mean whose behavior is correlated with .the difference between the sample mean of the control variate and its actual mean provides information about the difference between the sample mean of and _ its _ true mean information which can then be exploited to correct that sample mean to provide a better estimate of the integral .unfortunately , most problems of interest do not have a natural control variate to perform such regularization .the core idea of the stacked monte carlo ( stackmc ) algorithm is to _ construct _ a control variate by training a supervised learning algorithm on the available data samples .the supervised learning algorithm is chosen such that the expected value of the fit can be found analytically ( or cheaply through sampling ) .then , the fit is evaluated on held - out data samples , and the discovered performance is used to estimate the quality of the control variate .the original presentation of stackmc tested the algorithm under simple sampling , finding stackmc has expected error at least as low as the better of mc and the fitting algorithm alone . herewe provide a deeper theoretical understanding of stackmc and extend stackmc to more sophisticated mc techniques .first , we review the original presentation of stackmc and compare the estimation procedure with other algorithms .we examine some of the implicit assumptions , and find an improved estimator for the quality of the fit .we then present modifications that extend the algorithm to new regimes of interest , specifically when samples are generated from quasi - monte carlo , when samples are generated from importance sampling , and finally when the input domain is discrete instead of continuous .we find that with the appropriate modifications , the estimate under stackmc has error at least as low as the monte carlo estimate in almost all cases , and in most cases the stackmc error is also at least as low as the fitting algorithm used alone .the estimation error of an arbitrary estimator , ] , and is the bias of the estimator , ] , is the error under the sampling procedure . 
if is chosen such that > \alpha^2 e\left [ \left({{\tilde{g}}}- { { \hat{g}}}\right)^2 \right]\ ] ] then the error of this estimation procedure is lower than the original monte carlo estimate .this illuminates the importance of choosing properly .it is not the case that setting to a fixed constant , say , necessarily leads to variance reduction .in fact , a value of that is too large will actually _ increase _ the variance of the estimator .the value for which minimizes the variance is found by taking the derivative with respect to alpha and setting it to zero .the second derivative is always positive , so this estimator for is the unique global minimizer of : }{e\left [ \left({{\tilde{g}}}- { { \hat{g}}}\right)^2 \right ] } = \frac{cov({{\tilde{g}}}- { { \hat{g } } } , { { \tilde{f } } } ) + b_g b_f}{var({{\tilde{g}}}- { { \hat{g } } } ) + { b_g}^2}\end{aligned}\ ] ] where ] .this optimal value for can not be estimated from the samples , as there is only data from 1 -fold partition . while one could make several -fold partitions ,the estimate of is the same for all of them .however , one can approximate this equation by using the per - fold information by taking as simply the expected value for the fold , and estimate and using only the held - in samples .furthermore , it is impossible to directly estimate since of course is unknown , but many sampling distributions are known to be unbiased , and so . with these approximations ,the optimal estimator for becomes this equation for is , in general , different than , the estimator used in .these two estimates become the same when : 1 ) ( leave - one - out ) , 2 ) , and 3 ) is constant .the first assumption is reasonable , as it gives data points with which to compute instead of only .these latter assumptions characterize the difference between the two estimators . if the held - out samples are not an unbiased estimator of the mean of the fit , then .( this case is discussed further below . )when , this third assumption is similar to the statement , i.e. , the covariation of the fit sample mean is much larger than the covariation of the mean of the fit .this is a reasonable assumption when the number of samples is large , since the fit should stabilize as the number of samples grows large .however , if the sample size is small , this assumption may not hold . performance could be improved by using instead of the original estimator , or perhaps with the additional regularization of setting if it is known to be true .this equation for can be inserted back into to find the expected error reduction given optimal estimation for . 
performing this substitution one finds & = e[({{\tilde{f}}}- { { \hat{f}}})^2 ] - \frac{e \left [ \left ( { { \tilde{g}}}- { { \hat{g}}}\right ) \left ( { { \tilde{f}}}- { { \hat{f}}}\right ) \right]^2}{e \left [ \left ( { { \tilde{g}}}- { { \hat{g}}}\right)^2 \right ] } \ ] ] having the optimal value of guarantees a reduction in expected squared error , as the monte carlo error is reduced by a strictly positive term .this equation is similar to the standard control variates result , but , again , differs in that the individual estimators and may be biased , and may fluctuate .if , and , then the standard control variates formula is recovered : = \left ( 1 - \rho^2 \right ) var({{\tilde{f}}})\ ] ] where is the correlation between and .this highlights that a good fitting algorithm will produce a high correlation between and , while keeping the variation in the mean across fits small .as mentioned , this new estimate of is likely to be better when there is large variation of across the folds .this is likely to happen when the sample size is only slightly larger than the number of free parameters .we test this hypothesis by constructing a simple example .the function of interest is a simple quadratic , , and the sampling distribution is uniform over the unit interval .the fitting algorithm is a linear fit to the held - in data samples .the results of this test case are shown in fig.[fig : updatedalpha ] , and they confirm the hypothesis . at very small numbers of sample points ,there is significant improvement by including the fluctuations of in the estimate of .as the number of samples grows , this effect becomes negligible , and the two estimators have equivalent performance .frequently , one may consider multiple different fitting algorithms to train on the data . instead of choosing among them, they may all be used together to jointly reduce the estimation error .given a set of supervised learning algorithms , can be written as which introduces some number of control variates , .similar to the analysis above , the expected error of this estimator is - 2 \sum_i \alpha_i e[({{\tilde{g}}}_i - { { \hat{g}}}_i)({{\tilde{f}}}- { { \hat{f } } } ) ] + \sum_i \sum_j \alpha_i \alpha_j e[({{\tilde{g}}}_i - { { \hat{g}}}_i)({{\tilde{g}}}_j - { { \hat{g}}}_j)]\end{aligned}\ ] ] let ] , so that becomes = e[({{\tilde{f}}}-{{\hat{f}}})^2 ] - 2 u^t \boldsymbol{\alpha } + \boldsymbol{\alpha}^t w \boldsymbol{\alpha}\end{aligned}\ ] ] this is quadratic in , with minimizer .the estimation error using all of the fits is & = e[({{\tilde{f}}}-{{\hat{f}}})^2 ] - 2 u^t w^{-1 } u + ( w^{-1 } u)^t w ( w^{-1 } u ) \\ & = e[({{\tilde{f}}}-{{\hat{f}}})^2 ] - u^t w^{-1 } u\end{aligned}\ ] ] the total error reduction from the joint set of fitting algorithms depends on how correlated the fits are with one another and the true function samples .consider the case where all fitters have equal covariance with the function samples , , and equal variances , .further , assume the off - diagonal terms are represented by . in the ideal case ,the fitters are uncorrelated , and the total variance reduction is . on the other hand ,if all of the fitters are perfectly correlated with one another , , and the error reduction is just , i.e. , there is no extra variance reduction from incorporating multiple fits .this shows that a set of fitting algorithms should be sought that each have strong coupling with the true function but weak coupling amongst themselves . 
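to make the single - fitter version of the procedure concrete , the following sketch is our own illustrative implementation ( not the authors' code ) : it partitions iid samples into folds , fits a linear control variate on each held - in set , and corrects the plain monte carlo average using the held - out residuals . the per - sample estimate of alpha used here is one simple choice among those discussed above , and uniform sampling on the unit hypercube is assumed so that the mean of each fold's fit is available in closed form .

    import numpy as np

    def stackmc_estimate(x, y, n_folds=10, rng=None):
        # x: (n, d) iid samples from the uniform distribution on [0, 1]^d
        # y: (n,) values f(x); the control variate is a per-fold linear fit
        # g(x) = a + b.x whose mean under uniform sampling is a + 0.5 * sum(b)
        rng = np.random.default_rng(rng)
        n = len(y)
        idx = rng.permutation(n)
        f_bar = y.mean()                  # plain monte carlo estimate
        g_out = np.empty(n)               # fold fit evaluated on held-out points
        g_hat = np.empty(n)               # analytic mean of that fold's fit
        for fold in np.array_split(idx, n_folds):
            train = np.setdiff1d(idx, fold)
            X = np.hstack([np.ones((len(train), 1)), x[train]])
            coef, *_ = np.linalg.lstsq(X, y[train], rcond=None)
            a, b = coef[0], coef[1:]
            g_out[fold] = a + x[fold] @ b
            g_hat[fold] = a + 0.5 * b.sum()
        resid = g_out - g_hat
        c = np.cov(resid, y)
        alpha = c[0, 1] / c[0, 0]         # simple per-sample control-variate alpha
        return f_bar - alpha * resid.mean()

on the quadratic test case discussed above ( f(x) = x^2 with uniform sampling on the unit interval ) , this corrected estimate can be compared directly against the uncorrected sample mean y.mean() .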
in theory , adding a new fitting algorithm should always be beneficial ( or at least not harmful ) , though in practice errors in the estimation of could lead to degraded performance . as a test for multiple fitting algorithms, we chose the rosenbrock function ] , which are zero under iid sampling , are non - zero on a per - fold basis with quasi - monte carlo sampling .this problem can be mitigated by removing the correlation between held - in and held - out samples . to keep the expected error small ,this must be done without introducing significant variance .one algorithm that meets these criteria first partitions the data into folds , and then replaces the held - in samples by doing a bootstap without replacement from the full dataset .this removes much of the correlation between the held - in and held - out samples . in theory ,one may be able to reduce correlations further by performing a bootstrap with replacement , but in practice this can cause numerical issues when too many of the same sample are in the held - in set . in practice , this estimator still has relatively high variance due to the in - sample procedure .this can be mitigated by performing this partition / bootstrap procedure several times and setting as the average over all of the runs .we tested this bootstrap procedure on two different cases . in both 10-d rosenbrock . in the first ,samples are generated with latin - hypercube sampling and a uniform distribution on [ -3 , 3 ] in each dimension . in the second case data drawn from the scrambled halton sequence according to a gaussian with in every dimension .different draws from the halton sequence were done with an initial random burn - in length .the results are seen in fig.[fig : rosenuniflatin ] and fig.[fig : rosengausshalton ] . in both of these example cases, it is seen that the normal stackmc algorithm has much higher expected error than the monte carlo samples . in fig .[ fig : rosengausshalton ] , in fact , we see that this performance gap persists even as the number of samples grows . in both of the examples ,this bootstrap procedure reduces the estimation error , and when different samplings are combined , we see that an error at least as low as monte carlo is recovered . for certain regimes in fig . [fig : rosengausshalton ] , the error is in fact lower than the monte carlo error and the direct fit error .if the sample data are generated via importance sampling , the original stackmc equation no longer holds .the control variates equation is re - written as where is the importance sampling distribution .the natural choice for the target of the supervised learning algorithm is no longer the true function values themselves , but instead the function values scaled by the importance sampling correction factor .like above , we consider the 10-rosenbrock and a uniform distribution , but this time the samples are generated under the importance sampling distribution $ ] .this distribution grows towards the edge of the hypercube like the rosenbrock function itself . 
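a minimal sketch of how this reweighting enters the fold - based correction is given below ; it is our own illustration , with the fitter passed in as a callable and mean_of_fit standing for whatever analytic or cheaply sampled estimate of the fit's mean under the sampling distribution q is available .

    import numpy as np

    def is_stackmc(x, y, w, fit, mean_of_fit, n_folds=5, rng=None):
        # x: (n, d) samples drawn from the importance distribution q
        # y: (n,) values f(x);  w: (n,) weights p(x) / q(x)
        # fit(x_in, t_in) -> callable g approximating the weighted target t = f*p/q
        # mean_of_fit(g)  -> estimate of the mean of g under q
        rng = np.random.default_rng(rng)
        t = y * w
        f_bar = t.mean()                  # plain importance-sampling estimate
        idx = rng.permutation(len(t))
        resid = np.empty(len(t))
        for fold in np.array_split(idx, n_folds):
            train = np.setdiff1d(idx, fold)
            g = fit(x[train], t[train])
            resid[fold] = g(x[fold]) - mean_of_fit(g)
        c = np.cov(resid, t)
        return f_bar - c[0, 1] / c[0, 0] * resid.mean()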
in high dimensions , however , the polynomial fit can not be integrated analytically under the distribution as there are an exponential number of terms in the product .the fit must be estimated using monte carlo instead .this estimate is performed using samples generated from , representing a case where samples of are moderately expensive to produce .the results from this experiment can be seen in fig .[ fig : rosenunif_impsampquad30d ] .it is seen that this high sampling error causes the fit estimation error to be quite high , and yet in the medium range of samples stackmc has lower error than mc .in addition to demonstrating is - stackmc , this also highlights that does not need to be estimated exactly . in some cases , one wants to use mc to estimate a sum instead of an integral , for example when is a boolean string , . a fitting algorithm for this spaceis found by noting that any function of a -dimensional bit string can be represented as this is essentially a discrete fourier transform of using an orthonormal basis of the space given by the walsh functions .an approximation to can be created by truncating this expansion and only considering contributions from a subset of the walsh functions .this learning algorithm is a fast off the shelf " learner of functions from bit strings to the reals . as an example , we take to be the the four peaks function of baluja and caruana , and to be uniform .the fit ignores all terms with three or more components , and sets the rest using least - squares .this fitting algorithm has very nice property that its expected value is simply as all of the other terms have 0 expectation .the results from this experiment are shown in fig .[ fig : twinpeaks_walsh ] .we see that not only does stackmc have the the lowest squared error for all sample sizes , but for a range of sample sizes the error is much lower than either mc or the walsh fit on their own .stackmc uses supervised learning to construct a control variate from mc samples and then uses stacking to reduce the resultant overfitting problem .it is a purely post - processing technique , applicable to any mc estimator , and can significantly reduce the variance of monte carlo integral estimators without adding bias , thereby significantly improving accuracy .here we derive expressions for stackmc s expected error , and use it to motivate a new estimator of stackmc s parameter , which can reduce stackmc s error in extreme cases .we also extend stackmc to incorporate multiple control variates ; to use data generated with importance sampling and with quasi - monte carlo ; and to categorical sample spaces .we present experiments verifying the power of these extensions : applying them to data generated by an mc algorithm never results in higher error than the uncorrected mc estimate , never results in higher error than the fitter ( except for very small data sets ) , and frequently significantly outperforms both .we also find that stackmc may be more flexible than previously appreciated ; our results suggest that obtaining an accurate estimate of , the integral of the fit , may not be necessary just to improve over the original mc estimate .there are several major areas of future research .we intend to investigate the use of stackmc for mcmc methods .( mcmc methods create correlations among the samples similar to those of quasi - monte carlo , and so we may need to adapt stackmc for mcmc similarly to how we did it for quasi - mc . 
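a compact sketch of such a truncated walsh regression is given below ( illustrative code with our own naming ; it keeps only parity functions over at most two bits , matching the truncation used here ) . because every non - empty parity function has zero mean under the uniform distribution on bit strings , the expectation of the fit is simply its constant coefficient .

    import itertools
    import numpy as np

    def walsh_features(X, max_order=2):
        # X: (n, d) array of bits in {0, 1}; one column per subset S with
        # |S| <= max_order, holding chi_S(x) = prod_{i in S} (-1)^{x_i}
        n, d = X.shape
        signs = 1.0 - 2.0 * X             # map {0, 1} -> {+1, -1}
        subsets = [s for r in range(max_order + 1)
                   for s in itertools.combinations(range(d), r)]
        cols = [np.prod(signs[:, list(s)], axis=1) if s else np.ones(n)
                for s in subsets]
        return np.column_stack(cols), subsets

    def walsh_fit(X, y, max_order=2):
        # least-squares fit in the truncated walsh basis; returns the fit as a
        # callable together with its mean under uniform bits (the empty-set coefficient)
        Phi, _ = walsh_features(X, max_order)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        g = lambda Z: walsh_features(Z, max_order)[0] @ coef
        return g, coef[0]

the callable g and its known mean coef[0] can then be plugged into the same fold - based correction sketched earlier for the continuous case .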
) we also intend to extend stackmc to cases where the probability distribution defining the desired expectation value is unknown .( in theory , a control variate could be used to estimate the probability distribution itself , and stackmc style techniques applied with that control variate . )t. gunter , m. a. osborne , r. garnett , p. hennig , and s. j. roberts . sampling for inference in probabilistic models with fast bayesian quadrature . in _ advances in neural information processing systems _ ,pages 27892797 , 2014 .g. r. ravanbakhsh siamak , poczos barnabas .a cross entropy optimization method for partially decomposable problems . in d.p. m. fox , editor , _ proceedings of the twenty - fourth aaai conference on artificial intelligence. special track on ai and bioinformatics _ , pages 12801286 , atlanta , usa , july 11 15 2010 .aaai press .p. smyth and d. wolpert .stacked density estimation . in _ proceedings of the 1997 conference on advances in neural information processing systems 10_ , nips 97 , pages 668674 , cambridge , ma , usa , 1998 . mit press .
|
monte carlo ( mc ) sampling algorithms are an extremely widely - used technique to estimate expectations of functions , especially in high dimensions . control variates are a very powerful technique to reduce the error of such estimates , but in their conventional form rely on having an accurate approximation of , _ a priori_. stacked monte carlo ( stackmc ) is a recently introduced technique designed to overcome this limitation by fitting a control variate to the data samples themselves . done naively , fitting a control variate to the data would result in overfitting , typically _ worsening _ the mc algorithm s performance . stackmc uses in - sample / out - sample techniques to remove this overfitting . crucially , it is a post - processing technique , requiring no additional samples , and can be applied to data generated by _ any _ mc estimator . our preliminary experiments demonstrated that stackmc improved the estimates of expectations when it was used to post - process samples produced by a `` simple sampling '' mc estimator . here we substantially extend this earlier work . we provide an in - depth analysis of the stackmc algorithm , which we use to construct an improved version of the original algorithm , with lower estimation error . we then perform experiments with stackmc on several additional kinds of mc estimators , demonstrating improved performance when the samples are generated via importance sampling , latin - hypercube sampling and quasi - monte carlo sampling . we also show how to extend stackmc to combine multiple fitting functions , and how to apply it to discrete input spaces .
|
any statistical analysis of real systems is based on the ergodic principle for the microscopic dynamics , that implies the relaxation towards steady states and the independence property of elementary components . even if the existence of microscopic interactions is necessary for the system to evolve toward a statistical equilibrium , in this state any particle moves independently from the others and all the particles are statistically equivalent( any particle may be representative for the whole ) .the thermodynamics laws that are derived from a statistical mechanics approach , concern some macroscopic observables of the system , evolving adiabatically with respect the microscopic relaxation time ( i.e. we can consider the whole system in a almost equilibrium state ) , so that the effects of single particle dynamics are conveniently described by means of stochastic processes . as a consequence ,there should exist a natural separation among macroscopic and microscopic space - time scales .indeed space and time scales are expected to be strictly correlated : to understand small scale phenomena we need to solve short time scales and viceversa .nevertheless the statistical mechanics has a great success in describing evolution of macroscopic systems and there is a strong effort to generalize the results for a non - equilibrium thermodynamics and for application to complex systems .the statistical properties of social systems have been recently considered under a different point of view due to the possibility of recording large microscopic data sets .the main problem is what are the macroscopic effects of cognitive behavior for `` social particles '' .indeed the cognitive behavior would imply the existence of strong bidirectional interactions among the dynamics at different space and time scales of the system .emergence and self - organization characterize the macroscopic states , but the question is which macroscopic observables ( if they exist ) may enrol the complex nature of the system .these variables may also play an important role in the study of phase transitions and in the control parameters definition . in italy gps data on individual vehicle pathsare currently recorded for insurance reasons over a sample of the whole private vehicle population .this data set gives the opportunity to study the individual mobility demand in urban contexts .the gps data set contains the geographical coordinates , the time , the instantaneous velocity and the path length of individual trajectories at positions whose relative distance is of order km .special signals are recorded when the engine is switched on and off .we remark that the data refer mainly to the private transportation mobility and that , due to privacy legal problems , we do not have any knowledge on the social features of individuals in the sample . 
in this paperwe analyze the statistical distributions of the path lengths of individual trajectories , of the activity downtime and the distribution of the monthly activity degree .our aim is to point out the main macroscopic features of urban mobility studying their correlation with the idea of `` asystematic mobility '' recently proposed by sociologists to explain the observational data in modern metropolis .we consider gps data recorded during march 2008 in the florence urban area .we show that some simple assumptions on single particles , like the existence of an `` individual mobility energy '' and of an `` individual mobility time '' that define the daily agenda , may explain the statistical laws emerging from the gps data .moreover in the equilibrium state individuals seem to minimize their interactions , behaving independently , so that the maximum entropy principle of statistical mechanics can be applied .these results are consistent with the idea that the sprawling phenomenon of modern cities implies that citizens move as stochastic particles .finally our analysis enlightens some average cognitive properties of individuals in urban mobility .the paper is organized as follows : in the first section we shortly described the gps data base for vehicle mobility ; in the remaining sections we discuss the three statistical laws on path lengths distribution , on the activities downtime and on the activity degree that are inferred from the data .a sample of of the private vehicles in italy has a gps system for insurance reason .any vehicle is associated to an i d number , so that it is possible to follow its mobility during a long time .each datum gives position , velocity , covered distance from the previous measure and quality of signal .the data give a sampling of individual trajectories each km , but a signal is also recorded any time the engine is switched on or off .the data suffer from the gps limited precision , in particular when the gps looses the satellite signal .these problems are especially relevant at starting points of the trajectories or when vehicles are parked inside a building , and short paths could be strongly affected by these pathologies . when the quality of signal is good the time precision of the recorded data is practically perfect , whereas the space precision is of the order of , usually sufficient to localize a vehicle on the road .both the instantaneous velocity and the covered space , are given with an adequate precision since they result from a calculation based on gps data recorded each second , but not registered .we have developed several methods to clean the data from spurious effects in order to avoid passible bias in the sample . in the present work we consider the gps data in the florence urban area recorded during march 2008 : these data are related to 35,000 vehicles in a circular area of radius km , around the historical center and defining different trajectories .we have restricted our analysis to the trajectories which start inside a circle of 10 km around the historical town and remain inside the considered area , so that with a good probability we select people living and moving in florence. then we look for the vehicles which perform daily loops from starting points that we identify as `` home '' : in this case the number of trajectories reduces to . 
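a minimal sketch of this geographic selection, assuming each trajectory is stored as an array of (latitude, longitude) fixes; the 10 km starting circle is the one quoted above, while the outer radius and all names are illustrative.

import numpy as np

EARTH_R = 6371.0  # mean earth radius in km

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * EARTH_R * np.arcsin(np.sqrt(a))

def select_trajectories(trajs, center, r_start=10.0, r_area=30.0):
    """Keep only the trajectories that start within r_start km of the city
    centre and never leave the circle of radius r_area km.  Only the 10 km
    starting circle is quoted in the text; the outer radius here is an
    illustrative value.  `center` is a (lat, lon) pair."""
    lat0, lon0 = center
    kept = []
    for t in trajs:
        d = haversine_km(t[:, 0], t[:, 1], lat0, lon0)
        if d[0] <= r_start and d.max() <= r_area:
            kept.append(t)
    return kept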
in figure [ figure1 ] we show the considered area where we have plotted the aggregate position gps data : the color refers to the different instantaneous velocities ( red means less than km / h , whereas yellow refers to a velocity in the interval km / h and the green to a velocity km / h ). we are completely ignorant on the social composition of the sample and on the specific drivers , but we expect that such individuals perform a mobility related to the activities present in the florence area and have a certain knowledge of the road network .the activity sprawling that characterizes the modern metropolis has certainly a strong influence on the individual mobility demand . even if the florence historical center is a very special area full of artistic and tourist attractions , but forbidden to private traffic, nevertheless we assume the activities randomly distributed in the urban system .this hypothesis is quite reasonable because we consider a large urban area and our sample is surely composed by inhabitants and not tourists . as a consequence, we expect that the citizen mobility agenda are influenced by individual features rather than by the city structure . in citiesthe stationary average traffic state should emerge as the result of individual interactions of cognitive particles which share the same spatial resources .in particular we assume that drivers organize their mobility , by applying a minimization strategy of the interactions with other individuals . in this conceptual framework ,the mobility can be seen as the realization of many independent individual agenda and the dynamical properties become similar to that of a boltzmann gas . even if it is obviously true that individuals are non - identical particles , path length and activity downtime can be considered common mobility features to all people , and good candidates for a statistical physics approach to describe the stationary state . for each vehiclewe have recorded the total lengths of his daily round trips for the whole considered period .the length distribution is plotted in fig .[ figure2 ] where we point out the existence of an interpolation with a maxwell - boltzmann distribution with km the characteristic daily path length .the distribution ( [ ( 1.1 ) ] ) provides a very good fit of the experimental data , and it can be justified by the maximum entropy principle under the assumption that the individuals are independent particles and that there exists an average daily trip length in the population ( see supplementary material ) . in such a casethe distribution ( [ ( 1.1 ) ] ) is realized when any particle chooses its mobility energy randomly .it is straightforward to associate an individual `` mobility energy '' to the daily path lengths .the assumption that citizens organize their mobility as they own an internal `` mobility energy '' , agrees with similar hypotheses discussed by r.kolb and d.helbing to explain the daily travel - time distributions for different transport modes . 
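the maximum - entropy argument can be made operational with a short sketch: constraining only the mean daily length gives an exponential (boltzmann-like) density whose maximum-likelihood scale is simply the sample mean; only this exponential form is assumed here, and the exact prefactor of the interpolation used in the figure is not reproduced.

import numpy as np

def fit_daily_length(lengths, n_bins=60):
    """Maximum-entropy fit of the daily path-length distribution.
    With only the mean constrained, the maximum-entropy density is
    p(l) = exp(-l / l0) / l0, whose maximum-likelihood scale l0 is the
    sample mean (in km).  Returns the scale, the bin centres, the empirical
    histogram and the model evaluated at the bin centres."""
    lengths = np.asarray(lengths, dtype=float)
    l0 = lengths.mean()                       # characteristic daily path length
    edges = np.linspace(0.0, lengths.max(), n_bins)
    hist, edges = np.histogram(lengths, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = np.exp(-centers / l0) / l0
    return l0, centers, hist, model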
in order to investigate the concept of the mobility energy together with the maximum entropy principle, we consider the relation between the daily path length and the single trip path length , building up the rank distribution of individual daily activities .to define an activity from gps mobility data , we apply a clustering procedure to vehicle stop positions , identifying the positions which lie in a circle of diameter m ( this is considered a acceptable distance between the true destination and the parking place ) .moreover we have associated an activity when the elapsed time before the next trip is greater than 15 minutes . in figure [ figure3 ]we plot the rank distribution for the daily activities together with an exponential interpolation that provides a very good fit of experimental observations .the equation ( [ ( 2.1 ) ] ) is consistent with the assumption that , in average , the individuals behave as independent random particles which define their daily agenda in a random way . according to previous hypotheses , it is possible to compute in analytical way the single trip length distribution , as the distribution realized by uniformly spreading points into a given segment of length . a simple calculation provides the single trip length distribution in the form ( see appendix ) where is a normalizing factor and is the maximum number of daily activities ; we remark that the choice of the points in the segment is contextual without any time - ordering .it is quite natural to assume that there should exist a correlation between the number of daily activities and the daily mobility length , but the gps data do not suggest any correlation function , so that we decided to use an effective daily length in the theoretical distribution ( [ ( 2.2 ) ] ) to make a comparison with the empirical one .it turns out that if we exclude the very short paths , the curve ( [ ( 2.2 ) ] ) fits very well with the experimental data ( see fig .[ figure4 ] ) with an effective daily mobility length km ( the result is not too sensitive to this particular value and we have not performed any optimization procedure ) . as a final remark from the figure [ figure4 ], we observe that the long trip length distribution differs from an exponential behavior and it rather seems to follow a power law .time is the second fundamental individual variable of human mobility , directly related to the dynamical realization of daily agenda . from the gps data base in the florence areawe have computed the downtime spent in each daily activity by a fixed individual ; we discard from the activities the sleeping time linked to circadian rhythms . the distribution of the activity downtimes of the recorded individuals is plotted in fig .[ figure5 ] , recovering the well known benford s law . in order to give a microscopical interpretation of the empirical downtime distribution, we assume that individuals can not determine a priori each activity downtime , because this is varied depending on unpredictable circumstances . 
according to this hypothesis , in the average, each particle has a finite mobility time at disposal to perform the desired activities , and he consumes the time in successive random choices up to the end of the whole mobility time .if one computes the interval distribution that is obtained by choosing successively points in a given segment , one get analytically the benford sdistribution shortly , the statistical results of the monthly mobility in the florence area , recorded by the gps data on vehicles , suggest that the macroscopic average properties are the same of those of boltzmann s particles moving in a homogeneous space with an average energy and a finite time at disposal .the energy introduces a global constraint in the individual daily mobility , whereas the time can be seen as a local constraint in the activity planning since it is consumed step by step .however computing the distribution of the total activity downtime for the monitored vehicles , we can get an idea of the typology of urban mobility described by our sample .the results are plotted in fig .[ figure6 ] we remark the presence of two peaks : one is related to short downtime activities , that probably corresponds to a specific use of the vehicle for a single trip , whereas the second peak centered at h denotes people performing a more complex mobility agenda , which contains the working activities . finally the presence of a long queue for the daily activity downtime ( h ) is probably due to business vehicles in our sample , that are used by different people .both the boltzmann distribution for the mobility energy and the benford s law for the activity downtime enroll stochastic features of the system , but they do not explain how such features can be related to the individual daily agenda , that are certainly the result of a cognitive behavior . in order to study this question , we perform a statistical analysis of the downtime related to daily activities , considering the monthly degree for the different individual activities ( i.e. the number of times that a citizen repeats a certain activity during a month) . let the activity downtime , we introduce the join probability to denote the probability of finding a -degree activity associated to a downtime .then by definition we have to recover the benford s law ( [ ( 3.1 ) ] ) by summing over we have also the equality where is the conditional probability for a downtime considering only the -degree activities , and is the probability to detect a degree activity ; the factor takes into account the multiplicity of the degree activities .the study of the conditional probability can shed some light to understand the mobility habits related to the use of private vehicles and to face the question of the relevance of repeated activities both in the mobility and in the use of time .remarkably the experimental observation suggest the existence of an universal probability distribution for the normalized downtime : where is the average downtime for the -degree activities .we read this universal function as the signature of the fact that individuals organize their time , when performing a private car mobility , in a common way independently from the specific activity , i.e. the relative downtime fluctuations are the result of a stochastic universal mechanism . 
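the claimed universality can be probed directly from the data: group the activities by their monthly degree k, rescale each group's downtimes by the group mean, and overlay the resulting histograms; a collapse onto a single curve is the signature invoked above. a minimal sketch, with illustrative array and parameter names:

import numpy as np

def conditional_collapse(downtimes, degrees, k_values=range(1, 11), bins=40):
    """Collapse test for the conditional downtime distribution.
    `downtimes` and `degrees` are per-activity arrays of equal length: for
    each monthly degree k, the downtimes of the k-degree activities are
    rescaled by their own mean <t>_k; if the conditional law depends on k
    only through <t>_k, the rescaled histograms should fall onto one curve."""
    downtimes = np.asarray(downtimes, dtype=float)
    degrees = np.asarray(degrees)
    edges = np.linspace(0.0, 5.0, bins + 1)   # in units of <t>_k
    curves, means = {}, {}
    for k in k_values:
        t_k = downtimes[degrees == k]
        if len(t_k) == 0:
            continue
        means[k] = t_k.mean()
        hist, _ = np.histogram(t_k / means[k], bins=edges, density=True)
        curves[k] = hist
    return means, edges, curves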
moreover there should exist a common feature among the individuals , concerning how they manage the downtime related to the -degree activities , since only the average value characterizes the dependence of the conditional probability .this universal character could be explained thinking that the variable is a generic `` measure '' of the mobility actions , valid for every individual .more precisely can be considered the temporal norm for all the mobility related activities . from the empirical data we detect activity downtimes and we have computed the dependence of the average value using the degree .it is evident from fig .[ figure7 ] that we have an almost linearly increasing behavior of as the degree increases .this means the existence of a relation between the activity degree and the activity `` use value '' ( individual satisfaction , profit , etc ... ) that introduces an individual tendency to repeat and to spend time in the activities with a relevant added value .a possible local interpolation of the empirical data is obtained by using the function ( continuous line in fig .[ figure7 ] ) where and . in figures[ figure8],[figure9 ] and [ figure10 ] we plot the empirical probability densities for different degree ( from to ) to investigate the existence of the universal distribution .there is a decreasing of the data number as increases , but all the distributions are computed with a sample of the same order ( from to ) .the figures enlighten three different features .there is a collapse of all the curves on a unique distribution : this is clear in the figure [ figure8 ] ( the tail spread is consistent with statistical fluctuations ) and in the first part of all the plotted distributions that contains the great majority of the data .all the distributions show a big contribution from the short times activities and a fast decaying tail for large .there is a smooth rise of a `` signal '' as increases denoted by the appearance of two peaks at : this is clear in the last figure [ figure10 ] .therefore the empirical observation gives a strong indication for the existence of an universal distribution for the normalized activity downtime , even if when we consider high degree activities ( ) some new features appear but with a small statistical weight .a possible interpolation of the distribution is given by where the coefficient has a value .the distribution ( [ univ ] ) is singular at the origin so that the interpolation is certainly approximated at ( see fig .[ figure3a ] in the appendix ) .the exponential decay is the typical boltzmann statistics as for the path length distribution , whereas the behavior is consistent with the benford s law for the downtime distribution . so that this `` universal distribution '' seems to mix both the features shown in fig .[ figure4 ] and [ figure5 ] . 
to explain the singular trend at the origin, we guess that it can be a sign of a strong free - will in the individual behavior in the short time activity range .surely for a more precise justification further study are required .we can use the interpolation ( [ univ ] ) to extract the signal from the high degree activity distribution , by computing the ratio between the empirical distribution data and the interpolation ( [ inter ] ) ; in the figure [ figure11 ] we plot the ratio distribution results .as it can be seen , the empirical data define two peaks centered at hours and at that are common to all the distributions when .even if these peaks are not statistically relevant , they can be related to individuals that perform repeated activities linked to the canonical working time schedule .clearly the working time schedule introduces further constrains to the individual mobility agenda , that are not taken into account by the universal distribution .now the existence of an universal distribution implies ( cfr .( [ cond ] ) ) then using the interpolation ( [ inter ] ) and performing the change of variable , we obtain that the benford s law ( [ ( 5.1 ) ] ) implies a power law distribution for the activity degrees(see appendix ) according to the estimate ( [ inter ] ) , we expect an exponent . in fig .[ figure12 ] we plot the empirical activity degree distribution with a numerical interpolation by a power law ; the data provide which is consistent with the analytical estimate ( [ degree ] ) .the citizens mobility is an interesting social phenomenon that involves a large number of `` intelligent '' elementary components with a free will individual property , so that we can speak of a cognitive dynamics . in this paperwe analyze a lot of car mobility data for the florence area , showing the emergence of three robust statistical laws for the path lengths , the activity downtime and degree .these laws can be explained as the direct consequence of some simple hypotheses as the particle independence , and above all assuming the existence of an individual mobility energy and of a finite individual time spent for the desired urban activities .moreover , from the downtime distribution we deduce an universal function which fits with empirical observations and data .we think that this universal function can be interpreted as an indication of the cognitive time perception common to all human beings , or at least surely common to the car drivers .finally the urban mobility is clearly complex , but the our steady state statistics can not point out the typical complexity signatures as , for instance , self - organized states . to detect complexity in mobility dynamics , it is necessary to investigate the transients , i.e. the states far from equilibrium .we thank octo telematics for the access to the gps data base on the florence area .the authors are in debt with prof .dirk helbing and prof .luciano pietronero for several stimulating discussions. 10 r. balescu _ equilibrium and nonequilibrium statistical mechanics _ , new york , wiley - interscience , ( 1975 ) .d. brockmann , l. hufnagel3 and t. geisel _ the scaling laws of human travel _ nature , * 439 * , ( 2006 ) , pp .462 - 465 .gonzlez1 , c.a .hidalgo and a.l .barabsi _ understanding individual human mobility patterns _ nature , * 453 * , ( 5 june 2008 ) , pp 779 - 782 . t.van gelder , _ the dynamical hypothesis in cognitive science _ , behavioral and brain sciences * 21 * , ( 1998 ) , pp .615 - 628 .a. bazzani , s. rambaldi , b. giorgini and l. 
giovannini _ mobility in modern cities : looking for physical laws _eccs07 conference proceedings , n. 132 , ( 2007 ) .landau , e.m .lifshitz _ statistical physics - course of theoretical physics _ * 5 * , third edition , butterworth - heinemann , ( 1980 ) m. batty _ cities and complexity _ the mit press , cambridge , massachusetts ( 2005 ) .t. domencich and d.l .mcfadden _ urban travel demand : a behavioral analysis _north - holland publishing co. , ( 1975 ) .i. volkov , j.r .banavar , s.p . hubbell and a.maritan _ infering species interactions in tropical forests _ pnas , * 106 - 33 * , ( 2009 ) , pp .13854 - 13859 .r. klb and d. helbing _ energy laws in human travel behaviour _ , new j. phys . * 5 * , ( 2003 ) pp.48.1 - 48.12 .i. benenson , k. martens _ from modeling parking search to establishing urban policy _ kunstliche intelligenz , * 3 * , ( 2008 ) , pp.3 - 8 .l. pietronero , e. tosatti , v. tosatti and a. vespignani , _ explaining the uneven distribution of numbers in nature : the laws of benford and zipf _ physica a : statistical mechanics and its applications , * 293 * ( 12 ) , ( 2001 ) , pp .297 - 304 .s. schnfelder and k.w .axhausen _ structure and innovation of human activity space _arbeitsbericht verkerhrs und raumplanung 258 , ivt and eth zurich , ( 2004 ) .s. samuelson and w. d. nordhaus _economics 17th edition _, mcgraw - hill , ( 2004 ) .let us consider stochastic variables uniformly distributed in the unit segment , the probability that a segment of length is empty can be estimated according as a consequence the probability density that a certain segment is empty is given by therefore if one choices randomly an integer number in the interval $ ] , the probability density for a segment of length conditioned by the choice is \label{pk}\ ] ] since we have to take into account possible segments .the probability ( [ pk ] ) has to be weighted by the probability to have points so that the probability density to detect a segment of length for any choice is where we have introduced a normalizing factor . in fig .[ figure2a ] we show the comparison between the equation ( [ a1 ] ) and a montecarlo distribution with .the empirical conditional distributions for different degrees as a function of the normalized activity downtime suggests that the statistically relevant can be described according to where we introduce an universal function which can be interpolated by the results are shown in figure [ figure3a ] for the activity degrees .there is a strict relation between the activity degree distribution ( see fig .[ figure12 ] in the paper ) and the existence of an universal distribution probability in eq .( [ a5 ] ) indeed taking advantage from the dependence of on the degree pointed out by experimental observations ( see fig .[ figure7 ] in the paper ) we perform the change of variables in the join probability distribution of degree and downtime ( cfr .( 9 ) ) . using the definition ( 5 ) , we get the new distribution where has to be read in the r.h.s . and is the activity degree distribution . in the previous formulawe approximate interpolate the discrete variable with a continuous variable . by integrating of have to recover the benford s law for the global activity downtime distribution ( see fig .[ figure5 ] in the paper ) .since is normalized as probability distribution , this is possible if according to the interpolation of the experimental data as shown in the figure [ figure7 ] in the paper , we explicitly have therefore the condition ( [ a6 ] ) reads i.e. 
a power law distribution of the activity degree with exponent . this is consistent with the experimental observations , as shown in figure [ figure12 ] of the paper .
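for completeness, a crude version of the numerical interpolation of the degree distribution: assuming p(k) ~ k^(-gamma), the exponent is read off a log-log least-squares fit of the empirical histogram (a discrete maximum-likelihood fit would be more robust; this sketch only mirrors the simple interpolation mentioned above).

import numpy as np

def degree_exponent(degrees, k_min=1):
    """Least-squares estimate of gamma in p(k) ~ k^(-gamma) from the
    empirical activity-degree histogram; degrees below k_min are discarded."""
    degrees = np.asarray(degrees)
    ks, counts = np.unique(degrees[degrees >= k_min], return_counts=True)
    p = counts / counts.sum()
    slope, intercept = np.polyfit(np.log(ks), np.log(p), 1)
    return -slope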
|
the application of statistical physics to social systems is mainly related to the search for macroscopic laws that can be derived from experimental data averaged in time or space , assuming the system is in a steady state . one of the major goals would be to find a connection between the statistical laws and the microscopic properties : for example , to understand the nature of the microscopic interactions or to point out the existence of interaction networks . probability theory suggests the existence of a few classes of stationary distributions in the thermodynamic limit , so that the question is whether a statistical physics approach is able to capture the complex nature of social systems . we have analyzed a large gps data base for single - vehicle mobility in the florence urban area , obtaining statistical laws for path lengths , for activity downtimes and for activity degrees . we also show that simple generic assumptions on the microscopic behavior can explain the existence of stationary macroscopic laws , with a universal function describing the distribution . our conclusion is that understanding the system complexity requires dynamical data bases for the microscopic evolution , which allow one to resolve both small space and time scales in order to study the transients .
|
increasing the capacity of a wireless communication system has always been a focus of the wireless communications research community .it is well - known that polarisation diversity can be exploited to mitigate the multipath effect to maintain a reliable communication link with an acceptable quality of service ( qos ) , where a pair of antennas with orthogonal polaristion directions is employed at both the transmitter and the receiver sides .however , the traditional diversity scheme aims to achieve a single reliable channel link between the transmitter and the receiver , while the same information is transmitted at the same frequency but with different polarisations , i.e. two channels .this is not an effective use of the precious spectrum resources as the two channels could be used to transmit different data streams simultaneously .for example , we can design a four - dimensional ( 4-d ) modulation scheme across the two polarisation diversity channels using a quaternion - valued representation , as proposed in . an earlier version of quaternion - valued 4-d modulation scheme based on two different frequencies was proposed in . however , due to the change of polarisation of the transmitted radio frequency signals during the complicated propagation process including multipath , reflection , refraction , etc , interference will be caused to each other at the two differently polarised receiving antennas . to solve the problem ,efficient signal processing methods and algorithms for channel equalisation and interference suppression / beamforming are needed for practical implementation of the proposed 4-d modulation scheme .recently , quaternion - valued signal processing has been introduced and studied in details to solve problems related to three or four - dimensional signals , such as vector - sensor array signal processing , and wind profile prediction .with most recent developments in this area , especially the derivation of quaternion - valued gradient operators and the quaternion - valued least mean square ( qlms ) algorithm , we are now ready to effectively solve the 4-d equalisation and interference suppression / beamforming problem associated with the proposed 4-d modulation scheme .now the dual - channel effect on the transmitted signal can be modelled by a quaternion - valued iir / fir filter . at the receiver side , for channel equalisation , we can employ a quaternion - valued adaptive algorithm to recover the original 4-d signal , which inherently also performs an interference suppression operation to separate the original two 2-d signals .moreover , multiple antenna pairs can be employed at the receiver side to perform the traditional beamforming task to suppress other interfering signals .note although quaternion - valued wireless communication employing multiple antennas has been studied before , such as the design of orthogonal space - time - polarization block code in , to our best knowledge , it is the first time to study the quaternion - valued equalisation and interference suppression / beamforming problem in this context . in the following , the 4-d modulation scheme based on two orthogonally polarised antennas will be introduced in sec . 
[sec : modulation ] and the required quaternion - valued equalisation and inter - channel interference suppression solution and their extension to multiple dual - polarised antennas are presented in sec .[ sec : equal_beam ] .simulation results are provided in sec .[ sec : sim ] , followed by conclusions in sec .[ sec : concl ] .in traditional polarisation diversity scheme , as shown in fig . [fig : trantennapair ] , each side is equipped with two antennas with orthogonal polarisation directions and the signal being transmitted is two - dimensional , i.e. complex - valued with one real part and one imaginary part . in the quaternion - valued modulation scheme , the signal is modulated across the two antennas to generate a 4-d modulated signal .such a signal can be conveniently represented mathematically by a quaternion .a quaternion is a hypercomplex number defined as where is the real part of the quaternion , and , and are the three imaginary components with their corresponding imaginary units , and , respectively . as an example , corresponding to the 4-qam ( quadrature amplitude modulation ) in the two - dimensional case , for the 4-d modulation scheme , , , and can take values of either or , representing different symbols .we can call this scheme 16-qqam ( quaternion - valued qam ) or 16- .the signal transmitted by the two antennas will go through the channel with all kinds of effects and arrive at the receiver side , where the two antennas with orthogonal polarisation directions ( note the orthogonal polarisation may not give the best performance ) will pick up the two signals . againthe four components of the received signal can be represented by another quaternion .we use ] to represent the transmitted and received 4-d quaternion - valued signals , respectively .then the channel effect can be modeled by a filter with quaternion - valued impulse response ] is the quaternion - valued additive noise , as shown in fig .[ fig : channelmodel ] . to recover ] or estimate the channel , as in the 2-d case ( complex - valued ) , we can design a quaternion - valued equaliser .one choice is a reference signal based equaliser , among many others corresponding to the complex - valued case .now assume we have a reference signal ] . the cost function is given by =e[n]e^*[n]\;,\ ] ] where =r[n]-\hat{s}_t[n]=r[n]-\textbf{w}^{t}\textbf{s}_r[n]\;.\ ] ] with being the equaliser coefficient vector and ] ^t\nonumber\\ \textbf{s}_r[n]&=&[s_r[n ] , s_r[n-1 ] , \cdots , s_r[n - l+1]]^{t}\;.\end{aligned}\ ] ] following the derivations in , we have the gradient of ] and \textbf{s}^h_r[n]\} ] is a 4-tap quaternion - valued fir filter with a gaussian - distributed coefficients value .the equaliser filter has a length of .the learning curve based on averaging simulation runs is shown in fig .[ fig : learningcurve ] , with about db error at the steady state , indicating a reasonable channel estimation result . in the second set of simulations , we consider a mimo array and the two transmitted quaternion - valued signals have the same normalised power , with an snr of 20 db at the transmitter side .all the other parameters are the same as the first one . 
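the quaternion arithmetic and the adaptive update used in these simulations can be sketched compactly as follows; the 16-qqam bit labelling, the step size and the simplified update term e[n] * conj(x[n-i]) are assumptions of this sketch, since the exact qlms recursion derived from the quaternion gradient operator carries additional correction terms.

import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions stored as arrays [a, b, c, d]."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qconj(q):
    """Quaternion conjugate: negate the three imaginary parts."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qqam16(bits):
    """Map 4 bits to one 16-QQAM symbol, assuming each quaternion component
    independently takes the value +1 or -1 (the natural analogue of 4-QAM);
    the exact bit labelling is an assumption of this sketch."""
    return np.where(np.asarray(bits, dtype=bool), 1.0, -1.0)

def qlms_equalizer(rx, ref, taps=9, mu=1e-3):
    """Simplified quaternion LMS equaliser: the output is
    y[n] = sum_i w_i x[n-i], the error is e[n] = ref[n] - y[n], and each tap
    is nudged along e[n] * conj(x[n-i]).  This is an illustrative variant,
    not the exact algorithm of the cited references."""
    w = np.zeros((taps, 4))
    buf = np.zeros((taps, 4))
    err = []
    for x, d in zip(rx, ref):
        buf = np.vstack([x[None, :], buf[:-1]])           # delay line
        y = sum(qmul(wi, xi) for wi, xi in zip(w, buf))    # filter output
        e = d - y
        w = w + mu * np.array([qmul(e, qconj(xi)) for xi in buf])
        err.append(np.sum(e**2))
    return w, np.array(err)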
fig .[ fig : learningmimo ] shows the result , again a reasonable performance .a 4-d modulation scheme using quaternion - valued representation based on two antennas with different polarisation directions has been studied for wireless communications .a quaternion - valued signal processing algorithm is employed for both equalisation and interference suppression / beamforming .two sets of simulation results were provided to show that such a scheme can work effectively in both the single - input - single - output and multiple - input - multiple - output cases and therefore can be considered as a viable approach for future wireless communication systems .o. m. isaeva and v. a. sarytchev , `` quaternion presentations polarization state , '' in _ proc .2nd ieee topical symposium of combined optical - microwave earth and atmosphere sensing _ , atlanta , us , april 1995 , pp .195196 . x.r. zhang , w. liu , y. g. xu , and z. w. liu , `` quaternion - valued robust adaptive beamformer for electromagnetic vector - sensor arrays with worst - case constraint , '' , vol .104 , pp . 274283 , november 2014 .m. d. jiang , w. liu , and y. li , `` a general quaternion - valued gradient operator and its applications to computational fluid dynamics and adaptive beamforming , '' in _ proc . of the international conference on digital signal processing _ , hong kong , august 2014 .
|
quaternion - valued wireless communication systems have been studied in the past . although progress has been made in this promising area , a crucial missing link is the lack of effective and efficient quaternion - valued signal processing algorithms for channel equalisation and beamforming . with the most recent developments in quaternion - valued signal processing , in this work we fill this gap to solve the problem and further derive the quaternion - valued wiener solution for block - based calculation . + _ keywords : _ polarisation diversity , four - dimensional modulation , quaternion - valued signal processing , equalisation , beamforming .
|
we are concerned in this paper with blow - up phenomena arising in the following nonlinear heat problem : where and stands for the laplacian in .the exponent is subcritical ( that means that if ) and is given by by standard results , the problem has a unique classical solution in , which exists at least for small times . the solution may develop singularities in some finite time .we say that blows up in a finite time if satisfies in and is called the blow - up time of . in such a blow - up case , a point called a blow - up point of if and only if there exist such that as . + in the case , the equation is the semilinear heat equation , problem has been addressed in different ways in the literature .the existence of blow - up solutions has been proved by several authors ( see fujita , levine , ball ) . consider a solution of which blows up at a time .the very first question to be answered is the blow - up rate , i.e. there are positive constants such that the lower bound in follows by a simple argument based on duhamel s formula ( see weissler ) . for the upper bound , giga and kohn proved in for or for non - negative initial data with subcritical .then , this result was extended to all subcritiacal without assuming non - negativity for initial data by giga , matsui and sasayama in .the estimate is a fundamental step to obtain more information about the asymptotic blow - up behavior , locally near a given blow - up point .giga and kohn showed in that for a given blow - up point , where , uniformly on compact sets of .+ this result was specified by filippas ans liu ( see also filippas and kohn ) and velzquez , ( see also herrero and velzquez , , ) . using the renormalization theory , bricmont and kupiainen showed in the existence of a solution of such that where merle and zaag in obtained the same result through a reduction to a finite dimensional problem . moreover, they showed that the profile is stable under perturbations of initial data ( see also , and ) .+ in the case where the function satisfies with and , we proved in the existence of a lyapunov functional in _ similarity variables _ for the problem which is a crucial step in deriving the estimate .we also gave a classification of possible blow - up behaviors of the solution when it approaches to singularity .in , we constructed a blow - up solution of the problem satisfying the behavior described in in the case where satisfies the first estimate in or is given by .+ in this paper , we aim at extending the results of to the case ] , and this is possible at the expense of taking the particular form for the perturbation .we aim at the following : [ theo : lya ] let be fixed , consider a solution of equation. then , there exist , and such that if , then satisfies the following inequality , for all , (s_2 ) - \mathcal{j}_a[w](s_1 ) \leq - \frac{1}{2}\int_{s_1}^{s_2}\int_{\mathbb{r}^n}(\partial_sw)^2\rho dy ds.\ ] ] as in and , the existence of the lyapunov functional is a crucial step for deriving the blow - up rate and then the blow - up limit . in particular , we have the following : [ theo : blrate ] let be fixed and be a blow - up solution of equation with a blow - up time .+ ( * blow - up rate * ) there exists such that for all , where is defined in and is a positive constant depending only on and a bound of .+ ( * blow - up limit * ) if is a blow - up point , then holds in ( is the weighted space associated with the weight ) , and also uniformly on each compact subset of . 
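for the reader's convenience we recall the definitions used throughout this section; the similarity variables, the gaussian weight and the weighted energy below are the standard ones of this line of work, and the perturbative term is written under the assumption H(z) = \int_0^z h(\xi)\, d\xi, so the display should be read as a reminder of the usual convention rather than as a verbatim statement of the functional used in theorem [ theo : lya ].

\[
w_a(y,s) = (T-t)^{\frac{1}{p-1}}\, u(x,t), \qquad
y = \frac{x-a}{\sqrt{T-t}}, \qquad s = -\log(T-t), \qquad
\rho(y) = \frac{e^{-|y|^2/4}}{(4\pi)^{N/2}},
\]
\[
\mathcal{E}[w](s) = \int_{\mathbb{R}^N}
\Big( \frac{1}{2}|\nabla w|^2 + \frac{w^2}{2(p-1)} - \frac{|w|^{p+1}}{p+1}
      - e^{-\frac{(p+1)s}{p-1}} H\big(e^{\frac{s}{p-1}} w\big) \Big)\rho\, dy,
\qquad H(z) = \int_0^z h(\xi)\, d\xi .
\]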
we will not give the proof of theorem [ theo : blrate ] because its proof follows from theorem [ theo : lya ] as in .hence , we only give the proof of theorem [ theo : lya ] and refer the reader to section 2 in for the proofs of and respectively .the next step consists in obtaining an additional term in the asymptotic expansion given in of theorem [ theo : blrate ] .given a blow - up point of , and up to changing by and by , we may assume that in as .as in , we linearize around , where is the positive solution of the ordinary differential equation associated to , such that ( see lemma a.3 in for the existence of , and note that is unique . for the readers convenience , we give in lemma [ ap : lemma3 ] the expansion of as ) .+ let us introduce , then as and ( or for simplicity ) satisfies the following equation : where and , , satisfy ( see the beginning of section [ sec : refasy ] for the proper definitions of , and ) .+ it is well known that the operator is self - adjoint in .its spectrum is given by and it consists of eigenvalues .the eigenfunctions of are derived from hermite polynomials : + - for , the eigenfunction corresponding to is } \frac{m!}{k!(m - 2k)!}(-1)^ky^{m - 2k},\ ] ] - for : we write the spectrum of as for , the eigenfunction corresponding to is where is defined in .+ we also denote and for any and .+ by this way , we derive the following asymptotic behaviors of as : [ theo : refinedasymptotic ] consider a solution of equation which blows - up at time and a blow - up point .let be a solution of equation .then one of the following possibilities occurs : + , + there exists such that up to an orthogonal transformation of coordinates , we have there exist an integer number and constants not all zero such that the convergence takes place in as well as in for any and some . in our previous paper , we were unable to get this result in the case where satisfies with ] , there exists and large enough such that for all , + where is defined in .+ note that obviously follows from the following estimate , where and .+ in order to derive estimate , considering the first case , then the case , we would obtain .+ directly follows from an integration by part and estimate .indeed , we have replacing by and using , we then derive .this ends the proof of lemma [ lemm : esth ] .we assert that theorem [ theo : lya ] is a direct consequence of the following lemma : [ lemm : lya ] let be fixed and be solution of equation .there exists such that the functional of defined in satisfies the following inequality , for all , (s ) \leq - \frac{1}{2}\int_{\mathbb{r}^n}w_s^2\rho dy + \gamma s^{-(a+1)}\mathcal{e}[w](s ) + cs^{-(a+1)},\ ] ] where , is given in lemma [ lemm : esth ] .let us first derive theorem [ theo : lya ] from lemma [ lemm : lya ] and we will prove it later . 
differentiating the functional defined in ,we obtain (s ) & = \frac{d}{ds}\left\{\mathcal{e}[w](s)e^{\frac{\gamma}{a}s^{-a } } + \theta s^{-a}\right\}\\ & = \frac{d}{ds}\mathcal{e}[w](s)e^{\frac{\gamma}{a}s^{-a } } - \gamma s^{-(a+1)}\mathcal{e}[w](s)e^{\frac{\gamma}{a}s^{-a } } - a\theta s^{-(a+1)}\\ & \leq - \frac{1}{2 } e^{\frac{\gamma}{a}s^{-a}}\int_{\mathbb{r}^n}w_s^2\rho dy + \left[c e^{\frac{\gamma}{a}s^{-a } } - a\theta\right]s^{-(a+1 ) } \quad \text{(use \eqref{equ : estimatede}).}\end{aligned}\ ] ] choosing large enough such that and noticing that for all , we derive (s ) \leq -\frac{1}{2 } \int_{\mathbb{r}^n}w_s^2\rho dy , \quad \forall s \geq \tilde{s}_0.\ ] ] this implies inequality and concludes the proof of theorem [ theo : lya ] , assuming that lemma [ lemm : lya ] holds .it remains to prove lemma [ lemm : lya ] in order to conclude the proof of theorem [ theo : lya ] .multiplying equation with and integrating by parts : for the last term of the above expression , we write in the following : this yields from the definition of the functional given in , we derive a first identity in the following : (s ) = -\int_{\mathbb{r}^n } |w_s|^2\rho dy + \frac{p+1}{p-1}e^{-\frac{p + 1}{p-1}s}\int_{\mathbb{r}^n } h\left(e^{\frac{s}{p-1}}w\right)\rho dy & \nonumber\\ - \frac{1}{p-1 } e^{-\frac{ps}{p-1}}\int_{\mathbb{r}^n } h\left(e^{\frac{s}{p-1}}w\right)w\rho dy&. \label{equ : id1}\end{aligned}\ ] ] a second identity is obtained by multiplying equation with and integrating by parts : using again the definition of given in , we rewrite the second identity in the following : (s ) + 2\frac{p-1}{p+1}\int_{\mathbb{r}^n } |w|^{p+1}\rho dy\nonumber\\ & - 4e^{-\frac{p + 1}{p-1}s}\int_{\mathbb{r}^n } h\left(e^{\frac{s}{p-1}}w\right)\rho dy + 2e^{-\frac{ps}{p-1}}\int_{\mathbb{r}^n } h\left(e^{\frac{s}{p-1}}w\right)w\rho dy.\label{equ : id2}\end{aligned}\ ] ] from , we estimate (s ) & \leq -\int_{\mathbb{r}^n } |w_s|^2\rho dy\\ & + \frac{1}{p-1}\int_{\mathbb{r}^n}\left\ { \left|(p+1)e^{-\frac{(p+1)s}{p-1 } } h\left(e^{\frac{s}{p-1}}w\right ) - e^{-\frac{ps}{p-1}}h\left(e^{\frac{s}{p-1}}w\right)w \right| \right\}\rho dy . \end{aligned}\ ] ] using of lemma [ lemm : esth ] , we have for all , (s ) \leq -\int_{\mathbb{r}^n } |w_s|^2\rho dy + \frac{c_0 s^{-(a+1)}}{p-1}\int_{\mathbb{r}^n } |w|^{p+1}\rho dy + cs^{-(a+1)}.\ ] ] on the other hand , we have by , (s ) + \frac{p+1}{p-1}\int_{\mathbb{r}^n } |w_s w| \rho dy \\ & \quad + \frac{2(p+1)}{p-1}\int_{\mathbb{r}^n } \left ( \left| e^{-\frac{p + 1}{p-1}s } h\left(e^{\frac{s}{p-1}}w\right ) \right|+ \left | e^{-\frac{ps}{p-1 } } h\left(e^{\frac{s}{p-1 } } w\right)w \right| \rho dy \right).\end{aligned}\ ] ] using the fact that for all and of lemma [ lemm : esth ] , we obtain (s ) + \epsilon \int_{\mathbb{r}^n } |w_s|^2\rho dy \\ & \quad + \left(\epsilon + cs^{-a}\right)\int_{\mathbb{r}^n } |w|^{p+1}\rho dy + c.\end{aligned}\ ] ] taking and large enough such that for all , we have (s ) + \frac{1}{2}\int_{\mathbb{r}^n } |w_s|^2\rho dy + c , \quad \forall s > s_1.\ ] ] substituting into yields with .this concludes the proof of lemma [ lemm : lya ] and theorem [ theo : lya ] also .this section is devoted to the proof of theorem [ theo : refinedasymptotic ] and theorem [ theo : pro ] .consider a blow - up point and write instead of for simplicity . 
from of theorem[ theo : blrate ] and up to changing the signs of and , we may assume that as , uniformly on compact subsets of .as mentioned in the introduction , by setting ( is the positive solution of such that as ) , we see that as and solves the following equation : where and , , are given by .\end{aligned}\ ] ] by a direct calculation , we can show that ( see lemma [ lemm : apb1 ] for the proof of this fact , note also that in the case where is given by and treated in , we just obtain as , and that was a major reason preventing us from deriving the result in the case ] and for all , it holds that where , , , and .multiplying by , then we write for all ] such that . on the intervalwhere , the grid is refined further and the entire procedure as for is repeated to yield and so forth . + before going to a general step , we would like to comment on the relation .indeed , when reaches the given threshold in the initial phase , namely when , we want to refine the grid such that the maximum values of equals to . by, this request turns into .since , it follows that , which yields .+ let , we set and ( note that which is a constant ) , and as the variables of , , .the index means that , and .the solution is related to by here , the time $ ] satisfies , and are two grid points determined by the approximation of ( denoted by ) uses the scheme with the space step and the time step , which reads + \tau_{k+1 } f\left(u_{k+1}^{i , n}\right),\label{equ : scheme2}\end{aligned}\ ] ] for all and with ( note from introduction that is an integer since ) . + as for the approximation of , the computation of needs the initial data and the boundary condition .from and the fact that , we see that hence , from the first identity in , the initial data is simply calculated from by using a linear interpolation in space in order to assign values at new grid points .the essential step in this new mesh - refinement method is to determine the boundary condition through the second identity in .this means by a linear interpolation in time of .therefore , the previous solutions , , are stepped forward independently , each on its own grid . more precisely , since , then is stepped forward once every time steps of ; once every time steps of , ... on the other hand , the values of , , ... must be updated to agree with the calculation of . when , then it is time for the next refining phase .+ we would like to comment on the output of the refinement algorithm : * let be the time at which the refining takes place , then the ratio , which indicates the number of time steps until , reaches the given threshold , is independent of and tends to a constant as .* let be the _ refining solution_. if we plot on , then their graphs are eventually independent of and converge as .* let be the interval to be refined , then the quality behaves as a linear function of .these assertions can be well understood by the following theorem : [ theo:3 ] let be a blowing - up solution to equation , then the output of the refinement algorithm satisfies : + the ratio is independent of and tends to a constant as , namely assume in addition that of theorem [ theo : pro ] holds , + defining for all , we have the quality behaves as a linear function , namely where , and .note that there is no assumption on the value of in the hypothesis in theorem [ theo:3 ] .it is understood in the sense that blows up in finite time and its profile is described in theorem [ theo : pro ] . 
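a compressed python sketch of one refining phase of this algorithm for the one-dimensional model problem u_t = u_xx + |u|^{p-1} u: the solution is advanced with the explicit scheme until its maximum reaches the threshold M, the interval where u exceeds c*M is re-gridded with the spacing reduced by the factor lam, and the refined solution is initialised by linear interpolation; the time interpolation of the boundary data from the coarser level and the multi-level bookkeeping are omitted, and all parameter values are illustrative.

import numpy as np

def step(u, h, tau, p):
    """One explicit Euler step of u_t = u_xx + |u|^{p-1} u on a uniform grid;
    the two end values are treated as externally supplied boundary data and
    are left unchanged."""
    new = u.copy()
    new[1:-1] = (u[1:-1]
                 + tau * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
                 + tau * np.abs(u[1:-1])**(p - 1) * u[1:-1])
    return new

def refine_once(x, u, p, M=2.0, c=0.5, lam=0.5, cfl=0.5):
    """Single refining phase: advance the coarse solution until max(u) >= M
    (assumes blow-up initial data), locate the interval where u >= c*M, and
    restart on a grid finer by the factor lam with linearly interpolated
    initial data.  Boundary values of the refined solution would, in the full
    algorithm, be interpolated in time from the coarser level."""
    h = x[1] - x[0]
    tau = cfl * h**2
    while u.max() < M:                        # coarse phase
        u = step(u, h, tau, p)
    keep = u >= c * M                         # interval to be refined
    i0 = np.argmax(keep)
    i1 = len(keep) - np.argmax(keep[::-1]) - 1
    x_fine = np.arange(x[i0], x[i1] + 1e-12, lam * h)
    u_fine = np.interp(x_fine, x, u)          # initial data on the fine grid
    return x_fine, u_fine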
as we will see in the proof that the statement concerns the blow - up limit of the solution and the second one is due to the blow - up profile stated in theorem [ theo : pro ] .+ let is the real time when the refinement from to takes place , we have by , where is such that .this means that on the other hand , from of theorem[ theo : pro ] and the definition of , we see that combining and yields where represents a term that tends to as . + since , we then derive by the definition of and , we infer that ( we can think as the _ live time _ of in the -th refining phase ) .hence , \\ & = \frac{1}{\tau_{k}}\left(m^{-1}\kappa\right)^{p-1}(h_{k-1}^2 - h_k^2 ) + o(1)\\ & = \frac{h_k^2}{\tau_{k}}\left(m^{-1}\kappa\right)^{p-1}(\lambda^{-2 } - 1 ) + o(1).\end{aligned}\ ] ] since the ratio is always fixed by the constant , we finally obtain which concludes the proof of part of theorem [ theo:3 ] .+ since the symmetry of the solution , we have .we then consider the following mapping : for all , we will show that is independently of and converges as .for this purpose , we first write in term of thanks to and , where and .+ if we write of theorem [ theo : pro ] in the variable through , we have the following equivalence : where is given in .+ from , and , we derive then multiplying both of sides by and replacing by , we obtain from the definition of , we may assume that combining this with , we have since and the fact that , we have from that which follows .thus , it is reasonable to assume that and tend to the positive root as .hence , using the definition of , we have which follows + o(1),\ ] ] where is the constant given in the definition of .+ substituting this into and using again the definition of , we arrive at let , we then obtain the conclusion .+ from and the fact that as , we have using , we then derive which yields the conclusion and completes the proof of theorem [ theo:3 ] .this subsection gives various numerical confirmation for the assertions stated in the previous subsection ( theorem [ theo:3 ] ) .all the experiments reported here used as the initial data , as the parameter for controlling the interval to be refined , as the refining factor , as the stability condition for the scheme , and in the nonlinearity given in .the threshold is chosen to be satisfied the condition . in table[ tab:1 ] , we give some values of corresponding the initial data and the initial space step .= 8 mm .the value of corresponds to the initial data and the initial space step . [cols="<,<,<,<,<",options="header " , ]the following lemma from gives the expansion of , the unique solution of equation satisfying : [ ap : lemma3 ] let be a positive solution of the following ordinary differential equation : assuming in addition as , then takes the following form : where with and .see lemma a.3 in .we aim at proving the following : let us write where then , we write where the term is already treated in , and it is bounded by to bound , we use the fact that satisfies to write \\ & + \left ( 1 - \frac{\nu}{\kappa}\right)e^{-\frac{ps}{p-1}}h\left(e^\frac{s}{p-1 } \phi \right).\end{aligned}\ ] ] noting that with , uniformly for and , and recalling from lemma [ ap : lemma3 ] that where , then using a taylor expansion , we derive this concludes the proof of lemma [ lemm : apb2 ] .
|
we consider a blow - up solution for a strongly perturbed semilinear heat equation with sobolev subcritical power nonlinearity . working in the framework of similarity variables , we find a lyapunov functional for the problem . using this lyapunov functional , we derive the blow - up rate and the blow - up limit of the solution . we also classify all asymptotic behaviors of the solution at the singularity and give precise blow - up profiles corresponding to these behaviors . finally , we obtain the blow - up profile numerically , thanks to a new mesh - refinement algorithm inspired by the rescaling method of berger and kohn . note that our method is applicable to more general equations , in particular those with no scaling invariance . + _ keywords : _ blow - up , lyapunov functional , asymptotic behavior , blow - up profile , semilinear heat equation , lower order term .
|
the forc method was originally designed for completely irreversible systems , modeled as a collection of preisach hysterons , each of which has a rectangular hysteresis loop ( fig .[ hyst ] ) . as we lower the field from a large positive saturating value , it switches down at a field usually denoted by , ( the subscript r stands for `` reversal '' , for reasons that will become apparent later ) and as we increase the field it switches back up at a field .the preisach distribution is the density of these hysterons in the plane ( fig .[ prei1 ] ) .mh loop of a single preisach hysteron , showing down - switching field and up - switching field , and defining the bias field and the coercivity .,width=240 ] the fundamental result behind the forc idea is that this preisach distribution can be obtained by measuring `` first order reversal curves '' that is , by saturating the sample in the positive direction , decreasing the field to a reversal field ( see fig .[ forc ] ) , then reversing dh / dt from negative to positive and measuring the magnetization as the field increases again past each value . a major hysteresis loop with two forc curves , with a dot showing the point where is defined.,width=288 ] the distribution of hysterons is then given by in sec .[ deriv ] we will give a derivation of this result , using a discrete formulation .we will show that this distribution fails to include reversible effects , and show that the information not included in can be thought of as the first derivative evaluated at , which vanishes in an irreversible system of preisach hysterons with nonzero coercivity .for the case of purely reversible hard axis particles characterized by saturation fields , we show that a derivative of this quantity with respect to can be interpreted as a distribution of saturation fields .thus we can extract the distribution of both irreversible and reversible particles from the forc data .we will begin by giving a simple derivation of eq .[ dist ] , relating the distribution of hysterons to a mixed partial derivative of the forc function .although this formula is given in every paper on forc , it is surprisingly hard to find a derivation . rather than give a derivation in the continuum limit, we will derive a discrete analog on a grid with a finite field spacing ( fig .[ prei1 ] ) , which becomes eq .[ dist ] in the limit but is easier to visualize . the preisach plane , showing a regular grid in the variables and in the relevant half - plane .it is conventional to draw the and axes diagonally so the coercivity and bias field axes ( defined in fig .[ hyst ] ) can be horizontal and vertical .the points inside the green stripe make up a single forc curve at ., width=288 ] fig .[ prei1 ] shows the points in the plane at which the forc function is measured .when we begin a forc curve by reducing the field to , for example as shown in fig .[ prei1 ] , we flip downward all the hysterons having , i. e. 
, those in the blue area of fig .[ hr ] , whose total saturation moment we will denote by .preisach plane after reducing the field to .hysterons in blue - shaded region have been switched down .total magnetic moment is now .,width=288 ] it is related to the total remaining moment ( the factor of is because flipping an object with saturation moment changes the total moment by ) .preisach plane after raising the field from to the hysterons in the pink triangle have switched back up , leaving moment ( shaded in blue ) .the green region is the additional area that would be flipped if we used instead ., width=288 ] if we then increase the field to , the hysterons in the pink triangle of fig .[ strip ] , which have upward switching field , switch back up , leaving only the hysterons in the blue area of fig .[ strip ] flipped , whose total moment we denote by , giving overall system moment if we now repeat this process with a smaller , the additional hysterons in the green strip in fig .[ strip ] will have flipped , with total moment , so the moment in the green strip is . expressing this difference in terms of the forc function by using eq .[ flip ] , the cancels and we get , as indicated in fig .[ strip ] , which can be expressed in terms of the discrete derivative , which we denote by and define by \ ] ] this definition is indicated pictorially in fig .[ strip ] by a dumbbell labeled with + and - signs at the points where is to be added and subtracted .if we repeat this process again with a larger , we will get the moment of the orange strip in fig .[ plaq ] , which is .graphical demonstration that the saturation moment in the green preisach plaquette is $ ] for .the signs on the four black dots indicate the signs of the four terms ., width=288 ] the difference , the saturation moment of the hysterons in the green square ( `` preisach plaquette '' ) in fig .[ plaq ] , is then times a second derivative \ ] ] we define a ( preisach ) density of hysterons such that the total saturation moment in a plaquette centered at is .we include the factor so that has units of magnetic moment/(field) , and is independent of in the limit . then we have which becomes the continuum equation ( [ dist ] ) in the limit .our objective is to extract information separately for irreversible and reversible parts of a system .conceptually , it is simplest to think of an `` easy - hard mixture '' of stoner - wohlfarth particles with their easy and hard axes along the field , respectively .the easy axis particles switch completely irreversibly at some fields and as in fig .[ hyst ] , and the hard axis particles switch reversibly : is exactly linear until it saturates at some `` saturation fields '' and ( fig .[ hard ] ) , which can have different magnitudes if we allow a bias .hysteresis loop of a biased hard - axis stoner - wohlfarth particle.,width=240 ] we have shown that the preisach distribution completely describes the irreversible particles . in a reversible system ,on the other hand , if we change the magnetic moment by lowering the field from to and then raise it to again , this reverses the magnetization change and we return to the same magnetization at , independently of .that is , the derivative with respect to ( which we have denoted by ) is exactly zero , as is the second derivative the forc distribution is exactly zero .this makes it clear that the usual forc distribution does not completely determine the original forc function . 
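to make the discrete construction above concrete, the short sketch below computes the irreversible forc distribution as the mixed second difference of gridded forc data. it assumes the measurements are stored in a rectangular array m[i, j] giving the magnetization at applied field h0 + j*dh on the reversal curve that starts at hr0 - i*dh; the array name, the grid orientation and the prefactor -1/2 (the common normalization convention) are illustrative choices, not taken from the text.

import numpy as np

def forc_distribution(m, dh):
    """irreversible forc distribution from gridded forc data.

    m[i, j] is the magnetization measured at applied field h0 + j*dh on the
    reversal curve that starts at reversal field hr0 - i*dh (i increases as hr
    decreases, j increases as h increases); points with h < hr may be nan.
    returns rho on the plaquette centres, using the common convention
    rho = -(1/2) d^2 m / (d hr d h).
    """
    # derivative along a single reversal curve (with respect to h)
    dm_dh = (m[:, 1:] - m[:, :-1]) / dh
    # difference between successive reversal curves; hr decreases with i,
    # so the forward difference in i picks up a minus sign with respect to hr
    d2m = (dm_dh[1:, :] - dm_dh[:-1, :]) / (-dh)
    return -0.5 * d2m

the intermediate array dm_dh is the discrete first derivative with respect to h; its values at the h = hr end of each curve carry exactly the reversible information discussed next.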
to get a function by integrating its derivatives ,one needs boundary conditions .it turns out that we can do one integration : we can get the first derivative from the second derivative , by adding the plaquettes in the green region of fig .[ strip ] , because the boundary condition at the other end ( ) is known : in this limit , so all derivatives , including , are zero . knowing , we could obtain everywhere by integrating along the axis , if we knew a boundary condition on .we do not know this at the lower right ( ) , but it would be sufficient to know it along the boundary ( the axis ) .but this is just the usual hysteresis loop , which contains both irreversible and reversible information we want to separate these .however , the other first derivative , vanishes exactly along this boundary in an irreversible system ( if the coercivity is at least ) , so this is a candidate for describing the reversible part .it contains all the rest of the information in the forc function , in the sense that along the boundary can be obtained by alternately adding and along a zig - zag path along the axis ( fig .[ zig ] ) .graphical demonstration that the derivative along the left boundary ( ) , together with the other derivative which is determined by the irreducible distribution , uniquely determines the forc function along the zig - zag line at the left boundary , and therefore the entire forc function .also , can be determined everywhere by adding the moments of the `` reversible plaquettes '' along the left boundary , since we know the boundary condition at the top ., width=288 ] the treatment of irreversible effects has been discussed extensively in the literature in an `` extended forc '' distribution this function is added to the irreversible forc as a dirac delta function at zero coercivity . in this paper , however , we want to separate the reversible and irreversible behaviors . in a model system consisting of a single easy axis ( irreversible ) and a single hard axis ( reversible ) particle we would like to get a single dirac delta function in each of the irreversible and reversible forc distributions , each giving the properties of the corresponding particle .to this end , we note that a similar function has been used to extract anisotropy distributions of hard - axis systems . since is linear for a hard - axis particle ( fig . [ hard ] ), its derivative is a step function , and the second derivative has dirac delta functions at and .thus if there is a distribution of and , the corresponding part of is proportional to this distribution .more precisely , .however , this can not be used for a mixed system because will be contaminated by the irreversible particles . to obtain a distribution describing only reversible particles, we must start instead with \ ] ] which we have shown vanishes near the boundary for an irreversible system .the signs of these two terms for the point labeled `` in fig .[ zig ] are indicated by + and - signs . for a hard - axis reversible particle , is independent of and linear in , so is constant except at the saturation fields. 
thus the second derivative indicated by the ' ' reversible plaquette " in fig .[ zig ] vanishes except at the saturation fields , and can be regarded as the saturation field distribution of the reversible particles : \end{gathered}\ ] ] for , this is negative and gives the distribution of ; for , it is positive and gives the distribution of .note , however , that it does not give the joint distribution of and , in the way that gives the joint distribution of and .the most straightforward way to visualize the discrete irreversible forc distribution is to paint each plaquette with a color density proportional to .even if the data is noisy , so plaquettes with high density are next to ones with low density , this scheme takes advantage of the natural averaging capability of the human eye : if one looks at such a display from a little further way , the fluctuations average out , in a way that they would not if we used a color - coding other than density ( intensity ) .. however , most commercial visualization software is not designed to display uniform - color plaquettes : it wants to interpolate the color continuously , which in the present case just obscures the simplicity of the discrete distribution .for example , a very sharp peak will give density in only a single plaquette this sharpness will be obscured by interpolation or averaging . the only way to directly control the color of each plaquette is to code the visualization at the lowest level currently all computer displays use opengl functions to display `` primitives '' ( triangles , in our case ) .accordingly , we are working on a c++ code that uses direct calls to opengl functions .it is well known that commercial visualization software that is usually used to visualize the forc distribution , which uses interpolation or extrapolation , can create artifacts , especially at the boundaries of the displayed region , which make it appear that there is a nonzero density when in fact it is almost zero .this can occur along artificial boundaries , _i. e. _ , at the ends of the forc curves or along the last forc curve ( with largest or smallest ) , or at the natural boundary ( _ i .e. _ , ) . at the natural boundary, extrapolation can also lead to the mixing of irreversible and reversible effects , which our scheme separates cleanly . in some cases, the data might be so noisy that the averaging capability of the eye is not enough then we can use a smoothing procedure ( for example , fitting to a polynomial before extracting the mixed partial derivative) .in this paper we have derived a method for forc analysis that optimally separates reversible and irreversible behavior it gives an irreversible forc distribution that vanishes identically in a reversible system , and a reversible forc distribution that vanishes identically in an irreversible system . in a simple `` easy - hard mixture '' of stoner - wohlfarth particles , completely describes the distribution irreversible ( easy - axis ) particles , and completely describes the distributions of both upper and lower saturation fields . 00 c. r. pike , `` forc diagrams and reversible magnetization '' , phys .b * 68 * , 104424 ( 2003 ) . c. r. pike , c. a. ross , r. t. scalettar , and g. zimanyi , `` forc diagram analysis of a perpendicular nickel nanopilar array '' , phys . rev .b * 71 * , 134407 ( 2005 ) .dobrota and alexandru stancu , `` what does a forc diagram really mean ?a study case : array of ferromagnetic nanowires '' , j. app . phys . * 113 * , 043928 ( 2013 ) .a. p. roberts , d. 
heslop , x. zhao , and c. r. pike , rev . geophys . *52 * , 557 - 602 ( 2014 ) .forc+ ( software to produce a forc curve from the raw output file of an agm or vsm ) is scheduled for beta release in january 2017 , on http://magvis.org / forc+ .we explicitly exclude particles with easy axes at other angles : see a. j. newell , `` a high - precision model of forc functions for single - domain ferromagnets with uniaxial anisotropy '' , geochem .. geosyst .* 6 * , q05010 ( 2005 ) .j. m. barandiaran , m. vazquez , a. hernando , j. gonzalez , and g. rivero , ieee trans .* 25 * , 3330 ( 1989 ) .z. lu , p. b. visscher , and j. w. harrell , `` anisotropy - graded media : magnetic characterization '' , j. appl .phys . * 103 * , 07f507 ( 2008 ) .r. egli , a. p. chen , m. winklhofer , k. p. kodama , and c .- s .horng , `` detection of noninteracting single - domain particles using forc diagrams '' , geochem .. geosyst . * 11 * , q01z11 ( 2010 ) .
|
first order reversal curves ( forcs ) have been used for a number of years for the extraction of information from magnetization measurements . the results are most unambiguous for irreversible processes : for a collection of preisach hysterons , one gets a `` forc distribution '' , the number of hysterons with given downward and upward reversal fields . there have been many proposals for dealing with reversible behavior , usually involving inserting it somehow into the irreversible forc distribution . here we try to do the opposite : we separate the reversible behavior into another function , which we call the ( reversible ) `` saturation field distribution '' , which is identically zero for a completely irreversible system of hysterons , while the irreversible forc distribution is identically zero for a reversible system . thus in a system with both purely reversible and purely irreversible components , such as single - domain stoner - wohlfarth particles with the hard or easy axis along the field , this approach cleanly separates the two . for more complicated systems , as with conventional forc distributions , it at least provides a `` signature '' that makes it possible to identify microscopic models that might give a particular pair of irreversible and reversible distributions .
|
theoretical as well as experimental studies have indicated that many complex systems under the influence of stochastic perturbations can undergo sudden `` regime shifts '' in which they abruptly shift to a contrasting state .such shifts may occur in systems with alternative stable states .well studied examples of regime shifts include sudden collapse of ecosystems , the onset of collapse in mutualistic communities , abrupt climatic shifts , the crash of markets in global finance , systemic failures such as the epileptic seizures and even the eruptive events in spreading fires . despite rich advances in the theory of complex systems , understanding the mechanism which trigger the onset of regime shifts in nature remains a challenge .there are mainly two types of regime shifts that can occur in systems with alternative stable states .one is _ critical transition _ which is associated with the _ bifurcation points _ ( so called tipping points ) and another is _ noise induced transition _ ( also known as stochastic switching ) .much effort has been devoted in recent years in developing _ early warning signals _ of impending regime shifts between alternative stable states .such early warning signals can have tremendous impact on managing natural systems by forewarning the systems under the threat of state shift , so that appropriate management strategies can be initiated to prevent a catastrophe .recent progress in this direction suggests that a set of generic statistical indicators ( e.g. , increase in variance , autocorrelation ) may forewarn an impending transition in a wide range of complex systems .a recent review on `` dynamical disease '' argued that an early detection of regime shifts even in the field of medical sciences , such as in cardiac arrhythmia s could be of some help to prevent sudden death .however , these signals are mostly developed for the phenomenon of critical slowing down that arises in the vicinity of tipping points .for purely noise induced transitions , such as we also examine in this paper , there is an active debate about whether early warning signals can really be useful . in genetic regulatory systems, it is known that random cell - to - cell variations within a genetically identical population can lead to regime shifts between alternative stable states of gene expression ( i.e. , sudden transition in protein concentration ) .when the underlying genetic system contains regulatory positive feedback loops , individual cells can exist in different steady states .some cells may , for example , live in the `` off '' expression state of a particular gene , whereas others are in the `` on '' expression state .these stochastic fluctuations in gene expression , commonly referred to as noise , have been proposed to cause transitions between these states .a well known example of bistable gene expression with coupled stochastic transition is the induction of the _ lac _ operon in _ e. coli _ which results in the synthesis of protein _ -galactosidase _ required for breaking up sugar molecules and releasing energy to the cell . 
experimental study on _ -galactosidase _ shows that sudden transition from unregulated ( low level ) to regulated ( high level ) _-galactosidase _ state of lac operon occurs at a critical point of an inducer concentration .a potential application of early warning signals in gene expression is to identify increased risk of sudden transitions in protein concentration and prevent complex disease onset .recently the concept of regime shifts with associated early warning signals has been used in systems medicine .the expectation is to foresee a sudden catastrophic shift in health condition which may result in a extreme transition to a disease state .now a days it is believed that a detailed understanding of regime shifts in disease onset will provide broad applications in the field of medicine .already detected sudden transitions in medical science include epileptic seizures , depression , pulmonary disease , diabetes mellitus , etc . for the case of type 1 diabetes mellitus , _-cells _ in the pancreas do not produce protein hormone _ insulin _ , which sometimes is a consequence of `` switched off '' state of hla - encoding genes in the cell ( `` switched on '' state of genes in -cell corresponds to activation of insulin production ) .this is due to the fact that genes carry the instructions that cells use for protein production .hence , bistability via `` switching on '' or `` off '' states in `` gene expression '' is also an important topic to study for regime shifts in systems medicine . in this paper, we study a stochastic version of bistable gene regulatory positive feedback loop model to explore the robustness of early warning indicators of regime shifts , for both cases , the critical transition and noise induced transition .we investigate the effect of additive and multiplicative noise intensities , and cross correlation intensity between two noises on the model by calculating the probability density and potential function.we find that increasing the intensity of additive noise induces regime shifts from low to high protein concentration state and vice versa for an increase in multiplicative noise intensity . whereas an increase in the cross correlation intensity from a negative to a positive value between the two noises induce regime shifts from high to low protein concentration state .we also compute mean first passage time ( mfpt ) for escape over the potential barrier .we discuss how one can regulate the production levels of protein . to this end, we apply the early warning signals of regime shifts on the simulated time series data of the stochastic model to examine how successfully one can forewarn regime shifts . our work presents a novel framework for using early warning signals to identify regime shifts , and also their key limitations , in a gene regulatory system .finally , in the discussion section , we conclude the paper with a discussion of the main results together with the applicability and importance of our study .it is well known that autoactivating positive feedback loop is one of the simplest circuit motifs able to exhibit bistable states in gene regulatory process . 
in the circuit ,a single gene encodes a transcriptional factor activator ( tf - a ) , and the tf - a activates its own transcription when bind to a responsive element ( tf - re ) in the dna sequence ( see fig .[ sch ] for a schematic representation ) .the transcription factor activator tf - a is referred to as regulatory protein , is used to control genetic regulation and acts as a pathway mediating a cellular response to a stimulus .the tf - a constructs a homodimer which then binds to tf - re or dna regulatory site .gene incorporates a tf - re and when homodimers bind to the tf - re , transcription of tf - a is increased .homodimer binding to responsive element is independent phosphorylation of dimers .however , transcription can be activated only by phosphorylated tf - a dimers .some dimers phosphorylation will depend on activity of kinases and phosphatases which is controlled by external signals .hence , this genetic circuit incorporates signal - activated transcription as well as positive feedback on tf - a synthesis .let , denotes the protein tf - a , ( ) denotes the unbound(bound ) state of the dna promoter , then the equilibrium reactions can be written as : + + where is the basal expression rate , is cellular volume , is degraded with a rate constant , represents cooperativity in binding , binding(unbinding ) of tf - a homodimer to dna regulatory site with a rate constant ( ) and is protein production rate of tf - a .letting ] and ] , , {k_{d}}} ] .the analyses in this work have been carried out using the above model with dimensionless variable and parameters . for certain parameter values ,( [ eq : diml ] ) has three equilibria , of which two are attractors and third is a saddle point intermediate between the two attractors ( see fig . [fig : bif_pd ] ) .the saddle point works as a basin boundary between the two stable equilibrium points . a thorough analysis of the deterministic model is given in .hereafter , in eq .( [ eq : diml ] ) for sake of simplicity we use , , and in place of , , and respectively , and eq . ( [ eq : diml ] ) becomes : if and , then eq . ( [ eq : fl ] ) exhibits bistability for a range of the parameter . to avoid lengthy calculations , we now replace in eq .( [ eq : fl ] ) , in the rest of this paper .bifurcation diagram for the deterministic model ( black ) and the stochastic model ( green ) .the parameter values are ( for the deterministic model ) and with , and ( for the stochastic model ) .stable steady states are marked with continuous lines and unstable steady states are marked with dashed lines , respectively . the stationary probability distribution for the additive noise model , eq . ( [ eq:3 ] ) , for different values of the production rate is shown in cyan - red - white scale ( color bar , logarithmic scale ) . ] to model ( [ eq : fl ] ) , we investigate the effects of additive and multiplicative noise on the alternative steady states of the gene regulatory circuit .we make this attempt because in the presence of noise , a bistable system may trigger _regime shifts_. it is well known that noise is inherent in any natural system . in this work , we consider that dynamics of the protein concentration ( ) are affected by _ multiplicative _ and _ additive _ noises , similar to hasty et al .these noise terms can induce sudden shifts in the concentration of protein ( ) . in a bistable system , in the absence of noise the system will eventually converge to one of its two stable fixed points . 
in which fixed point it will converge depends upon the initial condition .however , the presence of noise in the system will cause fluctuations in the steady states , which may lead to switching between two different stable states or there can be sudden transition from one stable state to the other stable state . in order to introduce the multiplicative and additive noise terms in eq .( [ eq : fl ] ) , we consider the one - variable langevin equation in the general form as : where and represent gaussian white noise .these noise terms have the following statistical properties : , , , and , where and measure the level of noise strengths of and respectively , is the cross correlation between them , and and denote two different moments .\(a ) + + ( b ) + in eq .( [ eq:3 ] ) the additive noise alters the background protein production .it is also known that in gene expression , transcription is a complex sequence of reactions , thus it is expected that this part of the gene regulatory sequence is also to be affected by fluctuations of many intrinsic or extrinsic parameters .this implies the fact that the transcription rate ( ) can be considered as a random variable . to vary the transcription rate stochastically , we consider .hence , with the aforementioned modification , eq .( [ eq : fl ] ) ( for ) becomes : where , and .hence , the noise is multiplicative , as compared to the additive noise . in the next section we study the stochastic model eq .( [ eq : main ] ) through a combination of _ analytical _ and _ simulation _ techniques .our main aim is to investigate the effects of multiplicative noise intensity , additive noise intensity and cross correlation strength between two noises on the regime shifts between the high and low protein concentration states . in the analytical techniquethese effects are studied by calculating _probability densities , potential functions _ and _ mfpt_. the simulation technique is complementary to the analytical technique , showing how the dynamical properties are captured in our analytical results can be seen within individual realizations based on example parameter sets , and adding information about how observed protein concentrations are arranged in time in these examples . in simulations, we also produce _ time series _ exhibiting regime shifts that can be analyzed using the same techniques as could be applied to _ real time series data _ ( see figs .[ sts](a)-(b ) ) .the example times series can be divided into two broader classes : ( a ) critical transition time series ( fig . [ sts](a ) ) and ( b ) purely noise induced transition time series ( fig .[ sts](b ) ) .we begin this section by writing down the fokker - planck equation for the evolution of probability density of the dynamical variable .let denotes the probability density , which is the probability that the protein concentration attains the value at time .then , the fokker - planck equation ( fpe ) of corresponding to eq .( [ eq : main ] ) is given by : + \frac{\partial^2}{\partial x^2}[b(x)p(x , t)]\;,\ ] ] where ^ 2 + \sigma_2 + 2\lambda\sqrt{\sigma_1\sigma_2}\ ; g(x).\end{aligned}\ ] ] the limit of as yields the _ stationary _ probability density function ( spdf ) of , which we denote as .the spdf , which is the stationary solution of the fpe in eq .( [ eq : fp1 ] ) is given by ( see appendix s1 for more details ) : ,\nonumber \\ & = & \frac{n_{c}}{\sigma_1\left[g(x)\right]^2 + \sigma_2 + 2\lambda\sqrt{\sigma_1\sigma_2}\ ; g(x)}\nonumber \times \\ & & \!\!\!\!\!\!\!\!\ ! 
\exp\left[\int^x\frac{f(x')+\sigma_1g(x')g'(x')+\lambda\sqrt{\sigma_1\sigma_2}\ ; g'(x')}{\sigma_1\left[g(x')\right]^2 + \sigma_2 + 2\lambda\sqrt{\sigma_1\sigma_2}\ ; g(x')}dx'\right],\nonumber\\ \label{eq : spdf1}\end{aligned}\ ] ] potential landscapes demonstrating how changes in a system parameter can cause decrease in resilience of equilibrium point .recovery rates upon stochastic fluctuations are lower if the basin of attraction is small ( d ) than that of a larger basin of attraction ( a ) .the effect of reduced resilience can be determined by stochastic fluctuations induced in a system state ( ( b ) and ( e ) ) as increased standard deviation ( s.d . ) and lag-1 autocorrelation ( ( c ) and ( f ) ) .data sets to plot this figure are generated from eq .( [ eq : main ] ) with , , : ( a - c ) and ( d - f ) .,title="fig : " ] + potential landscapes demonstrating how changes in a system parameter can cause decrease in resilience of equilibrium point .recovery rates upon stochastic fluctuations are lower if the basin of attraction is small ( d ) than that of a larger basin of attraction ( a ) .the effect of reduced resilience can be determined by stochastic fluctuations induced in a system state ( ( b ) and ( e ) ) as increased standard deviation ( s.d . ) and lag-1 autocorrelation ( ( c ) and ( f ) ) .data sets to plot this figure are generated from eq .( [ eq : main ] ) with , , : ( a - c ) and ( d - f ) .,title="fig : " ] where is normalization constant . equation ( [ eq : spdf1 ] ) can also be put in the form : where ^{2 } + \sigma_2 + 2\lambda\sqrt{\sigma_{1}\sigma_{2}}\;g(x)\right ] \nonumber\\ -\int^{x}\frac{f(x')dx'}{\sigma_{1}\left[g(x')\right]^2 + \sigma_{2 } + 2\lambda\sqrt{\sigma_{1}\sigma_{2}}\;g(x')}\;,\label{eq : pot1}\end{aligned}\ ] ] is called stochastic potential of the system .the potential function maps the equilibria of dynamical systems and their basins of attraction , by analogy to a `` energy landscape '' in which the system state tends to move `` downhill '' .extending this concept to stochastic dynamical systems gives a probabilistic potential that complements the spdf in characterizing the asymptotic behavior of the considered system .next we calculate and for three different cases concerning the effects of multiplicative and additive noise on anticipating regime shifts in gene expression .we also calculate bifurcation diagram ( see fig . [fig : bif_pd ] ) with changing the maximum transcription rate for both the deterministic and stochastic model .note that , there is an enlargement of the bistability region for the case of correlated noise . in fig .[ fig : bif_pd ] , we depict the stationary probability distribution of the additive noise model for different values of , which is shown in white - red - cyan colorbar .this gives an idea about how extrema of stationary probability distribution is changing with the parameter .now , using analytical techniques we first determine the parameter space where the system persists bistability still in the presence of stochasticity . then , for a specific set of parameter values , we simulate time series of the system and finally using ews indicators , we will try to detect forthcoming regime shifts .first , we present the effect of an additive external noise source on the regime shifts between high and low concentrations level of protein in gene expression .hence , only the additive noise term is present in eq .( [ eq : main ] ) and we assume that . 
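the stationary - density and potential expressions above are easy to evaluate numerically for the purely additive case just introduced. the sketch below tabulates the stochastic potential and stationary density, and also estimates the mean first passage time over the barrier via the standard double - integral formula (with a reflecting boundary at zero concentration). because the symbols of eq. ([eq:fl]) were lost in this transcription, a generic autoactivating drift f(x) = a*x^2/(x^2 + k) - x + b with illustrative parameter values is assumed (a stands in for the maximum transcription rate), and the noise convention <xi(t)xi(t')> = 2*sigma2*delta(t - t') is used.

import numpy as np

def f(x, a=6.0, k=10.0, b=0.4):
    # hypothetical dimensionless autoactivation drift standing in for eq. ([eq:fl])
    return a * x**2 / (x**2 + k) - x + b

def potential_and_density(sigma2, xmax=12.0, n=4000):
    """stochastic potential phi(x) = -integral of f, and stationary density
    p_st(x) proportional to exp(-phi(x)/sigma2), for purely additive noise."""
    x = np.linspace(1e-3, xmax, n)
    dx = x[1] - x[0]
    phi = -np.cumsum(f(x)) * dx
    p = np.exp(-(phi - phi.min()) / sigma2)
    p /= p.sum() * dx                      # normalize on the grid
    return x, phi, p

def mfpt(x, phi, sigma2, x_start, x_end):
    """mean first passage time from x_start to an absorbing point x_end (> x_start):
    T = (1/sigma2) int_{x_start}^{x_end} dy e^{phi(y)/sigma2} int_{0}^{y} dz e^{-phi(z)/sigma2}."""
    dx = x[1] - x[0]
    inner = np.cumsum(np.exp(-(phi - phi.min()) / sigma2)) * dx
    sel = (x >= x_start) & (x <= x_end)
    return np.sum(np.exp((phi[sel] - phi.min()) / sigma2) * inner[sel]) * dx / sigma2

x, phi, p = potential_and_density(sigma2=0.2)
# the two maxima of p (minima of phi) are the low and high protein states;
# mfpt(x, phi, 0.2, x_start=0.6, x_end=4.3) estimates the escape time from the
# low - concentration well over the barrier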
bifurcation diagram and stationary solutions of the additive noise model are same as for the deterministic model . for , the fpe ( [ eq : fp1 ] )can be written as : +\sigma_{2}\frac{\partial^2}{\partial x^2}[p(x , t)],\ ] ] and similar to eq .( [ eq : spdfpot1 ] ) the stationary solution is written as : where is the potential and is the normalization constant .[ cols= " < , < " , ] here we assess the robustness of early warning signals to forewarn upcoming shifts to an alternative regime by analyzing simulated time series data from the considered stochastic model of genetic regulation .the simulation approach is showing how the characteristics captured in our analytical calculations can also be seen within individual time series realizations based on specific parameter sets .early warning signals are observable statistical signatures that antecede some state shifts .these signals have mainly been derived from the phenomenon of critical slowing down ( csd ) which arises in the vicinity of bifurcation wherein its dominant eigenvalue will cross zero .as the eigenvalue reduces to zero , the response of the system becomes very slow in recovering from perturbations and certain statistical features , such as _ increased variance _ and _ lag- autocorrelation _ , are predicted to appear in the time series analysis as a result ( reviewed in scheffer et al .although many regime shifts appear as sudden transitions to another state in natural systems , the actual fact that not all regime shifts are associated with a bifurcation .hence not all regime shifts associated with different mechanisms are expected to exhibit csd . in dakos , it is pointed out that csd based early warning signals `` are not a panacea for anticipating all types of regime shifts '' .below we present two mechanisms those are responsible for observable regime shifts in our simulated time series .one is associated with the saddle - node bifurcation and exhibits csd and another is associated with stochastic switching ( ss ) ( i.e. , purely noise - induced transition and not associated with bifurcation ) and does not exhibit csd .there has been a recent debate about the success of early warning signals for predicting stochastic switching induced regime shifts . keeping that in mind , we felt that it is worthwhile to present what occurs in our system .for our analyses , we consider three different stochastic time series , each for csd and ss : first we consider the presence of additive noise only , second we consider the presence of multiplicative noise only and finally presence of both noises with correlation in the system . to obtain the time series ,stochastic simulations were performed in matlab ( r2011a ) using the euler - maruyama method and a standard integration step - size of 0.001 . in our simulated time series , we visually identify shifts between low to high protein concentration and vice versa for both the cases , csd and ss ( see figs . [ csd ] and [ ss ] ) .then we took different time series segments ( the gray shaded regions in figs .[ csd ] and [ ss ] ) of different lengths ( keeping in mind that their time lengths should be less than their mfpts ) preceding a regime shift and analyzed those time series for the presence of early warning signals .the early warning signals toolbox " ( http://www.early-warning-signals.org/ ) is used to perform the statistical analyses . to ensure stationarity in residuals , we used gaussian detrending with bandwidth 40 , on the time series data before performing any statistical analysis. 
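the simulated time series analysed in this section can be generated with a few lines of code. the sketch below is a python analogue of the matlab euler - maruyama procedure described above; since the original symbols were lost, the drift f and the multiplicative coupling g(x) = x^2/(x^2 + k) (noise entering through the transcription term) are assumed forms with illustrative parameters, and the correlated white noises are built from two independent gaussian increments.

import numpy as np

rng = np.random.default_rng(0)

def hill(x, k=10.0):
    return x**2 / (x**2 + k)

def f(x, a=6.0, k=10.0, b=0.4):
    return a * hill(x, k) - x + b

def simulate(x0=0.7, t_end=500.0, dt=0.001, sigma1=0.02, sigma2=0.02, lam=0.0):
    """euler-maruyama integration of
        dx = f(x) dt + sqrt(2*sigma1) g(x) dW1 + sqrt(2*sigma2) dW2,
    with corr(dW1, dW2) = lam; g(x) = hill(x) is an assumed form of the
    multiplicative coupling."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        e1, e2 = rng.standard_normal(2)
        dw1 = np.sqrt(dt) * e1
        dw2 = np.sqrt(dt) * (lam * e1 + np.sqrt(1.0 - lam**2) * e2)
        step = (f(x[i]) * dt
                + np.sqrt(2.0 * sigma1) * hill(x[i]) * dw1
                + np.sqrt(2.0 * sigma2) * dw2)
        x[i + 1] = max(x[i] + step, 0.0)   # concentrations kept non-negative
    return np.arange(n + 1) * dt, x

t, x = simulate()
# for these illustrative parameters the trajectory exhibits a noise - induced
# shift from the low to the high protein concentration state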
then using a moving window size of half the length of the considered time series ( i.e. , of the considered time series segment ) , we calculate the variance and autocorrelation in our state variable , , as these two indicators are both very commonly applied to anticipate regime shifts .the autocorrelation at lag-1 is measured by the autocorrelation function ( acf ) : }{\sigma^2},\ ] ] where is the value of the state variable at time , and and are the mean and variance of .variance is the second moment around the mean and measured as : where is the number of observations within the considered time frame . a concurrent increase in both of these indicators forewarn an impeding regime shift .in table i , we summarize results of early warning signals ( see figs .[ csd ] and [ ss ] ) for the considered two different cases of regime shifts with three subcases each .the signals [ variance ( ) and autocorrelation ( ) ] are indicated as `` '' if there is a concurrent rise ; indicated as `` '' if there is a rise , but some false pick in the signals before the regime shift due to the presence of large fluctuations in the data ; and indicated as `` '' if looking at the signals , it is not possible to forewarn an impending regime shift .[ t1 ] l l l l l + + + + & & & & + _ additive _ & + & + & & + & & & & + & & & & + & + & + / & + & + + & & & & + & & & & + _ correlated : _ & & & & + _ additive and _ & + & + / & & + _ multiplicative _ & & & & + the result in table i shows that in the case of csd with all three different types of stochasticity applied to model ( [ eq : main ] ) , the variance is robust and always successfully predict upcoming regime shifts for all the three types of noise. however , the lag-1 autocorrelation is only successful to predict regime shifts for the case of additive noise only and ca nt predict very positively an upcoming regime shift for multiplicative and correlated noise .this establishes the fact that even for the case of csd the variance is more robust than autocorrelation in detecting regime shifts in genetic regulation and it is line with the findings in . for ss , both of the early warning signals are not very successful in detecting regime shifts . using the ews indicators it is not possible to detect upcoming regime shifts for additive and correlated noises ( see fig .however , in the presence of multiplicative noise , the early warning signals give positive results .this is indeed a nice result given the fact that early warning signals are not developed for noise induced transitions , but rather for csd which is associated with bifurcations .nevertheless , our results show that even for the case of csd though the ews somewhat gives positive results , still some regime shifts may not be triggered by _a concurrent _ rise in variance and autocorrelation .this is due to the fact these csd indicators are reliable for specific systems and also have some key limitations .like , there is false positive and false negative errors in data .some other limitations include size of the window which varies across the literature and how much data are required for analyzing ews is still not clear .moreover , variance is a robust indicator as compared to autocorrelation in the case of csd and this is due to the fact that autocorrelation is affected by the length of time series ( see appendix s2 for more details ) and more sensitive to false alarms . 
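for reference, the two indicators computed above can be written compactly; the sketch below is a python analogue of the workflow described earlier (gaussian detrending followed by moving - window variance and lag-1 autocorrelation), not the r toolbox itself. the function names and the use of the sample correlation coefficient for the lag-1 autocorrelation are implementation choices; the quadratic cost of the plain loops is acceptable for the short pre - transition segments analysed here.

import numpy as np

def gaussian_detrend(x, bandwidth=40):
    """subtract a gaussian-kernel smoothed trend (bandwidth in samples)."""
    t = np.arange(len(x))
    trend = np.empty(len(x))
    for i in t:
        w = np.exp(-0.5 * ((t - i) / bandwidth) ** 2)
        trend[i] = np.sum(w * x) / np.sum(w)
    return x - trend

def rolling_ews(resid, window):
    """variance and lag-1 autocorrelation of the residuals in a sliding window."""
    var, ac1 = [], []
    for i in range(window, len(resid) + 1):
        seg = resid[i - window:i]
        var.append(np.var(seg))
        ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])
    return np.array(var), np.array(ac1)

# usage on a pre-transition segment of a simulated trajectory (see the
# euler-maruyama sketch above):
#   resid = gaussian_detrend(segment, bandwidth=40)
#   var, ac1 = rolling_ews(resid, window=len(resid) // 2)
# a concurrent rise in both indicators is read as a warning of an impending shift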
in the case of stochastic switching, anticipating regime shifts is more difficult due to sudden transitions .the significance of these findings are discussed in the discussion section below .in this paper , we investigated a stochastic genetic regulatory circuit using analytical techniques and numerical simulation to anticipate regime shifts in protein concentration . in this respect , we derived an approximate fokker - planck equation from the langevin equation .we studied the effects of intensity of both the additive and multiplicative external noise and their correlation on the gene regulatory system .the present study suggests that the presence of additive noise in the system induces regime shifts from a low protein concentration to a high protein concentration state , whereas multiplicative noise induces regime shifts from a high to a low protein concentration state .moreover , we also show that how a correlation between the additive and multiplicative noise is important in determining regime shifts and hence can regulate the production of protein levels .furthermore , an increase in the cross correlation intensity from a negative to a positive value between the two noises induce regime shifts from high to low protein concentration state . herewe have also computed the robustness of steady states using the mfpt .our mfpt result uncovers the fact that an increase in mfpt of right potential well with the maximum production rate promotes regime shifts from left potential well to the right potential well and vice versa for an increase in the cross correlation intensity .in addition , we used a recently developed tool ( http://www.early-warning-signals.org/ ) of early warning signals to anticipate regime shifts in the considered gene regulatory system using simulated time series data for both the csd and ss .extensive literature on regime shifts is available for ecosystems , financial markets , climatic shifts , etc .however , to the best of our knowledge , there is very less work available on anticipating regime shifts in many areas of developmental biology , specifically in genetic regulation .here for the first time , for a genetic regulatory system , we used the time series analysis based ews approach to predict upcoming regime shifts .as anticipated by some previous authors , we also find that the ews of raising variance and autocorrelation are in general sensitive to false alarms and not always successful to reliably predict impending state shifts in our model .we found that the ews are moderately more successful when we analyzed time series data in advance to a state shift in the case of csd , whereas in the case of ss , it is specific to particular noise only .we observed some key limitations and statistical issues of ews , such as false pick in data , size of the window and length of time series data . for our model , we also verified that rising variance appears as a robust indicator of csd as compared to lag- autocorrelation .this is mainly due to the fact that autocorrelation is sensitive to the length of time series .for accurate estimation of autocorrelation , we need long and equidistant time series data which are not always available for real systems . however , variance is insensitive to this effect and as a result , measuring autocorrelation as ews may increase the possibility of false alarms . 
while testing ews to our simulated time series data , we also observed that selection of window size is another major problem for getting a positive signal of regime shifts .in the case of ss , anticipation of regime shifts is extremely difficult because a bifurcation point is neither approached nor crossed and there is suddenly a phase transition .anticipating regime shifts in gene regulatory system ( aka in protein concentration level ) can be useful to prevent disease onset and progression which may intercept unacceptable sudden transitions from a healthy state to a disease state .examples of such regime shifts are asthma attacks , epileptic seizures and sudden deterioration of complex diseases . a well documented example of regime shift is type 1 diabetes ( t1d ) which is a form of diabetes mellitus .t1d is a chronic inflammatory disease caused by insufficient production of insulin by cells in the pancreas .the genetic association of t1d is that the production of insulin through cells depends on hla - encoding genes .if t1d associated genes are in `` on '' expression state , then cells produce insulin and releases insulin into the blood stream , but if they are in `` off '' expression state cells fail to produce insulin which leads to t1d . prediction of regime shifts in gene expression using ews from normal ( `` on '' ) to diabetic ( `` off '' ) state could prevent the switch to diabetic state and help to maintain the level of insulin . in summary , our work reveals that stochasticity can have diverse complex effects on genetic regulatory systems .early warning signals to anticipate forthcoming regime shifts in gene expression requires special attention to the underlying various statistical issues and limitations .in addition , to select a suitable window size and data length raise further difficulties .the main challenge of detecting early warning signals includes risk of false alarms and failed detections .one needs a deeper understanding of early warning signals of regime shifts , and how a balance between early warning signals and false alarms is achieved , will lead to important new insights in genetic regulation .moreover , our results establish the important fact that finding a more robust indicator of regime shifts in complex natural systems is still in its infancy and demands extensive research .we hope that this study may also increase the interest among researchers to find a more robust indicator for detecting upcoming sudden transition in much broader class of systems in developmental biology .* appendix s1 * * derivation of stationary probability density function . * ( pdf ) + * appendix s2 * * additional examples of early warning signals . * ( pdf )p.s.d . acknowledges financial support from isird , iit ropar grant no . iitrpr / acad./52 .the authors thank t. banerjee and ramesh a. for their helpful comments on the manuscript .we thank david frigola for sharing his code on the stationary probability distribution .conceived and designed the experiments : ys psd . performed the experiments : ys psd . analyzed the data : ys psd .contributed reagents / materials / analysis tools : ys psd akg . wrote the paper : psd ys .m. scheffer , s. r. carpenter , t. m. lenton , j. bascompte , w. a. brock , v. dakos , j. van de koppel , i. a. van de leemput , s. a. levin , e. h. van nes , m. pascual , and j. vandermeer . . ,338:344348 , october 2012 .i. a. van de leemput , m. wichers , a. o. j. cramer , d. borsboom , f. tuerlinckx , p. kuppens , e. h. van nes , w. viechtbauer , e. j. 
giltay , s. h. aggen , c. derom , n. jacobs , k. s. kendler , h. l. j. van der maas , m. c. neale , f. peeters , e. thiery , p. zachar , and m. scheffer . critical slowing down as early warning for the onset and termination of depression . , 111(1):87 - 92 , 2014 . j. g. venegas , t. winkler , g. musch , m. f. v. melo , d. layfield , n. tgavalekos , a. j. fischman , r. j. callahan , g. bellani , and r. s. harris . self - organized patchiness in asthma as a prelude to catastrophic shifts . , 434:777 - 782 , 2005 .
|
considerable evidence suggests that anticipating sudden shifts from one state to another in bistable dynamical systems is a challenging task ; examples include ecosystems , financial markets , complex diseases , etc . in this paper , we investigate the effects of additive , multiplicative and cross - correlated stochastic perturbations on regime shifts in a bistable gene regulatory system , which gives rise to two distinct states of low and high protein concentration . we obtain the stationary probability density and mean first passage time of the system . we show that increasing the additive ( multiplicative ) noise intensity induces a regime shift from the low ( high ) to the high ( low ) protein concentration state . however , an increase in the cross correlation intensity always induces regime shifts from the high to the low protein concentration state . for both bifurcation induced ( often called tipping point ) and noise induced ( called stochastic switching ) regime shifts , we further explore the robustness of recently developed critical slowing down based early warning signal ( ews ) indicators ( e.g. , rising variance and lag-1 autocorrelation ) on our simulated time series data . we find that , using ews indicators , predicting an impending bifurcation induced regime shift is relatively easier than predicting a noise induced regime shift in the considered system . moreover , the success of ews indicators also strongly depends upon the nature of the noise . our results establish the key fact that finding more robust indicators to forewarn regime shifts for a broader class of complex natural systems is still in its infancy and demands extensive research .
|
water acts in many ways , from dissolving salt to creating the medium for all life on earth .it is so important and multifaceted that whole books can be and are written about it .see , for example , david eisenberg s and walter kauzmann s timeless and recently re - issued monograph on the physical properties of water .our charge in these three lectures is to describe something about this material from a molecular perspective . in limiting scope , we focus on thermal fluctuations and their consequences on solvation and self - assembly .the presentation is like that of a textbook chapter , not a comprehensive review .we make use of statistical mechanics at the level it is treated in ref .while our focus is on water , what we say applies to much of liquid matter , where good background is found in ref .barrathansen2003 . the perspective we adoptis influenced by the results of computer simulations because this approach provides unambiguous microscopic detail , albeit for idealized models .combined with experiments and theory , simulation is central to all modern understanding of condensed matter .water is a most important example .computer simulations of water were pioneered in the 1970s by aneesur rahman and frank stillinger .many subsequent advances have validated their general approach and enhanced our understanding of water .reviews of that early work in refs . and remain informative to this day .our first lecture covers properties of pure water , particularly distribution functions related to local arrangements of water molecules .the second lecture is about free energies of solvation and how these free energies are related to the statistics of spontaneous molecular fluctuations in water .the third and last lecture builds from that stage to treat forces of self assembly , especially hydrophobic forces , which act on molecular and supramolecular scales .figure [ fig : waterstructure ] illustrates the water molecule and its most significant interaction the hydrogen bond . though the molecule is quite polar , its electron density is dominated by the electrons of the oxygen atom . as such , the space - filling volume of a water molecule is approximately spherical with van der waals radius of , like that of its isoelectronic partner , neon . because this volume is roughly spherical , it is often convenient to identify the position of a water molecule with the position of its oxygen nucleus .the chemical bond is about long and the angle is about .theory for dewetting requires treatment of both interfaces and small length - scale fluctuations .high free - energetic costs of solvating excluded volumes at small length scales gives way to lower costs in the presence of soft interfacial fluctuations .here , we describe how to build a theory that captures this physics with a density field that describes interfaces and a coupling of that field to small - length scale fluctuations .the development uses some elements of statistical field theory , and while it is therefore a step beyond the simplicity adopted in the earlier parts of our lectures , good textbooks on the topic do exist .see , for instance , mehran kardar s kardar2007b .interfaces are well described by density fields that vary slowly on molecular scales . 
in the simplest case , the energetics of such a field , ,is given by a landau - ginzburg hamiltonian of the form = \int \text{d}\mathbf{r } \left [ w(n(\mathbf{r } ) ) + \frac{m}{2 } | \nabla n(\mathbf{r } ) |^2 \right ] \,,\ ] ] where we use subscript `` l '' to indicate that this hamiltonian applies to a fluid on _ large _ length scales only .the quantity is a local ( grand canonical ) free energy density in units of , and the parameter determines the free energy cost to create an inhomogeneity . in mean field theory , the average is the function that minimizes this hamiltonian ( subject to whatever constraints specify the ensemble considered ) .this minimization produces a spatially invariant value for , except at conditions of phase coexistence where an interface separates volumes with different values of the field .the interfacial tension for this interface is proportional to and the shape of the interface is determined by the function .these relationships can be read about in standard texts , e.g. ref .rowlinsonwidom2003 .the actual molecular density , , can be written as a slowly varying field like plus a correction , where the correction accounts for fluctuations that occur on small - length scales .in particular , we take where is the small - length scale part . there is flexibility in defining this decomposition , but it is important that varies little over a length , which is correlation length of the homogeneous liquid . with this generic criterion ,the vapor phase of water is where is close to zero , and the liquid phase is where is close to the density of liquid water .equation describes the energetics of , and the interface it forms is the liquid - vapor interface . to the extent that is a constant equal to the average density of the liquid, will have the gaussian hamiltonian , i.e. , = \frac{1}{2}\int d\mathbf{r } \int d\mathbf{r } ' \,\delta \rho(\mathbf{r})\ , \chi^{-1}(\mathbf{r},\mathbf{r}')\ , \delta \rho(\mathbf{r})\ ] ] where is the functional inverse of the density - density correlation function ,i.e. , .the subscript `` s '' indicates that this hamiltonian applies to _ small_-length scale fluctuations .the presence of a large enough solute will force the fluid to be inhomogeneous on large length scale . to account for that possibility, the two fields and must be coupled , and the simplest way to do so is with a bi - linear form .specifically , we take = h_{\mathrm{l}}[n(\mathbf{r } ) ] + h_{\mathrm{s}}[\delta \rho(\mathbf{r } ) ] + h_{\mathrm{i}}[n(\mathbf{r } ) , \delta \rho(\mathbf{r})]\,,\ ] ] with = \int d \mathbf{r}\int d \mathbf{r } ' \,n(\mathbf{r})\,u(\mathbf{r } , \mathbf{r}')\,\delta \rho(\mathbf{r } ' ) \,+\ , h_{\mathrm{norm}}[n(\mathbf{r})],\ ] ] where ] stands for _ interaction _ , and the symmetric function specifies the strength and range of the interaction between the two fields while the unperturbed liquid partition sum in eq. leads back to the landau - ginzburg description at large - length scales , a non - trivial alteration occurs in the presence of imposed inhomogeneity . 
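a one - dimensional version of the mean - field picture sketched above is easy to compute explicitly. the code below relaxes a density profile n(z) toward the minimizer of the landau - ginzburg functional, with vapor on one side and liquid on the other, and evaluates the resulting surface tension. the specific double - well form of w and all parameter values are illustrative stand - ins (in units of kT and of the bulk liquid density), not the water parameters used in these lectures.

import numpy as np

nl, c, m = 1.0, 1.0, 1.0            # liquid density, barrier scale, stiffness (illustrative)

def w(n):
    return c * n**2 * (n - nl)**2    # double well with minima at vapor (0) and liquid (nl)

def dw(n):
    return 2.0 * c * n * (n - nl) * (2.0 * n - nl)

def interface_profile(L=20.0, nz=200, steps=30000, eta=2e-3):
    """steepest-descent relaxation of the 1d functional
       H[n] = integral dz [ w(n) + (m/2) (dn/dz)^2 ]."""
    z = np.linspace(0.0, L, nz)
    dz = z[1] - z[0]
    n = nl / (1.0 + np.exp(-(z - 0.5 * L)))     # smooth initial guess
    n[0], n[-1] = 0.0, nl                       # vapor on the left, liquid on the right
    for _ in range(steps):
        lap = (np.roll(n, -1) - 2.0 * n + np.roll(n, 1)) / dz**2
        n[1:-1] += eta * (m * lap - dw(n))[1:-1]   # n <- n - eta * dH/dn
    grad = np.gradient(n, dz)
    gamma = np.sum(w(n) + 0.5 * m * grad**2) * dz  # excess free energy per unit area
    return z, n, gamma

z, n, gamma = interface_profile()
# gamma approaches the analytic value sqrt(2*m*c) * nl**3 / 6 for this double well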
in the context of possible de - wetting ,the most important of these alterations comes from excluded volume due to a solute .this constrains the partition sum in a fashion that is compactly described with the functional = \begin{cases } 1,&\text{when for all ,}\\ 0,&\text{otherwise } , \end{cases}\ ] ] where denotes the volume that the solute excludes from the solvent .the volume can be complicated , indeed not even contiguous .see for instance the excluded volumes depicted in fig . [ fig : solvationcartoon ] of lecture two .the excluded volume constraint is common to all solutes , hydrophobic or hydrophilic . for the latter , a different constraint functionalcould also be employed , one that binds solvent a regions of space adjoining the excluded volumes . with the constraints imposed by the solute ,the partition sum over small - length scale density fluctuations is then \}\,c_{v}[n(\mathbf{r } ) + \delta \rho(\mathbf{r } ) ] = \exp\{-\beta \overline{h}_v[n(\mathbf{r } ) ] \}\,,\ ] ] where = h_{\mathrm{l}}[n(\mathbf{r})]\,+\,\delta h_v[n(\mathbf{r})]\ ] ] + the alteration to the landau - ginzburg hamiltonian , ] . 1 .if the excluded volume is not much larger than the correlation volume of the unperturbed liquid , , or if it is composed of several distantly separated excluded volumes , each one not much larger than a correlation volume , then ] is no longer constant and is relatively large when for . to avoid this energetic penalty , probable configurations of the slowly varying fieldswill adjust to make within the excluded volume .as such , according to eq . , the probability for configurations of the slowly varying field will be maximal at configurations with a liquid - vapor - like interface adjacent to the excluded volume .this latter situation is that of dewetting .regions of space where dewetting occurs is where an excluded volume causes the average value of to be zero .it occurs only for cases of sufficiently large excluded volumes .the field governed by eq . is a continuum version of an ising model or lattice - gas model .we see from the theory sketched above that the underlying physics of dewetting is captured by coupling of an ising - like field to a small - length scale field through excluded volume perturbations . indeed , with a single fixed parameter the strength parameter , the results of these equations agree quantitatively with those of computer simulations , as we have recently shown in detail in ref .varillypatelchandler2011 . the first general theoretical treatment of dewetting in water and its role in hydrophobicity was provided by the lum - chandler - weeks ( lcw ) theory .that theory is the mean - field approximation to what we have presented above .in particular , where is the field that minimizes ] is then taken to be a lattice - gas hamiltonian , and \approx \sum_i c\ , n_i v_i$ ] , where is a positive constant , and is the volume excluded by solutes in cell .that is , \approx - \epsilon \sum_{i , j}{\,}^ { ' } \,n_i\,n_j\ , + \,\sum_i ( c\ , v_i - \rho\ , \mu)\,n_i\,,\ ] ] where is the chemical potential of the liquid and the primed sum is over nearest neighbors .the terms account for excluded volume in liquid - like regions , with the free energy cost for this excluded to be proportional to the size of that volume .proportionality to excluded volume is approximately consistent with small - length scale hydrophobic free energies of solvation , as we explained in lecture two .this version of the theory is particularly easy to implement . 
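as an illustration of how easy this lattice version is to implement, the sketch below runs metropolis monte carlo on the binary field n_i with the hamiltonian of eq. (in kT units), with a single spherical excluded volume at the centre of the box. all numerical values here (lattice spacing, coupling, chemical potential, the excluded - volume penalty c, and the solute radius) are illustrative choices tuned so that the liquid sits close to coexistence and cells overlapped by the solute are essentially always empty; they are not the parameters of the published model.

import numpy as np

rng = np.random.default_rng(1)

def lattice_gas(L=16, sweeps=400, cell=0.2, beta_eps=1.8, beta_mu_rho=-5.2,
                beta_c=3000.0, r_solute=0.9):
    """metropolis sampling of
        beta*H = -beta_eps * sum_<ij> n_i n_j + sum_i (beta_c * v_i - beta_mu_rho) n_i,
    where v_i is the solute volume (nm^3) overlapping cell i, cell is the lattice
    spacing in nm, and a sphere of radius r_solute (nm) sits at the box centre."""
    n = np.ones((L, L, L), dtype=int)                       # start from uniform liquid
    x = (np.arange(L) + 0.5) * cell
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    centre = 0.5 * L * cell
    inside = (X - centre)**2 + (Y - centre)**2 + (Z - centre)**2 < r_solute**2
    v = inside * cell**3                                     # excluded volume per cell
    for _ in range(sweeps * L**3):
        i, j, k = rng.integers(0, L, size=3)
        nn = (n[(i + 1) % L, j, k] + n[(i - 1) % L, j, k]
              + n[i, (j + 1) % L, k] + n[i, (j - 1) % L, k]
              + n[i, j, (k + 1) % L] + n[i, j, (k - 1) % L])
        dn = 1 - 2 * n[i, j, k]                              # proposed flip
        d_beta_h = dn * (-beta_eps * nn + beta_c * v[i, j, k] - beta_mu_rho)
        if d_beta_h <= 0 or rng.random() < np.exp(-d_beta_h):
            n[i, j, k] += dn
    return n, inside

n, inside = lattice_gas()
# inspecting the mean occupancy in shells of increasing distance from the solute
# shows whether a vapor-like (dewetted) layer develops next to the excluded volume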
with judicious choices of lattice spacing and the constant , most qualitative features of dewetting and hydrophobic forces of assemblyare captured correctly .the principal feature it fails to capture is the slow approach to the macroscopic surface - area scaling illustrated in fig .[ fig : solvationscaling ] . instead of a broad crossover from small to large - length scale hydrophobicity , this simplest version exhibits a relatively abrupt crossover around 1 nm . this deficiency can be ameliorated , as we have detailed in ref .varillypatelchandler2011 , but the simplicity of the model described with eq . makes it an attractive choice for easy estimates of roles of hydrophobic forces .figure [ fig : twc02trajectory ] shows a result obtained from this form of modeling .depicted are snap shots of a trajectory illustrating the collapse of a chain of hard spheres in water .the chain follows newtonian dynamics with friction and random forces reflecting the effects of the small - length scale field that has been integrated out .the liquid s dynamics is monte carlo .motion of the chain is reflected in changes of , and as such the liquid s slowly varying field couples to the dynamics of the chain .the two move together illustrating the collective nature of hydrophobic forces of assembly .the intra - chain forces for the chain considered in that figure are such that the extended chain is most stable configuration in the gas phase .solvation , in this case hydrophobic forces of assembly , make the globule state the most stable configurations in the liquid .the half - life of the extended chain in the liquid is about 1 .the pictured trajectory shows parts of a trajectory during the relatively short period of time when the chain passes from its metastable extended state to the stable globule state .the transition state occurs when a random fluctuation of the chain produces a large enough cluster of hard spheres to nucleate a soft liquid - vapor - like interface . at that stage, water moves away with relatively low free energy cost and the chain collapses to its most stable thermodynamic state . in effect, the reorganization of the chain boils away the water .it is a suggestive of the possibility of a nano - scale steam engine , albeit in a much more complicated device than a simple hydrophobic chain .it is a possibility that we ponder and wonder if it exists in some biological molecular motor .example collapse trajectory of a model hydrophobic polymer embedded in solvent .frame ( a ) shows a typical extended configuration . frames ( b ) , ( c ) and ( d ) are snapshots from a collapse trajectory , with the configuration shown in frame ( c ) being a transition state .the white cubes are the cells in the lattice gas where is . in this model , the cell size is chosen to be . from ref .tenwoldechandler2002.,width=491 ] , , 3rd edition ( benjamin cummings ) 1999 , eq . ( 4.58 ) .the displacement field of a point charge is given in example 4.5 , and an additional factor of appears due to different electrostatic unit conventions .
|
this paper is the written form of three lectures delivered by one of us ( dc ) at the international school of physics `` enrico fermi '' , course clxxvi , `` complex materials in physics and biology '' , held in varenna , italy in july 2010 . it describes the physical properties of water from a molecular perspective and how these properties are reflected in the behaviors of water as a solvent . theory of hydrophobicity and solvation of ions are topics included in the discussion . [ citation : d. chandler and p. varilly , in _ proceedings of the international school of physics `` enrico fermi '' _ , vol * 176 * , edited by f. mallamace and h. e. stanley ( ios , amsterdam ; sif , bologna , 2012 ) , pages 75111 . http://dx.doi.org/10.3254/978-1-61499-071-0-75 ]
|
discrete models have a long and successful history in systems biology , beginning with boolean network representations of molecular networks and their later generalization , so - called logical models .they are qualitative , time - discrete models that are particularly suitable for the analysis of steady state behavior of molecular networks . however , as models become larger it is increasingly difficult to analyze them . in order to keep the analysis of such networks tractable ,many studies have focused on specific classes of networks such as : single - switch , unate , nested canalizing , threshold , and , and - or , and linear networks . in order to be useful for modeling, a family of networks has to be `` sufficiently general '' for modeling biological interactions and `` simple enough '' for theoretical analysis . in this paperwe propose the family of and - not networks as such family .and - not networks are a particular the class of boolean networks that are constructed using only the and ( ) and not ( ) operators .a biological justification for the use of and - not networks is that there is evidence that for genes that are regulated by more than one other gene , the different binding sites exhibit synergistic effects between the different regulators .this fact motivated the study of conjunctive boolean networks , that is , networks whose logical rules are constructed using exclusively the and operator , where explicit formulas for steady states are given ; also , upper and lower bounds for the number and length of limit cycles are provided .but conjunctive boolean networks can not account for inhibitory regulation and the resulting negative feedback loops , which are common in gene regulatory networks .allowing the not operator , in addition to the and operator ( i.e. using and - not networks ) , can make the family of networks sufficiently general to be useful for modeling . for a formal argument that the family of and - not networks is general enough for modeling, we will show that any discrete model ( finite dynamical system , to be precise ) can be represented by an and - not network .more precisely , we present an algorithm that assigns to a given general discrete model an and - not network which has the same number of steady states , together with an algorithmic correspondence between steady states of the two networks .this is achieved by adding nodes to the network as needed .the potential drawback of this algorithm is of course that the network size can potentially get significantly larger , thereby potentially negating any computational advantage gained by the specialized logic .however , since molecular networks have typically small in - degree , this growth in the number of network nodes to be added is modest in the case of molecular network models .we demonstrate this through an analysis of several published models and random networks . 
to argue that and - not networks are simple enough for theoretical analysis , we will show how using the specialized logic of and - not networks can provide better theoretical results .for example , in , it was shown that an upper bound for the number of steady states can easily be computed for and - not networks ( which is not true for arbitrary networks ) .also , in , it was shown that the exact number of steady states of and - not networks are encoded in the topological features of the wiring diagram , and that , in some cases , the problem of finding the exact number of steady states can be transformed to the problem of finding maximal independent sets of the wiring diagram , which has been extensively studied . in this paper we will show how the specialized logic of and - not networks can give us better upper bounds for the number of steady states ; more precisely , we provide an upper bound for and - not networks that improves on previous upper bounds .furthermore , we show how this upper bound for and - not networks can actually be used for general networks .we use our results to analyze a boolean model of th - cell differentiation .another theoretical advantage of and - not networks is that they are in a one - to - one correspondence with their wiring diagrams .this observation has several implications , one of which is the possibility to relate dynamic network properties with features of the wiring diagram .also , from a given signed wiring diagram one can unambiguously construct and and - not network , which implies that all algorithms or results can be stated at the `` wiring diagram level . ''for a signed directed graph , we denote , and .that is , is the set of all incoming edges for node , and , resp . is the subset of positive , resp .negative , edges .all graphs in the rest of the paper will be signed directed graphs unless noted otherwise . in order to simplify the graphical representation ,we denote two negative ( positive ) edges between and by a bidirectional negative ( positive ) edge , ( ) .if the edges have different signs we denote them by .an _ and - not function _ is a boolean function , , such that can be written in the form where .if , then is the constant function 1 . if ( , respectively ) we say that or is a _ positive _ ( _ negative _ ) regulator of or that it is an activator ( repressor ) . an _ and - not network _ is a boolean network ( bn ) , , such that is an and - not function for all . and- not networks are also called _ signed conjunctive networks_. 
the _ wiring diagram _ of an and - not network is defined by a graph with vertices ( or ) and edges given as follows : ( , respectively ) if is a positive ( negative , respectively ) regulator of .notice that nodes corresponding to constant functions have in - degree zero .also , the wiring diagram of an and - not network contains all the information about the network ; that is , we only need to specify the wiring diagram in order to define an and - not network .[ eg : anbn ] consider the boolean network given by + .it is easy to see that is an and - not network .its wiring diagram is shown in figure [ fig : anbn ] .as mentioned in the introduction , some other families of networks that have been studied in the past are single - switch , linear , and , and - or , unate and nested canalyzing functions .each family has its own advantages ; however , for the purpose of modeling biological systems and for theoretical analysis , it is of interest to have the following properties : first , networks generated using these families should be able to admit a sign assignment ; that is , it should be possible to determine the sign of an interaction .second , in principle , it should be general enough to model all networks ; that is , it should be possible to model any type of regulation .third , for theoretical analysis , it would be useful to have a one - to - one correspondence between wiring diagrams and networks .this property would allow complete encoding of a network in its wiring diagram .the family of linear functions satisfies the third property but not the first two .the family of and functions satisfies the first and third property but not the second .the family of and - or functions satisfies the first property but not the last two .single - switch , unate , and nested canalyzing functions satisfy the first two properties but not the third . on the other hand , and - not networkssatisfy all three properties .the first property is satisfied because the sign of a regulation is given by the presence or absence of the not operator .the third property follows from the fact that if the positive and negative edges to are given by and , resp ., then the function for node is uniquely given by .the second property is given by the fact that any finite dynamical system can be expressed as an and - not network .more precisely , theorem [ thm : main ] guarantees that steady states are preserved if we rewrite a general finite dynamical system as an and - not network .in this section we show why and - not networks are a good framework for modeling biological systems .one issue that can potentially arise when only using certain classes of networks is that one can have difficulty in modeling certain processes .for example , the family of and networks does not allow modeling negative interactions .another example is that the family of linear networks , does not allow modeling signed interactions . 
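Returning to the one-to-one correspondence noted above between AND-NOT networks and their signed wiring diagrams, a short sketch makes it concrete: the update rule of each node is read off the diagram as the AND of its positive regulators and the negations of its negative regulators (an empty product correctly gives the constant function 1 for in-degree-zero nodes), and steady states are found by exhaustive search. The three-node toggle-switch-like example diagram below is ours, not the one of figure [ fig : anbn ].

```python
from itertools import product

# Signed wiring diagram: j in positive[i] means an edge j -> i with sign +,
# j in negative[i] means an edge j -> i with sign -.  Example diagram (ours):
# node 1 represses 2 and is repressed by 2; node 3 is activated by 1, repressed by 2.
positive = {1: [], 2: [], 3: [1]}
negative = {1: [2], 2: [1], 3: [2]}
nodes = sorted(positive)

def update(state):
    """Synchronous AND-NOT update read off the wiring diagram."""
    return {i: int(all(state[j] for j in positive[i]) and
                   all(not state[j] for j in negative[i]))
            for i in nodes}

steady = []
for bits in product((0, 1), repeat=len(nodes)):
    state = dict(zip(nodes, bits))
    if update(state) == state:
        steady.append(state)
print("steady states:", steady)
```

For this diagram the search returns the two expected steady states (node 1 on with node 2 off, and vice versa), illustrating how the diagram alone determines both the network and its fixed-point behaviour.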
in order for a family of networks to be useful for modeling ,is has to allow modeling any type of interaction .here we show that the family of and - not networks is general enough for modeling .more precisely , we show that for any finite dynamical system , there exists an and - not network ( possibly with more nodes ) such that they share key dynamical properties .[ thm : main ] let be a finite dynamical system , where and all s are finite .then , there exists an and - not network such that there is a bijection between the steady states of and .furthermore , and the bijection between steady states is given algorithmically .we say that is an and - not representation of .a simple proof uses the facts that any finite dynamical system can be written as a boolean network , and that any boolean function has a conjunctive normal form .in , the authors proved algorithmically that for any finite dynamical system , there exists a boolean network ( possibly with more nodes ) such that and have the same number of steady states .furthermore , the bijection of steady states is also given algorithmically .therefore , we only need to show that there exists and and - not network , such that there is a bijection between the steady states of and .we proceed by induction .first , consider the conjunctive normal form of : , where is of the form with ( function ) .notice that is an and - not function .then , define the bn in variables by for , and for .we now check that the function gives a one - to - one correspondence between steady states of and .suppose that , then + + ; that is , is a steady state .now , suppose that and notice that in this case ; then . also , .that is , is a steady state of .therefore , is a bn where are and - not functions and such that there is a one - to - one correspondence between the steady states of and . by induction, it follows that there is an and - not network together with a bijection between the steady states of and .therefore , there is a bijection between the steady states of and .furthermore , and the bijection are given algorithmically .the transformation of finite dynamical systems to boolean networks has been discussed in .so , in the rest of the paper we will focus on boolean networks and and - not networks . [ eg : basic ] consider the bn given by , , , , .the wiring diagram of is given in figure [ fig : basic ] ( left ) . in order to transform this bn to an and - not networkwe introduce the variable with boolean function and .variables and will be used in and . noticethat since appears again in , we can simply reuse to keep the number of extra variables as small as possible . thenthe and - not network is given by , , , , , , . the wiring diagram of is shown in figure [ fig : basic ] ( right ) .an additional step in the transformation that can keep the number of extra variables small is given by the following proposition .[ prop : andor ] let be a bn and define by , where . 
then and are dynamically equivalent .it is enough to notice that is invertible with inverse .then , ; that is , evaluating is equivalent to evaluating .if some functions of a bn are or - not functions , then we can use proposition [ prop : andor ] to transform the bn into a bn in the same number of variables such that the or - not functions become and - not functions .also , proposition [ prop : andor ] can be used to transform constant functions into constant functions ( if , then the -th coordinate function of is the constant function 1 ) .[ eg : basicor ] consider the bn given by , , .the wiring diagram of is in figure [ fig : basicor ] ( left ) .since is an or - not function , we can transform it to a and - not function using proposition [ prop : andor ] .consider , given by , with wiring diagram shown in figure [ fig : basicor ] ( right ) .then , is dynamically equivalent to an and - not network .notice that the effect of this transformation on the wiring diagram is simple , we simply change the signs of the edges around node 2 . as mentioned in , an advantage of transforming finite dynamical systems into boolean networksis that it can provide insight into the role of feedback loops by disentangling them . in this sense , transforming finite dynamical systems into and - not networks can pass all the information of the role of feedback loops to the wiring diagram . in this case , the wiring diagram is not only a rough representation of the network , but it encodes all the information of the network ; in this sense the wiring diagram `` becomes '' the network .this has the potential to reduce the problem of studying the structure of the state space graph ( which has elements ) to studying the structure of the wiring diagram of the and - not representation ( which has elements ) .this can help in understanding the precise role of the network topology in the network dynamics .a similar approach was used successfully to study conjunctive and linear networks . for practical purposesit is important to obtain an estimate of how much the and - not representation can increase the number of variables . for arbitrary boolean networks , the number of extra nodes can be exponential in the number of nodes . however , boolean models of biological systems are not arbitrary and are actually very sparse with very low in - degree ( typically described by a power law distribution ) .we will now show that in practice the number of variables introduced by the algorithm can be small ..number of extra variables introduced by the and - not representation .the number of nodes of and its and - not representation , , are denoted by , , respectively .the bns were taken from . [cols="<,<,>",options="header " , ] in order to study this question , we have applied the procedure to several published models in the literature and studied the question using randomly generated boolean networks .the first study shows that the increase in the number of variables for published models is modest ( table [ table : num ] ) .the number of variables was increased by 14% on average with a maximum value of 4 extra nodes . in order to determine the number of extra nodes introduced by our algorithm for more general bns, we did a statistical analysis . 
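Before turning to that statistical analysis, the rewriting whose cost it measures can itself be sketched compactly. The function below is a minimal implementation of the CNF-based construction used in the proof of theorem [ thm : main ]: each rule is put in conjunctive normal form, every multi-literal clause is replaced by a fresh auxiliary node carrying the AND of the negated literals (auxiliaries are reused when a clause repeats, as in example [ eg : basic ]), and the original node then takes the AND of the negated auxiliaries. As the theorem asserts, this preserves steady states, though not the transient dynamics. The code relies on sympy, and the three example rules and the auxiliary-variable naming are ours, not taken from the paper.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, to_cnf

def and_not_representation(update_rules):
    """Rewrite Boolean update rules into AND-NOT form via their CNF, adding one
    auxiliary node per multi-literal clause (reused when the clause repeats)."""
    aux, counter, new_rules = {}, 0, {}
    for node, rule in update_rules.items():
        cnf = to_cnf(rule, simplify=True)
        clauses = cnf.args if isinstance(cnf, And) else (cnf,)
        factors = []
        for clause in clauses:
            lits = clause.args if isinstance(clause, Or) else (clause,)
            if len(lits) == 1:
                factors.append(lits[0])             # a bare literal is already AND-NOT
                continue
            key = frozenset(map(str, lits))
            if key not in aux:
                counter += 1
                aux[key] = symbols(f"y{counter}")
                new_rules[aux[key]] = And(*[Not(l) for l in lits])  # AND of negated literals
            factors.append(Not(aux[key]))
        new_rules[node] = And(*factors) if len(factors) > 1 else factors[0]
    return new_rules

x1, x2, x3 = symbols("x1 x2 x3")
rules = {x1: x2 | ~x3, x2: x1 & x3, x3: ~x1}        # illustrative rules, not from the paper
for node, rule in and_not_representation(rules).items():
    print(node, "<-", rule)
```

In this example only the OR clause of the first rule generates an auxiliary node, so the AND-NOT representation has four nodes instead of three, in line with the modest growth reported in the tables.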
to mimic wiring diagrams coming from biological systems ,the edges followed a power law distribution and we considered the maximum in - degree less than or equal to for ( see appendix a for details ) .the results of this second study are shown in table [ table : k ] .for example , all networks where nodes have in - degree bounded by can be transformed to and - not networks without increasing the number of nodes . for networks where nodes have in - degree bounded by , our method increases the number of nodes by on average ( see appendix a for details ) .it is important to mention that in both tables , the growth in the number of extra nodes is far less than exponential . as mentioned in the introduction , the specialized logic of and - not networks can be used to obtain better theoretical results .such results can arise directly ( e.g. ) or by applying results about general boolean networks to the family of and - not networks .in this section we show examples of the latter .first , we need the following definitions .let be a feedback loop of a graph .we say that is a _strong _ feedback loop if there are no edges of the form , in such that .for example , consider the graph in figure [ fig : strong ] . the feedback loop is not strong because of the edges , ; and are not strong because of the edges , .all other feedback loops are strong .our first result in this section is an application of ( * ? ? ?* theorem 3.2 ) to the family of and - not networks ( see appendix b for the proof ) .[ thm : strong ] let be the wiring diagram of an and - not network , and suppose intersects all strong positive feedback loops of .then , the number of steady states is at most . consider the and - not network with wiring diagram given in figure [ fig : strong ] .the only strong positive feedback loops are and .since intersects them , theorem [ thm : strong ] guarantees that there are at most steady states . intuitively , theorem [ thm : strong ] is telling us which positive feedback loops contribute to the presence of steady states ; it says that they have to be strong .we also provide a slight generalization of theorem [ thm : strong ] .we need the following definition .a feedback loop of a graph is called _ inconsistent _ if there is a vertex such that there is a positive path of the form from to and a negative path of the form , from to such that , are not edges in and .when such vertex does not exist , we say that is _consistent_. for example , consider the graph in figure [ fig : completion ] .the positive feedback loop is inconsistent because of the paths and .the positive feedback loop is inconsistent because of the paths and . also , the positive feedback loop is inconsistent because of the paths and .then , the only consistent feedback loops are and .we say that a set _ dominates _ a graph if intersects all consistent positive feedback loop and for each feedback loop that is inconsistent and strong , intersects or contains at least one .for example , the set dominates the graph in figure [ fig : completion ] . 
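These notions can be explored computationally on small diagrams. The sketch below enumerates the positive feedback loops of a signed wiring diagram (directed cycles whose edge signs multiply to +1) and brute-forces a smallest vertex set S meeting all of them; since such a set in particular meets every strong positive feedback loop, theorem [ thm : strong ] then bounds the number of steady states by 2^|S|. The "strong" and "consistent" refinements defined above, which can tighten the bound further, are deliberately not implemented here, and the example graph is ours rather than the one of figure [ fig : strong ].

```python
import itertools
import networkx as nx

# Example signed wiring diagram (ours): edge attribute "sign" is +1 or -1.
G = nx.DiGraph()
G.add_edges_from([(1, 2, {"sign": +1}), (2, 3, {"sign": -1}),
                  (3, 1, {"sign": -1}), (3, 3, {"sign": +1}),
                  (2, 4, {"sign": +1}), (4, 2, {"sign": +1})])

def positive_loops(graph):
    """Directed cycles whose product of edge signs is +1 (includes self-loops)."""
    loops = []
    for cycle in nx.simple_cycles(graph):
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        sign = 1
        for u, v in edges:
            sign *= graph[u][v]["sign"]
        if sign > 0:
            loops.append(cycle)
    return loops

loops = positive_loops(G)
nodes = list(G.nodes)
best = nodes
for r in range(len(nodes) + 1):                  # brute-force minimum hitting set
    for S in itertools.combinations(nodes, r):
        if all(set(S) & set(loop) for loop in loops):
            best = list(S)
            break
    else:
        continue
    break

print("positive feedback loops:", loops)
print("hitting set S:", best, "-> at most", 2 ** len(best), "steady states")
```

The brute-force hitting-set search is exponential in the number of nodes and is only meant for small diagrams; for larger networks the maximal-independent-set and clique-enumeration algorithms cited in the text are the appropriate tools.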
with these definitions we have the following theorem that gives an upper bound on the number of steady states using topological features of the wiring diagram ( see appendix b for the proof ) .[ thm : badv ] let be the wiring diagram of an and - not network , and suppose dominates .then the number of steady states is at most .it is not difficult to see that the bound given by theorem [ thm : strong ] is greater than or equal than the bound given by theorem [ thm : badv ] .the next example shows that the inequality is in some cases strict .[ eg : completion ] consider the bn given by + its wiring diagram is shown in figure [ fig : completion ] .it is easy to see that intersects all strong positive feedback loops .then , theorem [ thm : strong ] gives the upper bound . on the other hand ,since dominates the wiring diagram , theorem [ thm : badv ] gives the upper bound 2 .that is , theorem [ thm : badv ] gave a better upper bound on the number of steady states .notice that in this case the actual number of steady states is 2 , namely , and .one might argue that having better results for and - not networks is not enough to justify their use .after all , since we are considering a smaller family of boolean networks we should of course obtain stronger results .however , the combination of theorem [ thm : main ] and results about and - not networks automatically generates theorems for all boolean networks .furthermore , such combination can in some cases provide stronger results .this deserves further explanation which is illustrated in figure [ fig : idea ] .consider a theorem about boolean networks that gives us information about certain dynamical properties , `` thm . '' . on the other hand , consider a similar theorem about and - not networks , `` thm. '' .then , given a boolean network , we have two choices , we can apply thm . to ; or , we can use theorem [ thm : main ] to find the and - not representation of , then apply , and then use theorem [ thm : main ] to obtain information about the original boolean network . in section [ sec - bio ]we use a published boolean model to show that the latter can give stronger results .for example , combining theorem [ thm : main ] and [ thm : badv ] we obtain the following theorem .[ thm : bnbound ] let be any boolean network and suppose that dominates the wiring diagram of its and - not representation .then , has at most steady states .we now show that this theorem can in fact provide a better upper bound for the number of steady states .we apply our results to the bn model proposed in for th - cell differentiation .the model is a bn in 23 variables , .below is the list of boolean functions .the wiring diagram is shown in figure [ fig : th ] . using our algorithms we obtain the and - not network , , shown in figure [ fig : than ] .it turns out that the set dominates the wiring diagram of ( see appendix c for details ) .then , by theorem [ thm : bnbound ] , the number of steady states of is at most . on the other hand , all previous results about steady states ( e.g. )give 8 as the upper bound .that is , using the and - not representation can provide a better upper bound , even for general boolean networks .the actual number of steady states of the model is 3 ( see for details ) .the results presented in this paper , together with other results in the literature , support that the family of and - not networks are general enough for modeling and simple enough for theoretical analysis . 
given any finite dynamical system , it is possible to create an and - not network such that they have similar dynamical properties .this has two implications : first , this means that using and - not networks in modeling does not pose any technical restriction on the type of interactions one can model .second , every result about and - not networks can be applied to general boolean networks , which can give better results ( e.g. theorem [ thm : bnbound ] ) .one potential drawback for this framework is that the and - not representation can have more nodes .however , for networks that arise from modeling biological systems , this increase in the number of nodes is modest ( section [ sec - growth ] ) .other advantages of using and - not networks are the following : first , all information about the network is actually contained in the network s wiring diagram .specifically , there is a one - to - one correspondence between and - not networks and graphs , so that the network can be reconstructed unambiguously from the wiring diagram . in the authors followed a similar approach to successfully study cascading effects .second , due to this correspondence , we can state all results about and - not networks using wiring diagrams only .this means that questions about and - not networks can be reformulated as questions about graphs ; then , one can use tools from graph theory and combinatorics to study them ( e.g. antichains , posets , inclusion - exclusion principle , independent sets ) .this deserves further investigation .finally , we point out that and - not networks are special cases of so - called _ nested canalyzing _ boolean networks . these were first introduced in as good candidates for models with biologically meaningful " regulatory rules , and have since been studied extensively . in concept was generalized to multi - state models , and it was shown there that the large majority of regulatory rules that appear in published models of biological networks are of this form .it was shown furthermore that nested canalyzing networks have dynamic properties one would expect to find in biological networks , such as short limit cycles and a small number of attractors .thus , the results in the present paper imply that in order to study the steady state behavior of general network models , one can focus on the very restrictive class of nested canalyzing networks , instantiated as and - not networks and make use of their very special properties .we describe here the details of the study to determine how many nodes are added by the construction of the and - not representation . to mimic wiring diagrams coming from biological systems ,the edges followed a power law distribution .more precisely , given fixed and a parameter , the probability for a node to have nodes is ( up to a normalization factor ) .for example , if , the probabilities of having , , and 4 nodes are , , and , respectively , where so that . also , to mimic biological regulation , we restricted our analysis to boolean functions that admitted a sign assignment for the edges .these boolean functions are called unate , biologically meaningful and regulatory functions . denote with the average number of extra nodes introduced by a boolean function in variables .then , a bn that follows the distribution mentioned above will have , on average , extra nodes .now , we need to estimate .consider a boolean function , , that depends on variables .for there are 2 functions , and and we do not need to introduce any new nodes ; then . 
for there are 8 functions and they are of the form or , where or . for functions of the form we do not introduce any new nodes , and for functions of the form we can use proposition [ prop : andor ] to transform to an and - not function , so we do not introduce new nodes either. then . for , there are 72 functions .an exhaustive - search analysis shows that of those 72 boolean functions , 16 introduce 0 nodes , 48 introduce 1 node , and 8 introduce 3 nodes ; then the average number of extra nodes in this case is . for , there are 1824 boolean functions .an exhaustive - search analysis shows that of those 1824 functions , 32 introduce 0 nodes , 320 introduce 1 node , 480 introduce 2 nodes , 960 introduce 3 nodes and 32 introduce 4 nodes ; thus the average number of extra nodes in this case is . for , there are 220608 functions and an exhaustive - search analysis shows that . for are approximately functions and an exhaustive - search analysis would be unfeasible. however , we have the following result . * theorem a.1 . * _ the average number of extra nodes for a unate function of variables is at most ; that is , . where is the binomial coefficient and is the floor function ._ without loss of generality we assume the cnf of the boolean function has no negative signs .let be the cnf , where has the form .for each , define .now , if there are , such that , then we can simplify to ( e.g. ) .that is , we can simplify the cnf so that for all .thus , is a family of subsets of such that no one is contained in the other .theorem states that .this implies that for any unate function in variables , we need at most extra nodes to obtain the and - not representation .therefore , .it is important to mention that the exhaustive - search analysis done for suggests that is actually much smaller than .in fact , we did a statistical analysis for using a total of 5000000 boolean functions chosen at random ( 1000000 for each ) .the analysis shows the following approximations : , , , , .table [ table : k ] shows a summary of our analysis for .for example , if , then the fractions of functions with 1 , 2 , 3 and 4 variables are on average , , and , respectively .then , the average number of extra nodes is : we prove theorem [ thm : strong ] and [ thm : badv ] . as mentioned in section[ sec - theory ] , theorem [ thm : strong ] is an application of ( * ? ? ?* theorem 3.2 ) to the family of and - not networks .first we need the following definition .let be a boolean network and consider .then , is the graph with vertices and the following edges : + if and , or if and ; + if and , or if and ; + where is the vector given by ( is the kronecker delta ) .notice that if or is an edge in , then changing the -th coordinate of produces a change in .notice that for and - not networks we have that for all ; in fact , this is true for more general networks .* theorem b.1. * _ let be a boolean network and suppose and are steady states of . then , there there exists such that has a positive feedback loop with vertices in the set . + _ we now prove theorem [ thm : strong ] .let defined by .we will show that if are steady states of , then .consider steady states of ; then , by theorem b.1 ., there exists such that has a positive feedback loop , , with vertices in the set .we claim that is a strong positive feedback loop of . by contradiction , suppose there is and such that and are edges in but not in .then , has edges of the form and where . on the other hand , since , we have that and .we have two cases or . 
in the case we obtain that for all values of .in particular , can not have an edge of the form with ; this is a contradiction . in the case we obtain that for all values of .in particular , can not have an edge of the form with ; this is a contradiction as well .therefore , is strong . since is a strong positive feedback loop in , must intersect . since has all its vertices in the set , intersects the set .therefore .it follows that the restriction of to the set of steady states is an injective function. therefore , .it is important to mention that theorem [ thm : strong ] was also proven in using different techniques .+ we now prove theorem [ thm : badv ] .let be an and - not network with wiring diagram .let be a positive feedback loop that is strong and inconsistent .then , there is a vertex such that there is a positive path of the form from to and a negative path of the form , from to such that , are not edges in and .let be the graph obtained by adding to all edges of the form and where does not intersect .denote by the and - not network associated to .we claim that the steady states of and are the same .we prove this by induction on the number of extra edges .suppose that and only differ in the edge , then , by definition we must also have a path .suppose that , we need to show that for all .since and only differ in the edge we have for , and .then , for .it remains to show that .consider first the case , then , and for some .if , we have that the edge is in and ; then , .if , then which implies that ( because of the edge ) ; similarly , we obtain that . then , .that is , .now consider the case .since , we have .a similar argument shows that if , then .the proof for when and only differ in the edge is analogous . by inductionwe obtain that and the and - not network obtained by a completion of have the same steady states .now , we claim that intersects all strong positive feedback loops of .let be a strong positive feedback loop of .then we have two cases : is in or it is not . consider the case .then , is a strong positive feedback loop in .if is consistent in , then it intersects . if is inconsistent ( and strong ) in , then it also intersects .now consider the case .then , at least one edge of is of the form or for some strong and inconsistent that does not intersect .then , and intersects . in any casewe obtain that intersects all strong positive feedback loops of .then , the number of steady states of , and hence , is at most .we first analyze the original bn using previous results . in , the authors showed that the positive feedback loops of the bn are : we will use the following two theorems ( proven in , respectively ) that give upper bounds on the number of steady states .[ thm : pfv2 ] let be the wiring diagram of a bn network and suppose is a set of vertices that intersects all positive feedback loops in . then , the number of steady states is at most .[ thm : fpfv ] let be the wiring diagram of a bn network and suppose is a set of vertices that intersects all functional positive feedback loops in . then , the number of steady states is at most .it is easy to see that all positive feedback loops intersect the set .therefore , theorem [ thm : pfv2 ] gives the upper bound .also , it is possible to show that the functional positive feedback loops are , , and ( e.g. 
using the ginsim software ) .therefore , theorem [ thm : fpfv ] gives the upper bound 8 as well .we now analyze the and - not network using our results .the positive feedback loops of the and - not network in figure [ fig : than ] are the following ( new nodes are in bold ) . those feedback loops that contain 4 and 13 are inconsistent because of the paths , ; they are also strong .all other positive feedback loops are consistent and intersect .that is , intersects all consistent positive feedback loops , and for each positive feedback loop that is inconsistent and strong , contains .hence , dominates the wiring diagram of .therefore , theorem [ thm : bnbound ] gives the better upper bound on the number of steady states of .the research was funded by nsf grants cmmi-0908201 and dms-1062878 .d. m. wittmann , c. marr , and f. j. theis , `` biologically meaningful update rules increase the critical connectivity of generalized kauffman networks , '' _ journal of theoretical biology _ , vol .266 , no . 3 , pp . 436 448 , 2010 .b. gummow , j. sheys , v. cancelli , and g. hammer , `` reciprocal regulation of a glucocorticoid receptor - steroidogenic factor-1 transcription complex on the dax-1 promoter by glucocorticoids and adrenocorticotropic hormone in the adrenal cortex , '' _ mol .endocrinology _ ,20 , no . 11 , pp . 27112723 , 2006 .m. merika and s. orkin , `` functional synergy and physical interactions of the erythroid transcription factor gata-1 with the krppel family proteins sp1 and eklf , '' _ mol ._ , vol . 15 , no . 5 , pp .24372447 , 1995 .n. du , b. wu , l. xu , b. wang , and p. xin , `` parallel algorithm for enumerating maximal cliques in complex network , '' in _ mining complex data _ ( d. zighed , s. tsumoto , z. ras , and h. hacid , eds . ) , vol .165 of _ studies in computational intelligence _ , pp .207221 , berlin / heidelberg : springer , 2009 .f. kuhn , t. moscibroda , t. nieberg , and r. wattenhofer , `` fast deterministic distributed maximal independent set computation on growth - bounded graphs , '' in _ distributed computing _ ( p. fraigniaud , ed . ) , vol .3724 of _ lecture notes in computer science _ , pp .273287 , berlin / heidelberg : springer , 2005 .k. makino and t. uno , `` new algorithms for enumerating all maximal cliques , '' in _ algorithm theory - swat 2004 _ ( t. hagerup and j. katajainen , eds . ) , vol .3111 of _ lecture notes in computer science _ , pp .260272 , berlin / heidelberg : springer , 2004 .m. schmidt , n. samatova , k. thomas , and b. park , `` a scalable , parallel algorithm for maximal clique enumeration , '' _ journal of parallel and distributed computing _ , vol .69 , no . 4 , pp .417 428 , 2009 .j. schneider and r. wattenhofer , `` a log - star distributed maximal independent set algorithm for growth - bounded graphs , '' in _ proceedings of the twenty - seventh acm symposium on principles of distributed computing _ , podc 08 , ( new york , ny , usa ) , pp . 3544 , acm , 2008 . l. wan , b. wu , n. du , q. ye , and p. chen , `` a new algorithm for enumerating all maximal cliques in complex network , '' in _ advanced data mining and applications _( x. li , o. zaiiane , and z. li , eds . ) , vol .4093 of _ lecture notes in computer science _ , pp .606617 , berlin / heidelberg : springer , 2006 .e. remy , p. ruet , l. mendoza , d. thieffry , and c. 
chaouiya , `` from logical regulatory graphs to standard petri nets : dynamical roles and functionality of feedback circuits , '' _ in transactions on computation systems biology vii ( tcsb ) _ , pp .5572 , 2006 .l. mendoza and i. xenarios , `` a method for the generation of standardized qualitative dynamical systems of regulatory networks , '' _ theoretical biology and medical modelling _, vol . 3 , no . 1 , p. 13, 2006 .o. sahin , h. frohlich , c. lobke , u. korf , s. burmester , m. majety , j. mattern , i. schupp , c. chaouiya , d. thieffry , a. poustka , s. wiemann , t. beissbarth , and d. arlt , `` modeling erbb receptor - regulated g1/s transition to find novel targets for de novo trastuzumab resistance , '' _ bmc systems biology _ , vol .3 , no . 1 , p. 1 , 2009 .s. klamt , j. saez - rodriguez , j. lindquist , l. simeoni , and e. gilles , `` a methodology for the structural and functional analysis of signaling and regulatory networks . , '' _ bmc bioinformatics _ ,vol . 7 , no . 56 , 2006 .e. remy , p. ruet , and d. thieffry , `` graphic requirements for multistability and attractive cycles in a boolean dynamical framework , '' _ advances in applied mathematics _41 , no . 3 , pp . 335 350 , 2008 .a. gonzalez , a. naldi , l. snchez , d.thieffry , and c. chaouiya , `` ginsim : a software suite for the qualitative modelling , simulation and analysis of regulatory networks , '' _biosystems _ , vol .84 , no . 2 , pp .91100 , 2006 .
|
finite dynamical systems ( e.g. boolean networks and logical models ) have been used in modeling biological systems to focus attention on the qualitative features of the system , such as the wiring diagram . since the analysis of such systems is hard , it is necessary to focus on subclasses that have the properties of being general enough for modeling and simple enough for theoretical analysis . in this paper we propose the class of and - not networks for modeling biological systems and show that it provides several advantages . some of the advantages include : any finite dynamical system can be written as an and - not network with similar dynamical properties . there is a one - to - one correspondence between and - not networks , their wiring diagrams , and their dynamics . results about and - not networks can be stated at the wiring diagram level without losing any information . results about and - not networks are applicable to any boolean network . we apply our results to a boolean model of th - cell differentiation .
|
localized modes on nonlinear lattices have been a topic of wide theoretical and experimental investigation in a wide range of areas over the past two decades .this can be seen , e.g. , in the recent general review , as well as inferred from the topical reviews in nonlinear optics , atomic physics and biophysics where relevant discussions have been given of the theory and corresponding applications .one of the areas in which the theoretical analysis has been especially successful in describing experimental data and providing insights has been that of granular crystals .these consist of closely - packed chains of elastically interacting particles , typically according to the so - called hertz contact law .the broad interest in this area has emerged due to the wealth of available material types / sizes ( for which the hertzian interactions are applicable ) and the ability to tune the dynamic response of the crystals to encompass linear , weakly nonlinear , and strongly nonlinear regimes .this type of flexibility renders these crystals perfect candidates for many engeenering applications , including shock and energy absorbing layers , actuating devices , and sound scramblers .it should also be noted that another aspect of such systems that is of particular appeal is their potential ( and controllable ) heterogeneity which gives rise to the potential not only for modified solitary wave excitations , but also for discrete breather ones .another motivation for looking at waves in such lattices stems from fpu type problems . in the prototypical fpu context, it has been rigorously proved that traveling waves exist which can be controllably approximated ( in the appropriate weakly supersonic limit ) by solitary waves of the korteweg - de vries equation . however , in more strongly nonlinear regimes , compact - like excitations have been argued to exist ( see also for breather type excitations ) and have even been computed numerically through iterative schemes , but have not been rigorously proved to exist in the general case . in the work of ,the special hertzian case was adapted appropriately to fit the assumptions of the variational - methods based proof of the traveling wave existence theorem of in order to establish these solutions .however the proof does not give information on the wave profile .our aim herein is to provide a reformulation and illustration of existence of `` bell - shaped '' traveling waves in generalized hertzian lattices .our work is based on the iterative schemes that have been previously presented in for the computation of traveling waves in such chains of the form : +^p - [ v_n - v_{n+1}]_+^p . \label{eqn1}\end{aligned}\ ] ] here denotes the displacement of the n - th bead from its equilibrium position .the special case of hertzian contacts is for , but we consider here the general case of nonlinear interactions with . notice that the `` + '' subscript in the equations indicates that that the quantity in the bracket is only evaluated if positive , while it is set to , if negative ( reflecting in the latter case the absence of contact between the beads ) .the construction of the traveling waves and the derivation of their monotonicity properties will be based on the strain variant of the equation for such that : +^p - 2 [ u_n]_+^p + [ u_{n-1}]_+^p , \label{eqn2}\end{aligned}\ ] ] our presentation will proceed as follows . 
in section 2, we will give a preliminary mathematical formulation to the problem , briefly illustrate its numerical solution and some of its consequences .then , we will proceed in section 3 to state and prove our main result .some technical aspects of the problem will be relegated to the appendices of section 4 .when seeking traveling wave solutions of the form , we are led to the advance - delay equation ( setting ) where is a smooth and positive function , with the desired monotonicity involving decay in and increase in .we introduce the fourier transform and its inverse via as is well - known , the second derivative operator has a simple representation via the fourier transform , namely for every , we may define and the sobolev spaces via we will also consider the operator on the space of functions . using fourier transform , we may write in other words , is given by the symbol , that is we may rewrite the equation in the form (x)\ ] ] taking fourier transform on both sides of this ( and using ) , allows us to write + or equivalently , taking , .\label{numerics}\end{aligned}\ ] ] in other words , we have introduced the convolution operator with kernel .it is easy to compute that or note that we have the following formula for the convolution for reasons of completeness and in order to appreciate the form of ( suitably normalized ) solutions of eq .( [ numerics ] ) , in fig .[ fig1 ] , we used this equation as a numerical scheme and proceed to iterate it until convergence .the figure illustrates the converged profile of the solution and its corresponding momentum ( for ) .the results of these computations are shown for different values of ( in order to yield a sense of the of the solution , namely for ( the hertzian case ) , and ( the fpu - motivated cases , in that they are the purely nonlinear analogs of - and -fpu respectively ) and finally ( as a large- case representative ) .the figure shows the solutions profile and corresponding momenta , as well as the semi - logarithmic form of the profile , so as to clearly illustrate the doubly exponential nature of the decay ( see below ) .notice that as increases , the decay becomes increasingly steeper. ) solution profile of the iterative scheme , as renormalized for use in eq .( [ eqn2 ] ) .the solid ( blue ) line illustrates the spatial form of the solution and the dashed ( red ) line the corresponding momentum ( for speed ) .the circles and stars denote respectively the ordinates of the lattice nodes ( extracted for use in eq .( [ eqn2 ] ) .the inset illustrates the profile in a semilog to highlight the doubly exponential nature of the decay ( notice also the steepening as increases ) .the top left panel is for , the top right for , the bottom left for and finally the bottom right for . , title="fig:",width=226,height=226 ] ) solution profile of the iterative scheme , as renormalized for use in eq .( [ eqn2 ] ) .the solid ( blue ) line illustrates the spatial form of the solution and the dashed ( red ) line the corresponding momentum ( for speed ) .the circles and stars denote respectively the ordinates of the lattice nodes ( extracted for use in eq .( [ eqn2 ] ) .the inset illustrates the profile in a semilog to highlight the doubly exponential nature of the decay ( notice also the steepening as increases ) .the top left panel is for , the top right for , the bottom left for and finally the bottom right for . 
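The iteration just described is short enough to reproduce in a few lines. The sketch below applies the convolution operator spectrally (its Fourier symbol is the squared sinc function, i.e. the tent kernel of unit half-width) and stabilizes the fixed-point map by renormalizing the maximum to one at every step; this particular normalization, the grid, the iteration count and the Gaussian initial guess are choices made for this sketch rather than details taken from the text, and the clipping to nonnegative values mirrors the [.]_+ bracket of the model.

```python
import numpy as np

p = 1.5                                     # interaction exponent; any p > 1
N, half_width = 2048, 20.0                  # grid choices for this sketch
x = np.linspace(-half_width, half_width, N, endpoint=False)
xi = np.fft.fftfreq(N, d=x[1] - x[0])
symbol = np.sinc(xi) ** 2                   # np.sinc(t) = sin(pi t)/(pi t)

def K(g):
    """Convolution with the tent kernel, applied through its Fourier symbol."""
    return np.maximum(np.fft.ifft(symbol * np.fft.fft(g)).real, 0.0)

u = np.exp(-x ** 2)                         # positive, even, bell-shaped initial guess
for _ in range(200):
    v = K(u ** p)
    m = v.max()
    u = v / m                               # keep max(u) = 1 at every step

phi = m ** (-1.0 / (p - 1.0)) * u           # rescale to a unit-speed wave profile
residual = np.max(np.abs(phi - K(phi ** p)))
print(f"p = {p}: scaling factor m = {m:.6f}, fixed-point residual = {residual:.2e}")
```

At a fixed point of the normalized map one has m u = K(u^p) with max u = 1, so a profile solving the unit-speed equation is recovered by the rescaling u -> m^(-1/(p-1)) u, which is what the last lines implement; the residual printed at the end is a direct check of this.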
to corroborate the exact nature of such traveling wave solutions , once the solution was obtained , the `` lattice ordinates '' of both the solution and its time derivative were extracted and inserted as initial conditions for the dynamical evolution of eq . ( [ eqn2 ] ) . the results of the relevant time integration ( using an explicit fourth - order runge - kutta scheme ) are shown in fig . it can be straightforwardly observed that excellent agreement is obtained with the expectation of a genuinely traveling ( without radiation ) solution with a speed of , so that its center of mass moves according to ( the solid line in the figure ) . this confirms the usefulness of the method ( independently of the nonlinearity exponent , as long as ) in producing accurate traveling solutions for this dynamical system .
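The direct check described here is also easy to set up independently. The sketch below integrates the strain equations of eq . ( [ eqn2 ] ) with an adaptive Runge-Kutta routine; rather than seeding from the converged profile, it simply launches a localized compression and tracks the position of the strain maximum, so the emerging solitary wave and its steady propagation can be observed. The chain length, initial data, boundary handling and tolerances are illustrative choices for this sketch and do not reproduce the precise experiment of the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

p, N = 1.5, 300                                   # exponent and number of beads (illustrative)

def rhs(t, y):
    """Strain form: u_n'' = [u_{n+1}]_+^p - 2 [u_n]_+^p + [u_{n-1}]_+^p."""
    u, v = y[:N], y[N:]
    f = np.maximum(u, 0.0) ** p                   # the [.]_+^p bracket
    acc = np.empty(N)
    acc[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
    acc[0] = f[1] - f[0]                          # crude handling of the chain ends
    acc[-1] = f[-2] - f[-1]
    return np.concatenate([v, acc])

u0 = np.zeros(N)
u0[10:14] = 1.0                                   # localized initial compression
y0 = np.concatenate([u0, np.zeros(N)])
sol = solve_ivp(rhs, (0.0, 100.0), y0, method="RK45",
                rtol=1e-9, atol=1e-11, dense_output=True)

for t in (0.0, 50.0, 100.0):
    u = sol.sol(t)[:N]
    print(f"t = {t:6.1f}: strain maximum {u.max():.3f} at site {int(np.argmax(u))}")
```

The site of the strain maximum should advance essentially linearly in time once the leading pulse has detached from the initial disturbance, consistent with the constant-speed propagation seen in the space-time plots discussed above.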
[ figure caption : with and as seeded from the iteration scheme's convergent profile . the solid line in each case illustrates the trajectory of ( for ) to which the solutions correspond . one can notice for all values of ( : top left ; : top right ; bottom left and bottom right ) the agreement with the expectation of a genuinely traveling ( non - radiating ) waveform of . ] if the convergence to such a nontrivial profile is established ( as we will establish in section 3 , with the proper monotonicity properties , based on our modified variational formulation ) , there is an important immediate conclusion about the decay properties of such a profile . in particular , as was originally discussed in and then more rigorously considered in ( see also ) , the solutions of fig . [ fig1 ] feature a doubly exponential decay . this very fast decay ( and nearly compact shape ) of the pulses can be clearly discerned in the semi - logarithmic plots of the figure . as a slight aside to the present considerations , we should mention that a physically relevant variant of the problem consists of the presence of a finite precompression force at the end of the chain . in that case , the model of interest becomes ( in the strain formulation and with ) \ddot{u}_n = [ \delta_0 + u_{n+1}]_+^p - 2 [ \delta_0 + u_n]_+^p + [ \delta_0 + u_{n-1}]_+^p , \label{eqn2_mod} the case of constitutes the so - called sonic vacuum , while that of finite features a finite speed of sound ( and allows the existence and propagation of linear spectrum excitations ) . it is worth noticing that for , the above decay estimate is modified : namely , the solutions are no longer doubly exponentially localized but rather feature an exponential tail ( and are progressively closer to regular solitary waves ) . this can be thought of as a `` compacton to soliton '' transition that is worth exploring further ( although the case of will not be considered further herein ) . we now turn to several definitions , which will be useful in the sequel . our space of test functions will be the following .
for - an open set , let be the set of all functions with compact support , contained inside .we equip this with the usual topology of a frechet space , generated by a family of seminorms , where is some fixed nested family of compact sets , so that .the distributions over this space of functions , which we denote by , is its dual space , namely all continuous linear functionals over .the derivatives of such distributions are defined in the usual way .one may also define a convolution of a distribution with a given function , by .we say that two distributions are in the relation in sense , if for all , we have .in particular , [ defi:1 ] we say that a distribution is non - increasing ( non - decreasing ) over a set , if ( ) in sense . of course , if happens to have a locally integrable derivative on an interval , then the notion of non - increasing function coincides with the standard ( pointwise ) notion by the fundamental theorem of calculus .more generally , we have the following [ le : p ] suppose that is a locally integrable function in and it satisfies ( ) in sense .then , is almost everywhere ( a.e . )non - decreasing ( non - increasing , respectively ) function on .that is , for almost all pairs , ( respectively ) .it is well - known by the lebesgue differentiation theorem that for a locally integrable function , one has all such points are called lebesgue points for . denote this full measure set by .we will show that for all , we have .indeed , let be so that .define a function , clearly is not smooth , but is continuous and it can be approximated well by test functions .moreover on , on and zero otherwise . since ( and these are well - defined quantities ) , we obtain this is true for all , which are sufficiently small . thus , dividing by and taking limit as ( and taking into account that both are lebesgue points ) , we conclude that .we find the following trick useful , which allows us to reduce non - increasing / non - decreasing distrubutions to non - increasing / non - decreasing functions .more precisely , let us fix a positive even function , so that and .let and define .the following lemma has a standard proof .[ le : kl ] let be a non - increasing ( non - decreasing ) distribution . then for every , is a function , which is non - increasing ( non - decreasing respectively ) .moreover in the sense of distributions .next [ defi:4 ] we say that a , if is non - decreasing in and is non - increasing in . in the sequel , we need the following technical result . [ le:3 ] suppose that is an even distribution , so that is non - increasing in and non - decreasing in .then is non - increasing in and non - decreasing in .assume that is non - increasing in for some .then is non - increasing in .assume that is non - decreasing in .then , is non - decreasing in . by lemma [ le : kl ] , it suffices to consider functions instead of ditributions with the said properties .we have the following computation for the derivative of the function which follows by differentiating .it is immediate from that the claims for the functions and hold true .regarding , it is clear that is non - increasing in and non - decreasing in .thus , we need to show that for , and for , . we only verify this for , since the other inequality follows in a similar manner . 
indeed , using , since is non - increasing in , back to the expression for , this implies , which was the claim .we will also need the following multiplier in our considerations it is actually easy to see that since }}(\xi)= { \frac{\sin(\pi \xi)}{\pi \xi}} ] , there is with in sense .equivalently , we may define define and finally .due to our requirement for bell - shaped test functions in , we need to impose extra restrictions on the function . to that end , note that is an open set and fix an interval \subset { \omega} ] .our goal will be to show that such . to that end, we will first assume that they do exist and then , we will be able to derive an euler - lagrange equation on them , which will then lead to a contradiction .take such an interval , say \subset { \omega}^c ] sense , i.e. is constant on connected components of . to prove that, assume the opposite , namely that is not a constant in .thus ( since we know ) , there exists a test function , so that .for some small , the set will be nonempty and open .take an interval , so that .take an arbitrary function , so that for .it follows that and hence .also , whence for all , with .one can now extend to hold for all .hence , in contradiction with . now that we have established that is a.e .constant on any non - trivial interval , we will derive the euler - lagrange equation for on it .we will consider first the case , the other case will be considered separately .+ * case i : * + to fix the ideas , we consider first the case when the interval ] is not isolated from ( i.e. there is no , so that ) is treated in the same way .indeed , this was needed only in the very last step , in the construction of the function . but clearly , one can carry out a similar construction of , if one has a sequence of intervals , so that . similarly , for the proof of , one needs a sequence of intervals , so that . finally , it remains to observe that every non - trivial interval of is contained in \subseteq { \omega}^c$ ] with the property and hence , we can carry the constructions of and hence the validity of and follows .we then derive on every such interval . + * case ii : * + this case is similar to the previous one .we again assume that there exists , so that , the general case being reduced to this one by arguments similar to those in the case .again , is even , we have that on the interval and is non - increasing in .we will show that for every , we have let us first prove that assuming and , one must have .indeed , if we apply for and for , we see that . again , if we assume that for all , we again conclude by the continuity of that .if one has for some , we have that ( since ) and hence , can not be non - increasing function ( as in case i ) , a contradiction .thus in .thus , it remains to show and .fix and let . for , select and is , even and increasing in and decreasing in .clearly , for , is admissible for , for some small . 
thus , applying for this and passing to appropriate limits as , we obtain , as above for , we construct to be an even function , so that now , we require that is decreasing in and it is increasing from .note that is still acceptable as a perturbation - in the sense that is non - increasing in for all small enough ( for this , recall that and hence , there exists , so that in sense ) .we have again thus , we have established and and thus the euler - lagrange equation we will now show that if an equation like holds in a non - trivial interval , say and is a bell - shaped , locally integrable function , which is constant on , then a.e . on ( which would be a contradiction ) . indeed , is in fact a differentiable function on , which is non - increasing on , according to lemma [ le:3 ] .thus , taking a derivative of ( and taking into account that on ) leads to for all .if , we see that since is non - increasing in ( by lemma [ le : p ] ) , it follows that is non - increasing and continuous . by, it follows that for .hence , we must have a.e . in every interval in the form ( for all ) . by iterating this argument in all , a contradiction .if , we can again argue as in lemma [ le:3 ] to establish that again in .thus , we can not have non - trivial intervals and hence consists of isolated points only . before deriving the euler - lagrange equation forthe maximizer , let us recapitulate what we have shown so far for .we managed to show that is a dense open set , so that consists of isolated points only .finally , on , is a continuous function , and the equation holds on every interval .we will now show that holds for almost all .first of all , recall that + is locally integrable function ( as a sum of an function and functions ) and hence , almost all points are lebesgue points for it .let , so that is a lebesgue point for .we have shown that is an isolated point of , which implies the existence of intervals inside , which approximate .that is , there are , , so that and .in addition , we can clearly select these intervals to be very short , namely we require .construct now a sequence of even test functions , given by where is strictly increasing in and strictly decreasing in .we have already shown that functions of the form will be non - increasing in and it will otherwise satisfy all the restrictions of the optimization problem , provided .thus , accoridng to , we have and and hence dividing both sides by , and taking as ( noting that is a lebesgue point ) , we get =0.\ ] ] thus , we need to show that the limit above exists and it is equal to zero . to that end , note that and since is non - increasing a.e . in . the last inequality , combined with shows that implies .hence , for all lebesgue points of , we have .thus , from , which is satisfied and also in sense . from it , we learn that is and consequently , by iterating this argument , function .recall also that by construction and there exists , so that , see .we will now take several consecutive subsequences of , in order to ensure that the limit satisfies . first , take , so that .second , out of this constructed sequence , take a subsequence , say , so that in a weak sense , for some .this is possible , by the sequential compactness of the unit ball in the weak topology ] . by the uniqueness of weak limits ( by eventually taking further subsequence ) ,we also get in weak sense . 
we also have in the weak topology , since for every test function , we have by the self - adjointness of and , thirdly , we show that the limiting function is non - zero . to that end, it will suffice to establish that assuming that is false , we will reach a contradiction . indeed , let be a sequence so that .thus , we now use a refined version of the gagliardo - nirenberg estimate that we have used before .\|_{l^2}^2 \end{aligned}\ ] ] and hence \|_{l^2 } \leq \|q[v_{{\delta}_j}\chi_{(-{\delta}_j^{-1}-1 , { \delta}_j^{-1}+1)}]\|_{\dot{w}^{1,q}}^{1/q-1/2 } \|q [ v_{{\delta}_j}\chi_{(-{\delta}_j^{-1}-1 , { \delta}_j^{-1}+1)}]]\|_{l^q}^{3/2 - 1/q}. \end{aligned}\ ] ] while a simple differentiation shows that on one hand , \|_{\dot{w}^{1,q}}\leq 2 \|v_{{\delta}_j}\|_{l^q}=2,\ ] ] we also have by cauchy - schwartz \|_{l^q}^q & \leq & \int_{-\infty}^{\infty } ( \int_{x-1/2}^{x+1/2 } v_{{\delta}_j}(y)\chi_{(-{\delta}_j^{-1}-1 , { \delta}_j^{-1}+1)}(y ) dy)^q dx \leq \\ & \leq & \int_{-{\delta}_j^{-1}-1}^{{\delta}_j^{-1}+1 } v_{{\delta}_j}^q ( y ) dy \leq 2 \|v_{{\delta}_j}\|_{l^q(-{\delta}_j^{-1 } , { \delta}_j^{-1})}^q \to 0 . \end{aligned}\ ] ] the last inequality here follows by the fact that is non - increasing in and therefore , if , which we have assumed anyway .thus , we will have proved that which is in contradiction with .thus , we have established .we are now ready to take a limit as in .indeed , take for .fix a test function .there exists , so that for , . thus , for , we get ] take a limit as . by our constructions, we have that , since , by the weak convergence .we also have .thus , we have established the desired identity valid for all . by the symmetry ,it is also valid for .it is clear that is now infinitely smooth , it is easy to conclude that is smooth , which in turn implies that etc . ]function on .recall though , that for theorem [ theo:1 ] we needed to solve .one can easily construct , based on the solution of .more precisely , if we take , then will satisfy .theorem [ theo:1 ] is proved .we quickly indicate how our ideas can be turned into a new proof of the friesecke - wattis theorem .specifically , as we saw in the previous section , it is clear that if we are just interested in the existence of traveling waves for ( but not in bell - shaped solutions ) , it is a good idea to consider following constrained maximization problem ( compare to ) first off , the arguments in section [ sec:3.1 ] apply unchanged ( by just skipping the bell - shapedness of ) to prove that has a maximizer , say . following the argument of section [ sec:3.2 ] andmore specifically , the following identities which were established there , it follows that for all test functions , due to the nature of ] .it follows that satisfies this of course produces a family , which easily can be shown to converge ] ( after an eventual subsequence ) to a , which solves setting again provides a solution to as is required by .porter , c. daraio , e.b .herbold , i. szelengowicz and p.g .kevrekidis , phys .e * 77 * , 015601 ( 2008 ) ; m.a .porter , c. daraio , i. szelengowicz , e.b .herbold and p.g .kevrekidis , phys .d * 238 * , 666 ( 2009 ) .n. boechler , g. theocharis , s. job , p. g. kevrekidis , m.a .porter , and c. daraio phys .* 104 * , 244302 ( 2010 ) ; g. theocharis , n. boechler , p.g .kevrekidis , s. job , m.a .porter , and c. daraio phys .e * 82 * , 056604 ( 2010 ) .
|
we consider the question of existence of `` bell - shaped '' ( i.e. non - increasing for and non - decreasing for ) traveling waves for the strain variable of the generalized hertzian model describing , in the special case of a exponent , the dynamics of a granular chain . the proof of existence of such waves is based on the english and pego [ proceedings of the ams * 133 * , 1763 ( 2005 ) ] formulation of the problem . more specifically , we construct an appropriate energy functional , for which we show that the constrained minimization problem over bell - shaped entries has a solution . we also provide an alternative proof of the friesecke - wattis result [ comm . math . phys * 161 * , 394 ( 1994 ) ] , by using the same approach ( but where the minimization is not constrained over bell - shaped curves ) . we briefly discuss and illustrate numerically the implications on the doubly exponential decay properties of the waves , as well as touch upon the modifications of these properties in the presence of a finite precompression force in the model .
|
being an inherent feature of a human , laziness was always that force which engendered invention of new devices , supposed to work instead of people . and in our epoch of vast technological progress , there are thousands of useful gadgets already existing .though recently the science is advancing with seven - league strides , there are still a great number of phenomena which are waiting for a better insight .one has to admit that none of the existing complex machines and powerful computers can substitute a single human brain . which means that we still do not draw close enough to clearing up a mystery of how this accumulation of grey matter really works . since the end of the last century, study of neural networks picks up speed . in order to describe its intricate behavior ,the brain is often represented as an ensemble of coupled nonlinear dynamical elements , capable of producing spikes and exchanging information between each other .such neural populations are usually spatially localized and contain both excitatory and inhibitory neurons .some researchers , starting from the simplest case of two interconnected neurons , show how more complicated dynamics emerges in larger sets .the others explore extremely complex network of subnetworks , focusing on the hierarchically clustered organization of interacting excitable elements .most studies base on the present oscillatory behavior of individual system elements , which then produces observable patterns due to collective synchronization .thus , for modeling a single neuron , phase oscillators are often used .for instance , to characterize mutual dynamics of cells in certain brain areas , responsible for giving the onset to parkinson s disease or epilepsy , a well - known kuramoto model is considered . here , we rely on the works by fitzhugh and nagumo _ et al._ who have shown that for describing the main characteristics of a neuron dynamics , it is sufficient to consider a 2-dimensional system .the latter is also widely used nowadays as one of the simplest models for examining brain dynamics and has been essentially studied in many papers ( see , for instance , and references therein ) .having an intention to move from simple to complex , we consider below a set of equations consisting only of two identical fitzhugh - nagumo subsystems ( see also ) .their interaction is described by a linear coupling term which includes delays ( and ) , accounted for the fact that the signal transmission between neurons is not instantaneous : here and are the phase coordinates for the first and the second subsystem respectively .the parameter determines whether the individual neuron is in the excitable regime or exhibits self - sustained periodic firing .the time scale parameter is chosen during the numerical simulations to be , which results in fast activator variables , and slow inhibitor variables , .for further simplicity , the coupling strength is also taken symmetric .as it was already mentioned , the dynamics of an isolated 2-dimensional fitzhugh - nagumo system is already well - investigated .its single fixed point is stable for and exhibits a supercritical hopf bifurcation when the excitability parameter crosses unity , which implies periodic spiking for .provided that , the system is excitable , namely , if a sufficient external impulse is added , it emits a spike and then rests again in the state . 
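As a concrete illustration of the model just described, the following minimal sketch integrates two delay-coupled FitzHugh-Nagumo units with an explicit Euler scheme and a ring of history values for the delayed activator. All numerical values here (eps = 0.01, a = 1.3 in the excitable regime, coupling C = 0.5, delays tau1 = 3.0 and tau2 = 1.0, time step dt = 0.001) are illustrative assumptions rather than the values used in the text, and the placement of the coupling in the activator equation is likewise an assumed standard form; an initial impulse is applied to the first unit only, via its history.

```python
import numpy as np

# Sketch: two delay-coupled FitzHugh-Nagumo units, explicit Euler with a
# history buffer for the delayed activator values (assumed parameters).
eps, a, C = 0.01, 1.3, 0.5
tau1, tau2, dt, T = 3.0, 1.0, 0.001, 200.0

n_steps = int(T / dt)
d1, d2 = int(tau1 / dt), int(tau2 / dt)        # delays measured in time steps
hist = max(d1, d2)

x_fp, y_fp = -a, -a + a**3 / 3.0               # stable fixed point of a single unit
x1 = np.full(n_steps + hist, x_fp); y1 = np.full(n_steps + hist, y_fp)
x2 = np.full(n_steps + hist, x_fp); y2 = np.full(n_steps + hist, y_fp)
x1[:hist] = 2.0                                # initial impulse applied to unit 1 only

for t in range(hist, n_steps + hist - 1):
    c1 = C * (x2[t - d1] - x1[t])              # unit 1 sees unit 2 delayed by tau1
    c2 = C * (x1[t - d2] - x2[t])              # unit 2 sees unit 1 delayed by tau2
    x1[t + 1] = x1[t] + (dt / eps) * (x1[t] - x1[t]**3 / 3.0 - y1[t] + c1)
    y1[t + 1] = y1[t] + dt * (x1[t] + a)
    x2[t + 1] = x2[t] + (dt / eps) * (x2[t] - x2[t]**3 / 3.0 - y2[t] + c2)
    y2[t + 1] = y2[t] + dt * (x2[t] + a)
```

With this setup the delayed impulse reaches the second unit after one delay and is echoed back after the other, which is the ping-pong mechanism described below.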
for our numerical simulations ,we take , so that the individual subsystems are in the excitable regime .the coupling term of the considered form is canceled for a fixed point orbit , thus , the 4-dimensional equilibrium , being existent for the uncoupled system , persists as well for the eq .( [ eq : fhn ] ) . changing the coupling strength or the delaysalso does not influence its stability , as it was recently shown .however , besides the stable fixed point solution , the system ( [ eq : fhn ] ) can also produce periodic oscillations .intuitively , this phenomenon can be explained as follows .one can perturb , for instance , the first neuron , so that it emits a spike . then , with the delay this perturbation reaches the second neuron , which provokes it to spike as well . again with the delay the second neuron `` informs '' the first one that it has been stimulated , which causes a new run of the cycle , and the process repeats ( see schematic representation in the fig [ fig : sch_lng](a ) ) .though , in the numerical simulations , starting from various initial conditions , we observed periodic solutions of _ two _ different types , which are referred to in the following as a `` long '' and a `` short '' cycle respectively .the former is of the period , while the latter has the period .+ again , intuitively , to obtain this second solution one would add an initial impulse not to one , but to both neurons , then roughly the short cycle dynamics can be plotted as in the fig .[ fig : sch_lng](b ) .one could remark that , in this case , the initial perturbation for the second neuron should arrive before the delayed signal of the first one , namely for . although there are infinitely many variations for choosing the time moment for the second impulse , in our numerical simulations we were able to observe only that pattern , which is depicted in the fig .[ fig : sch_lng](b ) . in the fig .[ fig : difcyc ] , we plot the phase portraits and the data series for these two attractors for certain fixed parameter values .the next point to investigate in connection with the periodic firing patterns obtained , is a question whether these solutions exist for all couplings .is their stability region large enough or such solutions appear only for separate parameter values ? as it was already noticed in , such oscillations appear through a saddle - node bifurcation of limit cycles , creating a pair of a stable and an unstable periodic orbit . in the fig .[ fig : long_bc](a ) , the bifurcation curves of this attractor type are plotted in the -plane , for , and .it is easy to conclude , that with increasing the bifurcation curve moves to the left , closer to the wall value .+ this implies that for some large enough coupling periodic firing still exists even if one of the delays is close to zero . for comparison , in the fig .[ fig : long_bc](b ) , the bifurcation curve for the case is present . when overlaying the two graphs of ( a ) and ( b ) ( fig .[ fig : long_bc](c ) ) , one can notice that the critical coupling value ( indicated by a vertical dashed line ) does not depend on the delay times difference , but only on their sum ( see appendix ) . 
in support to this last statement, we depict in the fig .[ fig : long_comp](a , b ) phase portraits and time series for three different periodic solutions , namely , and , while the sum of delays is always 4 and the coupling strength .as it could be clearly seen , the phase trajectories coincide perfectly as well as the time series .we also would like to examine the question how the cycle period is related to the coupling terms .the fig .[ fig : long_prd_vs](a ) represents several plots of the orbit period vs. , while and . in the fig .[ fig : long_prd_vs](b ) , dependence of the period on is depicted ( is the same as in ( a ) , and is chosen so that the sum of delays does not change ) . as it is expected ( cf . ) , increases linearly with .however , it decays with increasing . for the short cycle, the situation is almost the same .again it is born through a saddle - node bifurcation . + in the fig .[ fig : short_bc](a ) , we also plot the bifurcation curves , separating the regions of existence and absence of the short cycle , in the -plane ( as earlier , and ) . again with increasing bifurcation curve moves to the left , however , in comparison with the long cycle the short one occurs for larger values of coupling strength . and after laying over the curve for the case of equal delays ( fig .[ fig : short_bc](b ) ) , one can notice that the critical depends only on the delays sum ( see fig .[ fig : short_bc](c ) ) .the phase portraits and the time series for three different periodic solutions , plotted in the fig .[ fig : short_comp ] , coincide as well . finally , in the fig .[ fig : short_prd_vs](a),(b ) , the graphs disclosing the relation between the period and the coupling term configuration are presented . as in the case of the long cycle, is a linear function of and has a gradual decrease on .in the present paper we have considered two asymmetrically delay - coupled fitzhugh - nagumo systems for modelling interacting excitable neural elements .such an `` intrusion '' gives rise to the regular spiking in the system investigated .for sufficiently large coupling strength and delays , one can observe periodic solutions of two different types ( long and short cycles ) , depending on whether only one subsystem is perturbed initially or both .the long cycle period approximately equals , while the short one has a period of about a half of this amount .furthermore , the numerical simulation , as well as the mathematical anlysis , shows that phase portraits and time series of these solutions do not depend on the difference of delays , but only on their sum .support from dfg in the framework of sfb 555 is acknowledged .the authors would like to thank g. hiller , p. hvel and v. zykov for fruitful discussions and remarks .consider the general system without losing generality assume that and denote and , so that and . then introducing a new function we use in eq .( [ eq : gensys1 ] ) and rewrite the equation ( [ eq : gensys2 ] ) as follows which leads to this corresponds to a system with symmetric delay coupling , and the function fully coincides with the function of the initial problem , but with a _ shift _ along the time axis by . we also note that the inhibitor variables , of the system ( [ eq : fhn ] ) depend only on and , respectively .therefore , omitting them in the above analysis does not influence the resulting conclusion .
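To connect the simulated traces with the period measurements discussed above, one can estimate the cycle period directly from the inter-spike intervals of an activator trace. The sketch below is meant to be used with the arrays x1 and dt from the earlier simulation sketch; the threshold value and the number of transient spikes discarded are arbitrary choices.

```python
import numpy as np

def spike_times(x, dt, threshold=0.0):
    """Times of upward threshold crossings of an activator trace."""
    x = np.asarray(x)
    idx = np.where((x[:-1] < threshold) & (x[1:] >= threshold))[0]
    return idx * dt

def cycle_period(x, dt, discard=5):
    """Mean inter-spike interval after discarding a few transient spikes."""
    t_spikes = spike_times(x, dt)
    if len(t_spikes) < discard + 2:
        return np.nan
    return np.mean(np.diff(t_spikes[discard:]))

# Usage with the trace from the simulation sketch above, e.g. cycle_period(x1, dt).
# Sweeping one delay at fixed delay sum, or varying the sum itself, then
# reproduces the dependence of the period on the delays discussed in the text.
```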
|
we study two delay - coupled fitzhugh - nagumo systems , introducing a mismatch between the delay times , as the simplest representation of interacting neurons . we demonstrate that the presence of delays can cause periodic oscillations which coexist with a stable fixed point . the periodic solutions observed are of two types , which we refer to as a `` long '' and a `` short '' cycle , respectively .
|
the largest eigenvalue of network adjacency matrix appears in many applications . in particular , for dynamic processes on graphs , the inverse of the largest eigenvalue characterizes the threshold of phase transition in both virus spread and synchronization of coupled oscillators in networks . in neuroscience, networks of neurons are often studied using models in which interconnections are represented by a synaptic matrix with elements drawn randomly .eigenvalues of these matrices are useful for studying spontaneous activities and evoked responses in such models , and the existence of spontaneous activity depends on whether the real part of any eigenvalue is large enough to destabilize the silent state in a linear analysis .furthermore , the largest real part of the spectra provides strong clues about the nature of spontaneous activity in nonlinear models .a recent work reveals the importance of the largest eigenvalue in determining disease spread in complex networks , where epidemic threshold relates with inverse of the largest eigenvalue . in context of ecological systems ,a celebrated work by robert may demonstrates that largest real part of eigenvalue of corresponding adjacency matrix contains information about stability of underlying system .mathematically , matrices satisfying a set of constraints are stable .but most real world systems have underlying interaction matrix which are too complicated to satisfy these constraints and hence , study of fluctuations in largest real part of eigenvalues is crucial to understand stability of a system , as well as of an individual network in that ensemble .largest eigenvalues over ensembles of random hermitian matrices yielding correlated eigenvalues follow tracy - widom distribution . whereas , extreme value statistics for independent identically distributed random variables can be formulated entirely in terms of three universal types of probability functions : the frchet , gumbel and weibull known as generalized extreme value ( gev ) statistics depending upon whether the tail of the density is respectively power - law , faster than any power - law , and bounded or unbounded .the gev statistics with location parameter , scale parameter and shape parameter has often been used to model unnormalized data from a given system .probability density function for extreme value statistics with these parameters is given by \exp\big[-\big(1+\big(\xi\frac{(x-\mu)}{\sigma}\big)^{-\frac{1}{\xi}}\big)\big]&\\ \hspace*{5.6cm}\mbox{if } \xi\not=0 \\ \frac{1}{\sigma}\exp\big(-\frac{x-\mu}{\sigma}\big)\exp\big[-\exp\big(-\frac{x-\mu}{\sigma}\big)\big ] ~~ \mbox{if } \xi=0 \end{cases } \label{eq_gev}\ ] ] distributions associated with , and are characterized by frchet , gumbel , and weibull distribution respectively .extreme statistics characterizes rare events of either unusually high or low intensity .recent years have witnessed a spurt in activities on gev statistics , observed in a wide range of systems from quantum dynamics , stock market crashes , natural disaster to galaxy distribution .these distributions have been successful in describing the frequency of occurrence of extreme events .the experimental examples of gev distributions include power consumption of a turbulent flow , roughness of voltage fluctuations in a resistor flow , orientation fluctuations in a liquid crystal , plasma density fluctuations in a tokamak .furthermore , eigenvalues of a non - hermitian random matrix with all entries independent , mean zero and variance , lie uniformly within 
a unit circle in complex plane .limiting behavior of spectral radius of non - hermitian random matrices has been perceived to lie outside the unit disk as , and with proper scaling and shifting , has been found to comply with gumbel distribution . though a lot has been discussed about largest eigenvalues of random matrices or matrices representing properties of above systems , same for adjacency matrices of networks has not been done . a vast literature available on network spectra is mostly confined to the distribution of eigenvalues and lower - upper bounds for largest eigenvalue , etc .few available results on the statistics of largest real part of network eigenvalues ( ) under the gev framework convey that ensemble distribution of inverse of for scale - free networks converges to weibull distribution .sparse random graphs having nodes and connection probability pertains to a normal distribution with mean and variance . in the context of brain networks ,largest eigenvalues of gain matrices , constructed to analyze stability of underlying brain networks , follow normal distribution .networks considered in this paper are motivated by inhibitory ( i ) and excitatory ( e ) couplings in brain networks , entailing matrices with and entries .these matrices are different from non - hermitian random matrices studied using random matrix theory framework .the entries in matrix for former case take values and instead of gaussian distributed random numbers . we investigate dependence of on various properties of underlying network , particularly on the ratio of i - e couplings .we find that exhibits a rich behavior as underlying network becomes more complex in terms of change in i couplings . at a certain i to eratio , distribution manifests a transition to the gev statistics , which is accompanied by another transition from weibull to frchet distribution as network becomes denser . for various average degree and from bottom to top .left panel is for , middle for and right panel for . ] after constructing an erds - reni random network with network size and connection probability with a corresponding adjacency matrix ( ) having entries 0 and 1 , i connections are introduced with a probability as follows . a node is randomly selected as i with the probability and all connections arising from such nodes yield entry into corresponding matrix . for , which assimilates the correlation , network is undirected with being symmetric entailing all real eigenvalues .maximum eigenvalue for this network scales as , where quantity is referred to as average degree of the network . upon introduction of directionality ,complex eigenvalues start appearing in conjugate pairs , and for , bulk of the eigenvalues is distributed in a circular region of radius .note that for a random network with entries 1 and , the radius of circular bulk region scales with square root of the average degree of the network i.e. , and all eigenvalues including the largest lie within the bulk .we investigate of random networks as a function of . fig .[ fig_n50_n100_n500 ] elucidates that , as directionality is introduced in terms of i couplings , the mean of decreases linearly up to a certain threshold value , with subsequent decrease in a nonlinear fashion without any known functional form in terms of network parameters . 
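A minimal sketch of the construction just described — an undirected Erdős–Rényi skeleton with 0/1 entries in which every link leaving a randomly chosen inhibitory node is flipped to -1, followed by the computation of the largest real part of the spectrum — might look as follows. The network size, connection probability, inhibitory probability and number of realizations are illustrative and not the ensemble sizes used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ie_network(N, p, p_in, rng):
    """Undirected Erdos-Renyi skeleton with 0/1 entries; every link leaving a
    node marked inhibitory (chosen with probability p_in) is flipped to -1."""
    upper = np.triu((rng.random((N, N)) < p).astype(float), k=1)
    A = upper + upper.T                       # symmetric 0/1 matrix, zero diagonal
    inhibitory = rng.random(N) < p_in
    A[inhibitory, :] *= -1.0                  # outgoing couplings of I nodes -> -1
    return A

def lambda_max_real(A):
    """Largest real part among the eigenvalues of A."""
    return np.max(np.linalg.eigvals(A).real)

# Illustration with assumed parameters (not the paper's ensemble sizes):
N, p, p_in = 100, 0.12, 0.3
samples = [lambda_max_real(ie_network(N, p, p_in, rng)) for _ in range(200)]
print(np.mean(samples), np.var(samples))
```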
for the linear regime , fitting with a straight line yields the following relation between and : at different values of i coupling probability .the histograms are numerical results , solid and dashed lines correspond to the normal and the gev fit respectively . for each case ,size of the network is and connection probability which leads to the average degree .( a ) , ( b ) , ( c ) , ( d ) , ( e ) , ( f ) , ( g ) and ( h ) correspond to being 0 , 0.1 , 0.3 , 0.4 , 0.42 , 0.46 , 0.48 and 0.50 respectively . , height=207 ] statistics for network parameters and entailing average degree .( a ) , ( b ) , ( c ) , ( d ) , ( e ) , ( f ) , ( g ) and ( h ) correspond to = 0.00 , 0.10 , 0.30 , 0.40 , 0.42 , 0.46 , 0.48 and 0.50 respectively .the histograms are numerical results , solid and dashed lines are obtained after fitting the data with the normal and the gev distributions respectively.,height=207 ] and which corresponds to average degree .subfigures ( a ) , ( b ) , ( c ) , ( d ) , ( e ) , ( f ) , ( g ) and ( h ) correspond to being 0 , 0.1 , 0.3 , 0.4 , 0.42 , 0.46 , 0.48 and 0.5 respectively .the histograms represent numerical result , solid and dashed lines are obtained by fitting the date with the normal and the gev distributions respectively.,height=207 ] fig .[ fig_stat_n100_avg6 ] depicts statistics for largest real part of eigenvalue for and average degree .the curve is fitted with the gev distribution from eq .[ eq_gev ] . for , nature of distribution is normal , however , as reflected by the left panel of fig .[ fig_n50_n100_n500 ] , the mean decreases in agreement to the equation eq .[ eq_mean ] .variances of the data as well as of fitted curves increase with a faster rate for after which there is a fall in its rate of increment .the variance achieves a peak at , and then decreases with a slower rate .the behavior of largest eigenvalue statistics is more complex in the range , where it can be modeled using extreme value statistics .[ fig_stat_n100_avg6](e)-(h ) and negative values of the parameter indicate that statistics converge to the weibull distribution .calculations of shape parameter and detailed discussion on fitting has been exemplified in the section [ appendix ] . as connection probability or increases ,this phenomena of transition from the normal to the gev statistics for becomes more prominent .[ fig_stat_n100_avg12 ] , plotted for and , repeats the normal distribution behavior for , which corresponds to a symmetric random matrix with entries and . till , statistics more or less conforms to the normal distribution . at ,the statistics deviates from the normal distribution , with fitting accuracy being higher for the gev . with a further increase in the value of ,there is a transition to the gev statistics as illustrated by fig .[ fig_stat_n100_avg12 ] at .this behavior of the continues thereafter .as increases further , keeps emulating the normal distribution at and the gev statistics at . at intermediate values, it manifests different behaviors than demonstrated for lower connection probabilities as described by figs .[ fig_stat_n100_avg6 ] and [ fig_stat_n100_avg12 ] .as soon as increases from value , the statistics starts deviating from the normal distribution , and for intermediate values , for example at and in fig .[ fig_stat_n100_avg20 ] , it neither fits with the normal nor with the gev statistics . 
as value of increases , statistics indicates a closer fitting with the gev , and more deviation from the normal at as implied from fig .[ fig_stat_n100_avg20](d)-(e ) .further increase in prompts a good fitting with the gev statistics at , and this good fitting persists thereafter .detailed discussion on true gev statistics is provided in the section [ appendix ] .aforementioned behavior indicates that smaller values induce a smooth transition from the normal to the gev statistics , and for almost all values of the largest eigenvalue statistics remains close to either one of them . whereas larger values construe a rich behavior of .it ensues the normal distribution till certain range of and after that manifests deviation from it displaying a rapid change in the statistics as is incremented . for this intermediate range statistics deviates from the normal as well as the gev substantially .as increases further , the statistics fits better for the gev as compared to the normal , finally elucidating a legitimate fitting with the gev distribution at being 0.5 . and elucidating nature of gev statistics based on the value of shape parameter , and tail of the distribution at .,height=132 ]at these values where statistic fits well with the gev , the parameter , in the tables of section [ appendix ] , reveals that indeed the distribution complies one of the three different statistics , viz .gumbel , weibull and frchet , depending upon .for small , corresponding to sparser networks , the gev statistics espoused weibull distribution , whereas with an increase in connection probability it indicates a transition to frchet distribution through gumbel .phase diagram fig .[ phase ] illustrates this behavior for various values of and . for a definite shape parameterrange the weibull and the normal states have a close resemblance , the statistics in the intermediate regions of consequently emulating to one of them . whereas ,gumbel and frchet are much deviated from the normal , hence in the transition from the normal to the gumbel or frchet , may not abide by any of the statistics , and explains a scabrous behavior of in the intermediate region . at different values of i coupling probability .the histograms are numerical results , solid and dashed lines correspond to normal and gev fit respectively . for each case ,size of the network is and connection probability leading to average degree .( a ) , ( b ) , ( c ) , ( d ) , ( e ) , ( f ) , ( g ) and ( h ) correspond to being 0 , 0.1 , 0.3 , 0.4 , 0.42 , 0.46 , 0.48 and 0.50 respectively ., height=207 ] for the larger values , does not apprise gev statistics even at , fig. [ fig_stat_n100_avg50 ] and the value of reflect a frchet behavior although ks test rejects it . in order to understand such an impact of denseness on behavior, we investigate tail behavior of the parent distribution , and fig .[ fig_tail](c ) reveals that it is deviated from a power - law behavior for larger values , manifesting a deviation from gev distribution , whereas tail behaviors corresponding to = 12 and = 20 imitates exponential and power law decay , respectively as indicated by fig .[ fig_tail ] ( a ) and ( b ) , reinforcing gumbel and frchet distribution respectively for their maxima .higher values for which spectra do not exhibit gev even for , may be ascribed to the correlation in spectra arising from and entries competing with each other .[ fig_tail ] indicates existence of two different scales for , providing a plausible explanation of deviation from gev . 
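The classification into Weibull, Gumbel and Fréchet families can be sketched by fitting the sampled largest eigenvalues with a GEV distribution and inspecting the sign of the fitted shape parameter, together with a KS test as described in the appendix. Note that scipy's genextreme uses a shape parameter c equal to minus the xi of eq. [ eq_gev ], so the signs below are reversed relative to the convention above; the tolerance used to call a fit Gumbel is an arbitrary choice.

```python
import numpy as np
from scipy import stats

def classify_gev(samples, gumbel_tol=0.05, alpha=0.05):
    """Fit a GEV to the largest-eigenvalue samples and classify the family.
    scipy's c = -xi, so c > 0 indicates Weibull, c near 0 Gumbel, c < 0 Frechet."""
    c, loc, scale = stats.genextreme.fit(samples)
    ks = stats.kstest(samples, 'genextreme', args=(c, loc, scale))
    if abs(c) < gumbel_tol:
        family = 'Gumbel'
    elif c > 0:
        family = 'Weibull'
    else:
        family = 'Frechet'
    return family, c, ks.pvalue, ks.pvalue > alpha   # last entry: accepted at level alpha

# e.g. classify_gev(samples) with the lambda_max samples generated above
```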
and .circles represent data points , and solid line represents fitting with a straight line .figures are plotted for three different average degrees ( a ) , ( b ) and ( c ) . ]furthermore , revelation of the transition from weibull to frchet as a function of connection probability or average degree of the network , adds networks to the list of wide physical systems exhibiting this transition .for example , extreme intensity statistics in relation to complex random states manifest the weibull distribution in case of minimum intensity and the gumbel distribution for maximum intensity . for mass transport models distribution of largest mass displays the weibull , gumbel and frchet distribution depending upon critical density . . for non - interacting bosons ,level density follows one of these three distributions depending upon characteristic exponent of growth of underlying single particle spectrum .the interpretation of our result of transition from weibull to frchet in terms of the stability of underlying systems can be drawn as follows . for large number of inodes present in the network , the statistics of for denser networks are more right skewed and more deviated from a normal distribution as compared to the sparser networks , which indicates that higher values of are more probable for denser networks .this transpires that the probability with which a network ushers to an unstable system is more for denser networks than for the sparser ones .robert may , in his landmark paper , concluded that a randomly assembled web becomes less robust ( measured in terms of its dynamical stability ) as its connectivity increases .our results supports this interpretation for the networks having i and e couplings , which is not only based on the average mean behavior of largest eigenvalue but also based on its distribution modeled using the gev statistics .our model elucidates a profound impact of i - e ratio on both the mean and statistics of , hence indicating a probable impact on the stability or dynamical properties of corresponding system . to get insight into the transition from one statistics to another, first we discuss the importance of i - e couplings , followed by an analysis based on a measure capturing i - e couplings ratio .there exists plenty of behaviors and processes exhibited by neural systems which have been attributed to the ratio or balance between e and i inputs . in cortex ,inter - neurons responsible for inhibition play an important function in regulating activity of principal cells .when inhibition is blocked pharmacologically , cortical activity becomes epileptic , and neurons may lose their selectivity to different stimulus .these and other data indicate that an interplay between excitation and inhibition portrays a substantial role in determining the cortical computation . against for different values of average degree at with network size and sample size 20000 . panels ( a ) , ( b ) and ( c ) corresponds to k = 6 , 12 and 20 respectively . ] in order to understand the origin of two different statistics at and , we define a measure which captures an competition between i - i and e - e couplings with i - e couplings .the quantities and correspond to fraction of ( i - e ) and ( i - i ) + ( e - e ) couplings respectively . 
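One plausible way to compute the inter- and intra-coupling fractions just defined is sketched below; the exact normalization used in the text is not reproduced here, so this should be read as an illustration of the bookkeeping rather than the paper's precise definition.

```python
import numpy as np

def coupling_fractions(A, inhibitory):
    """Fractions of inter (I-E) and intra (I-I plus E-E) couplings in the
    undirected link skeleton of the signed adjacency matrix A, with the
    boolean array `inhibitory` marking I nodes."""
    skeleton = (np.abs(A) + np.abs(A).T) > 0
    i_idx, j_idx = np.triu_indices_from(skeleton, k=1)
    links = skeleton[i_idx, j_idx]
    mixed = inhibitory[i_idx] != inhibitory[j_idx]
    n_links = links.sum()
    f_inter = (links & mixed).sum() / n_links
    f_intra = (links & ~mixed).sum() / n_links
    return f_inter, f_intra
```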
fig .[ finter ] plots against exhibiting a positive correlation between the two .presence of few scattered dots towards the rightmost top corner of the fig .[ finter](a ) for clearly reveals that underlying network has maximum -intra connections owing to high and .these figures indicate that the connections between neurons akin escort to more of an unstable system as compared to a balanced structure .moreover , in realistic neuronal network , connectivity is sparser between excitatory neurons than between other pairs , which correspond to the region lying towards the left of the fig .[ finter ] suggesting that networks with less intra - connections are more stable .the measure is bounded between two extreme structures : modular ( all i - i and/or e - e connections ) and bipartite ( all e - i or i - e connections ) ( fig . [ diagram ] ) .various realizations of the considered model may induce networks having ( i ) modular type structure ( fig . [ diagram]a ) , ( ii ) bipartite type of structure ( fig .[ diagram]c ) and ( iii ) intermediate structure lying in between these two ( fig .[ diagram]b ) .note that network structure remains same in all three cases , it is only the type of node ( i or e ) at two ends of a connection which decides the configurations mentioned above .an ideal bipartite structure would bring upon an anti - symmetric matrix consequently having all imaginary eigenvalues . though networks considered here do not consort to an ideal bipartite arrangement as depicted in fig .[ diagram](c ) , for high values of as elucidated in the fig .[ finter ] , it is expected to lie close to this arrangement which explains the origin of lower to the left of fig .[ finter ] .what follows is that larger i - e couplings drives to lower values of , which may be even for an ideal case of bipartite structural arrangement entailing a complete anti - symmetric matrix , whereas larger i - i or e - e couplings , which may be considered as modular type arrangement direct to higher values which may sometimes be unusually very large for certain network configurations , probably being one of the plausible reasons behind the origin of gev statistics .furthermore , the discussion elaborating fig .[ finter ] apparently sheds light on the origin of stability of network configurations having more inter - connections , in turn supporting bipartite type topology over a modular one as proposed in for real world network .to conclude , we have analyzed statistics of networks having i and e couplings .a linear decrease , followed by a non - linear one , in as a function of indicates that an increase in complexity , in terms of inclusion of i nodes , increases the stability of underlying system . 
for the range where mean ensues the linear dependence on , the statistics mostly yields a normal distribution , and after this critical value there is a transition to the gev statistics .the versatile situation arising from i - i , i - e competition , bringing upon gev statistics , has not been observed for zero value , and hence may be attributed to the rich behavior of in the presence of i nodes .though modeling real brain networks needs to account for more properties such as specific degree distribution , hierarchical structure etc , which may bring upon a richer largest eigenvalue pattern , an impact of i nodes impels a drastic change in its spectral properties illustrating extreme events which has been envisaged upon in this paper .asymmetric matrices considered here , motivated by brain networks , elucidate a different statistical property of than that of non - hermitian matrices motivated by ecological webs .moreover , the universal gev distribution displayed by largest eigenvalue of networks propagates theory of extreme value statistics , which suggests that a model which fits with all eigenvalues or describe fluctuations of all eigenvalues may not be a good model for the largest one .recent years have seen a fast development in merging of extreme statistics tools and random matrix theory .the present work extends this general perspective to complex networks . to our knowledge, this is the first work on networks demonstrating that the largest eigenvalue of a network , at particular i - e coupling ratio , can be modeled by the gev statistics .the transition of the statistics from one type to another as a function of i connections has crucial implications in predicting and analyzing network functions and behaviors in extreme situations .skd acknowledges ugc for financial support .sj thanks dst for funding .it is a pleasure to acknowledge dr .changsong zhou ( hongkong baptist university ) for useful discussions on brain networks at several occasions , and dr .a. lakshminarayan ( iitm ) for suggestions on gev statistics .we use kolmogorov - smirnov ( ks ) test to characterize hypothesized model of our data .the ks test is known to be superior to other techniques such as chi - square test for identifying a particular distribution .for example , in the context of networks , the said test has been performed to confirm power law for a given network data .the function kstest of matlab statistics toolbox is used to verify the acceptance of a given statistics at level of confidence if its corresponding p - value of ks test is greater than 0.05 .in some of the parameter regimes , gev distribution resembles the normal distribution owing to its shape parameter , which characterizes it as weibull distribution , and a particular distribution is confirmed using ks test .another example demonstrating the quality of our results can be exemplified with larger values , where for , though distribution looks more like frchet ( fig.4(d ) ) , ks test accepts frchet distribution at only .we perform ks test for sample size 5000 , which is large enough to approve a statistics .for example , accounts for 4000 sample size for performing ks test , and in , it is implemented to affirm goe and gse statistics for random matrices with sample size 1000 .it might be possible that for some network parameters , ks test accepts normal as well as weibull distributions , as depicted earlier by the fact that gev distribution in a certain shape parameter range resembles normal distribution . 
to address this issue, we increase the sample size from 5000 to 20000 for which ks test accepts either normal or weibull distribution .for example at for various values 0.0 , 0.3 , 0.4 , the sample size is increased to 20000 where only one distribution is accepted by the ks test .similarly for and and , the sample size is increased to 20000 for implementation of ks test ..estimated parameters of gev and normal distributions for for different inhibitory coupling probability ( ) . for each case ,size of network is and average degree = 6 .sample size is 5000 for all values except for entries for which sample size is 20000 . [cols="<,<,<,<,<,<,<,<",options="header " , ] for values ranging between 16 and 20 , the distribution lies close to frchet but not exactly frchet even for 20000 sample size , thus rendering ks test to reject it .this is supposedly the bottleneck of increasing sample size .we perform ks test for even a higher sample size ( 50000 ) , and it does not accept the frchet distribution ( even though distribution keeps lying close to the frchet ) , hence demonstrating fairness of our data and the technique adopted to conclude a particular distribution .+ we have also observed an effect of network size on the value of shape parameter .for example , networks size and yield a which characterizes weibull distribution , whereas for , size reflects frchet , and size reflects gumbel distribution . the phase diagram presented in fig. [ phase ] corresponds to , for which we get gev statistics till certain values . for the larger values when does not comply with gev statistics even at , fig .[ fig_stat_n100_avg50 ] and the value of in the table.4 suggest a frchet behavior however ks test rejects it . for numerical analysis ,functions in matlab statistics toolbox such as gevfit and gevpdf have been used .these functions compute maximum likelihood estimation for parameters of gev with confidence level .one of the earlier usage of this package includes numerical study of extreme value distribution in a discrete dynamical system with block maxima approach , where gevfit and gcdf functions have been used for robust estimation of its parameters . in this reference unnormalized data is directly fitted with gev and relation between normalized sequences as well as parameters of gev have been established .
|
inspired by the importance of inhibitory and excitatory couplings in the brain , we analyze the largest eigenvalue statistics of random networks incorporating such features . we find that the largest real part of the eigenvalues of a network , which accounts for the stability of the underlying system , decreases linearly as a function of the inhibitory connection probability up to a particular threshold value , after which it exhibits rich behavior , with the distribution manifesting generalized extreme value statistics . fluctuations in the largest eigenvalue remain somewhat robust against an increase in system size , but reflect a strong dependence on the number of connections , indicating that systems having more interactions among their constituents are likely to be more unstable .
|
markov chain monte carlo ( mcmc ) methods are a remarkably robust way to sample from complex probability distributions .metropolis - hastings ( mh ) sampling stands out as an important benchmark .an appealing feature of mh sampling is the simple physical picture which underlies the general method . roughly speaking, the idea is that the thermal fluctuations of a particle moving in an energy landscape provides a conceptually elegant way to sample from a target distribution .but there are also potential drawbacks to mcmc methods . for example, the speed of convergence to the correct posterior is often unknown since a sampler can become trapped in a metastable equilibrium for a long period of time .once a sampler becomes trapped , a large free energy barrier can obstruct an accurate determination of the distribution . from this perspectiveit is therefore natural to ask whether further inspiration from physics can lead to new examples of samplers .now , although the physics of point particles underlies much of our modern understanding of natural phenomena , it has proven fruitful to consider objects such as strings and branes with finite extent in spatial dimensions ( a string being a case of a -brane ) .one of the main features of branes is that the number of spatial dimensions strongly affects how a localized perturbation propagates across its worldvolume .viewing a brane as a collective of point particles that interact with one another ( see fig .[ parstringbrane ] ) , this suggests applications to questions in statistical inference . motivated by these physical considerations , our aim in this work will be to study generalizations of the mh algorithm for such extended objects .for an ensemble of parallel mh samplers of a distribution , we can alternatively view this as a single particle sampling from variables with density : where the proposal kernel is simply : to realize mcmc with strings and branes , we keep the same target , but we change the proposal kernel by interpreting the index on as specifying the location of a statistical agent in a network . depending on the connectivity of this network ,an agent may interact with several neighboring agents , so we introduce a proposal kernel : in the above , the connectivity of the extended object specifies its overall topology .for example , in the case of a string , i.e. , a one - dimensional extended object , the neighbors of are , , and .[ parstringbrane ] depicts the time evolution of parallel mh samplers compared with the suburban sampler .however , there are potentially many consistent ways to connect together the inferences of statistical agents . from the perspective of physics , this amounts to a notion of distance / proximity between nearest neighbors in a brane .a physically well - motivated way to eliminate this arbitrary feature is to allow the notion of proximity itself to _ fluctuate _ , and for the brane to split and join .we view mcmc with extended strings and branes as a novel class of ensemble samplers . 
by correlating the inferences of nearest neighbors, we can expect there to be some impact on performance .for example , the degree of connectivity impacts the mixing rate for obtaining independent samples .another important feature is that because we are dealing with an extended object , different statistical agents may become localized in different high density regions .provided the connectivity with neighbors is sufficiently low , coupling these agents then has the potential to provide a more accurate global characterization of a target distribution .conversely , connecting too many agents together may cause the entire collective to suffer from groupthink in the sense of .in particular , we shall present evidence that the optimal connectivity for a network of agents on a grid arranged as a hypercubic lattice with some percolation ( i.e. , we allow for broken links ) occurs at a critical effective dimension : where is the average number of neighbors .to summarize : with too few friends one drifts into oblivion , but with too many friends one becomes a boring conformist . turning our discussion around, one can view this paper as providing a concrete way to study the physics of branes with a strongly fluctuating worldvolume , that is , the non - perturbative regime of string theory .the appendices provide some additional details ( see also ) .the suburban code is available at ` https://gitlab.com/suburban/suburban ` .one of the main ideas we shall develop in this paper is mcmc methods for extended objects . in an mcmcalgorithm we produce a sequence of timesteps which can be viewed as the motion of a point particle exploring a target space . more formally , this sequence of points defines the worldline for a particle , and consequently a map : for an extended object with spatial directions , we get a map from a worldvolume to the target : the special cases and respectively denote a point particle and string .the general physical intuition is that minus the log of the target distribution defines a potential energy , and minus the log of the markov chain transition probability is a kinetic energy .the key point is that statistical field theory in euclidean dimensions strongly depends on the number of dimensions .for example ,the two - point function for a free gaussian field with is : where in the case of , the two - point function is . for ,a random field explores its surroundings at large , but the overall variance decreases as . for , however , `` groupthink '' sets in and the ensemble less quickly explores its surroundings .this suggests a special role for stringlike objects . in a theory of quantum gravity( such as string theory ) it is also physically natural to let the proximity of nearest neighbors fluctuate .so , we introduce an ensemble of random graphs . for example , for a -dimensional toroidal hypercubic lattice ,introduce lattice sites along a spatial direction so that is the total number of agents . for a hypercubic lattice in dimensions ,we define the ensemble of random graphs for a brane as one in which we have a random shuffling of the agents , and in which a given link in a -dimensional hypercubic lattice is active with probability .we can also consider more general ensembles of adjacency matrices .for example , the erds - renyi ensemble has an edge between any two nodes with probability .we also introduce the notion of an effective dimension which depends on the average number of neighbors : which need not be an integer .we now present the suburban algorithm . 
for ease of exposition , we shall present the case of a 1d target . the generalization to a -dimensional target is straightforward , and we can take mh within a gibbs sampler , or a sampler with joint variables in which all dimensions update simultaneously . to avoid overloading the notation, we shall write for the current state of the grid . instead of directly sampling from , we introduce multiple copies of the target and sample from the joint distribution using mh sampling with proposal kernel , where denotes the adjacency matrix .if the system is in a state , with adjacency matrix , we pick a new state according to the mh update rule with a proposal kernel which depends on both these inputs .the mh acceptance probability is : this leads us to algorithm [ alg : suburban ] .randomly initialize and * for * * do * sample from accept with probability * if * accept = true * then * * else * draw from some of these steps can be parallelized whilst retaining detailed balance . for example we could pick a coloring of a graph and then perform an update for all nodes of a particular color whilst holding fixed the rest .we can also stochastically evolve the adjacency matrices .now , having collected a sequence of values , we can interpret this as samples of the original distribution .as standard for mcmc methods , we can then calculate quantities of interest such as the mean : as well as higher order moments .let us discuss the reason we expect our sampler to converge to the correct posterior distribution .first note that although we are modifying the proposal kernel at each time step ( i.e. , by introducing a different adjacency matrix ) , this modification is independent of the current state of the system .so , it can not impact the eventual posterior distribution we obtain .second , we observe that since we are just performing a specific kind of mh sampling routine for the distribution , we expect to converge to the correct posterior distribution .but , since the variables are all independent , this is tantamount to having also sampled multiple times from .the caveat is that we need the sampler to actually wander around during its random walk ; is typically necessary to prevent `` groupthink . '' to accommodate a flexible framework for prototyping , we have implemented the suburban algorithm in the probabilistic programming language ` dimple ` . for practical purposes we take a fairly large burn - in cut , discarding the first of samples from a run .we always perform gibbs sampling over the agents .for mh within gibbs sampling over a -dimensional target , we thus get a gibbs schedule with updates for each time step . for a joint sampler ,the gibbs schedule consists of just updates . the specific choice of for eqn .( [ stringkernel ] ) is motivated by having a free gaussian field on a fluctuating graph topology : with : for a neighbor of on the graph defined by the adjacency matrix .additionally , we set the hyperparameters for the kernel as : that is , we take an adaptive value for specified by the number of nearest neighbors joined to .this condition leads to a well - behaved continuum limit on a fully connected hypercubic lattice .in most cases , we consider mh within gibbs sampling , though we also consider the case where joint variables are sampled , that is , pure mh . 
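A compact sketch of the update loop for a 1d target is given below. The proposal used here — a Gaussian centered on the mean of the neighbouring agents' current values, with a width shrinking as 1/sqrt(1+k) for k neighbours — is an assumed concrete realization in the spirit of the free Gaussian field kernel described above, not the exact kernel of eqn. ( [ stringkernel ] ); the Erdős–Rényi redraw of the adjacency matrix at every time step is just one of the ensembles mentioned earlier, and the standard normal target is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def log_pi(x):
    """Illustrative 1d target (standard normal), known up to normalization."""
    return -0.5 * x**2

def suburban_step(x, log_pi, beta=0.1, sigma0=1.0, rng=rng):
    n = len(x)
    # redraw the adjacency matrix (here: an Erdos-Renyi graph with edge prob. beta)
    A = rng.random((n, n)) < beta
    A = np.triu(A, 1)
    A = A | A.T
    for a in range(n):                         # MH within Gibbs over the agents
        nbrs = np.where(A[a])[0]
        k = len(nbrs)
        sigma = sigma0 / np.sqrt(1.0 + k)      # adaptive width (assumed form)
        center_fwd = x[nbrs].mean() if k else x[a]
        x_prop = rng.normal(center_fwd, sigma)
        center_rev = center_fwd if k else x_prop
        log_alpha = (log_pi(x_prop) - log_pi(x[a])
                     + norm.logpdf(x[a], center_rev, sigma)
                     - norm.logpdf(x_prop, center_fwd, sigma))
        if np.log(rng.random()) < log_alpha:
            x[a] = x_prop
    return x

x = rng.normal(size=16)                        # 16 agents, random initialization
chain = [suburban_step(x, log_pi).copy() for _ in range(2000)]
```

The Hastings correction is kept explicitly because the proposal center depends only on the neighbours, which are held fixed during each agent's update; when an agent has no neighbours the step reduces to an ordinary symmetric random walk.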
rather than perform error analysis within a single long mcmc run, we opt to take multiple independent trials of each mcmc run in which we vary the hyperparameters of the sampler such as the overall topology and average degree of connectivity of the sampler . though this leads to less efficient statistical estimators , it has the virtue of allowing us to easily compare the performance of different algorithms , i.e. , as we vary the continuous and discrete hyperparameters of the suburban algorithm .we take to compare different grid topologies .we have also compared performance with parallel slice ( within gibbs ) samplers to ensure that our performance is comparable to other benchmarks . to gauge accuracy ,we collect the inferred mean and covariance matrix .we then compute the distance to the true values : we also collect performance metrics from the mcmc runs such as the rejection rate . a typical rule of thumbis that for targets with no large free energy barriers , a rejection rate of somewhere between is acceptable ( see e.g. , ) .we also collect the integrated auto - correlation time for the energy of the distribution : by collecting the values . for ,we evaluate : {c}\frac{1}{n}\underset{t=1}{\overset{n - k}{{\displaystyle\sum } } } \left ( v^{(t)}-\overline{v}\right ) \left ( v^{(t+k)}-\overline{v}\right ) \text { \ \ \ \ } k\geq0\\ \frac{1}{n}\underset{t=1}{\overset{n+k}{{\displaystyle\sum } } } \left ( v^{(t)}-\overline{v}\right ) \left ( v^{(t - k)}-\overline{v}\right ) \text { \ \ \ \ } k<0 \end{array } \right\ } , \ ] ] and then extract the integrated auto - correlation time : we also refer to this as the decay timeas it reflects how quickly the chain mixes .for this observable we include all samples ( no burn - in ) . to extract numerical estimates we perform independent trials with random initialization for each agent on ^{d} ] . compared with ,we take the parameters ` stdmu ` , ` stdsig ` .we do this primarily to achieve convergence for the samplers in a reasonable amount of time .the different mixture models are obtained by setting the random seed in the code of to different values ( see fig . [ rand40plot ] ) .rand40plot.pdf by design , we have chosen our domain for the random variables so that the brane tension should give a roughly comparable class of length scales for the target distribution .since the overall topology of the grid does not appear to affect the qualitative behavior of the sampler , we have also focussed on the case of a grid topology with shuffling and percolation . for each choice of hyperparameter , we perform trials .rand40b0p01mets.pdf the random seed model gives representative behavior .the stringlike sampler is faster and more accurate than the parallel mh sampler , while a sampler suffers from `` groupthink , '' settling in an incorrect metastable configuration .it is also of interest to consider distributions concentrated on a lower - dimensional subspace such as the two - dimensional banana distribution : this distribution is often used as a performance test of various optimization algorithms .we focus on a suburban sampler with joint variables , taking different grid topologies for the statistical agents and then perform a sweep over different values of the hyperparemeters and , performing trials for each case .time2dbananamets.pdf we present the representative case of a grid , and further specialize to the tuned case of . 
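Before turning to the results, the diagnostics described earlier in this section can be sketched as follows: the distance between inferred and true mean and covariance, the autocovariance C(k) in the form written above, and an integrated autocorrelation time. The cutoff and normalization entering the last quantity vary between conventions, so the estimator below is an assumption rather than the exact definition used in the runs.

```python
import numpy as np

def autocovariance(v, k):
    """C(k) in the form written above: (1/N) sum_t (v_t - vbar)(v_{t+k} - vbar)."""
    v = np.asarray(v, dtype=float)
    vbar, n, k = v.mean(), len(v), abs(k)
    return np.sum((v[:n - k] - vbar) * (v[k:] - vbar)) / n

def integrated_autocorr_time(v, k_max=None):
    """tau = 1 + 2 * sum_{k>=1} C(k)/C(0); cutoff and normalization are
    conventional choices, so treat this as a sketch."""
    if k_max is None:
        k_max = len(v) // 4
    c0 = autocovariance(v, 0)
    return 1.0 + 2.0 * sum(autocovariance(v, k) for k in range(1, k_max)) / c0

def accuracy(samples, mu_true, cov_true):
    """Distances between inferred and true mean / covariance, as in the text."""
    samples = np.asarray(samples)
    d_mu = np.linalg.norm(samples.mean(axis=0) - mu_true)
    d_cov = np.linalg.norm(np.cov(samples, rowvar=False) - cov_true)
    return d_mu, d_cov
```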
fig .[ time2dbananamets ] shows that parallel samplers ( ) and collectives with groupthink both fare worse than a stringlike sampler .the extended nature of the suburban sampler also suggests that for target distributions with various disconnected deep pockets , different pieces of the ensemble can wander over to different regions .we consider a mixture model with two gaussian components : with : where we vary and hold fixed .we take samples with agents on a grid with , performing independent trials .since we use mh within gibbs , we do not find much decrease in performance in comparing the and free energy barrier tests .[ 2dfreebarrierplotmet ] shows that parallel samplers fare worse than the extended objects .the runs are sometimes more accurate , but mix slower than for . after thinning samples, the former runs will be less accurate .jjh thanks d. krohn for collaboration at an early stage .we thank j.a .barandes , c. barber , c. freer , m. freytsis , j.j .heckman sr ., a. murugan , p. oreto , r. yang , and j. yedidia for helpful discussions .the work of jjh is supported by nsf career grant phy-1452037 .jjh also acknowledges support from the bahnson fund as well as the r. j. reynolds industries , inc .junior faculty development award at unc chapel hill .99 n. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. h. teller , and e. teller , `` equation of state calculations by fast computing machines , '' _ j. chem . phys . _* 21 * ( 1953 ) 10871092 .w. k. hastings , `` monte carlo sampling methods using markov chains and their applications , '' _ biometrika _ * 57 * no . 1 ,( 1970 ) 97109 .j. j. heckman , `` statistical inference and string theory , '' _ int .* a30 * no . 26 , ( 2015 ) 1550160 , arxiv:1305.3621 [ hep - th ] . r. h. swendsen and j .- s. wang , `` replica monte carlo simulation of spin - glasses , '' _ phys ._ * 57 * no .21 , ( 1986 ) 26072609 .c. j. geyer , `` markov chain monte carlo maximum likelihood , '' in _ computing science and statistics : proceedings of the 23rd symposium on the interface _ ,e. m. keramidas , ed . , pp .interface foundation , 1991 .w. r. gilks , g. o. roberts , and e. i. george , `` adaptive direction sampling , '' _ journal of the royal statistical society .series d ( the statistician ) _ * 43 * no . 1 , ( 1997 ) 179189 .d. j. earl and m. w. deem , `` parallel tempering : theory , applications , and new perspectives , '' _ phys .phys . _ * 7 * ( 2005 ) 39103916 .r. m. neal , `` mcmc using ensembles of states for problems with fast and slow variables such as gaussian process regression , '' arxiv:1101.0387 [ stat ] .j. goodman and j. weare , `` ensemble samplers with affine invariance , '' _ comm . in appl .math . and comp .sci . _ * 5 * no . 1 , ( 2010 ) 6580 .j. j. heckman , j. g. bernstein , and b. vigoda , `` mcmc with strings and branes : the suburban algorithm ( extended version ) , '' arxiv:1605.05334 [ physics.comp-ph ] .s. hershey , j. bernstein , b. bradley , a. schweitzer , n. stein , t. weber , and b. vigoda , `` accelerating inference : towards a full language , compiler and hardware stack , '' arxiv:1212.2991 [ cs.se ] .r. a. neal , `` slice sampling , '' _ the ann . of stat . _* 31 * no . 3 , ( 2003 ) 705767 .a. gelman , w. r. gilks , and g. o. roberts , `` weak convergence and optimal scaling of random walk metropolis algorithms , '' _ ann .* 7 * no . 1 , ( 1997 ) 110120 .y. chen , m. welling , and a. j. smola , `` super - samples from kernel herding , '' arxiv:1203.3472 [ cs.lg ] .m. e. peskin and d. v. 
schroeder , _ an introduction to quantum field theory_. , reading , usa , 1995 .a. wipf , `` statistical approach to quantum field theory , '' _ lect .notes phys . _* 864 * , ( 2013 ) .v. balasubramanian , `` statistical inference , occam s razor and statistical mechanics on the space of probability distributions , '' _ neural comp . _ * 9(2 ) * ( 1997 ) 349368 , arxiv : cond - mat/9601030 .p. di francesco , p. mathieu , and d. senechal , _ conformal field theory_. , new york , usa , 1997 .in this appendix we give a path integral formulation for mcmc with extended objects . for additional background on path integrals in statistical field theory , see . in what follows, we denote the random variable as with outcome on a target space with measure .we consider sampling from a probability density . in accord with physical intuition, we view as a potential energy . in general ,our aim is to discover the structure of by using some sampling algorithm to produce a sequence of values .a quantity of interest is the expected value of with respect to a given probability distribution of paths .this helps in telling us the relative speed of convergence and the mixing rate . to study this, it is helpful to evaluate the expectation value of the quantity: with respect to a given path generated by our sampler . in more general terms, the reason to be interested in this expectation value comes from the statistical mechanical interpretation of statistical inference : there is a competition between staying in high likelihood regions ( minimizing the potential ) , and exploring more of the distribution ( maximizing entropy ) .the tradeoff between the two is neatly captured by the path integral formalism : it tells us about a particle moving in a potential , and subject to a thermal background , as specified by the choice of probability measure over possible paths .indeed , we will view this probability measure as defining a `` kinetic energy '' in the sense that at each time step , we apply a random kick to the trajectory of the particle , as dictated by its contact with a thermal reservoir . along these lines, if we have an mcmc sampler with transition probabilities , marginalizing over the intermediate values yields the expected value of line ( [ productpot ] ) : \left ( \underset{i=0}{\overset{n-1}{{\displaystyle\prod } } } t(x^{(i)}\rightarrow x^{(i+1)})e^{-v(x^{(i+1)})}\right)\ ] ] where we have introduced the measure factor = dx^{(1 ) } ... dx^{(n)}$ ] .we would like to interpret as the potential energy and as a kinetic energy : we now observe that our expectation value has the form of a well - known object in physics : e^{- \underset{t}{\sum } l^{(e)}[x^{(t)}]},\ ] ] a path integral ! herethe euclidean signature lagrangian is : =k+v.\text { } \ ] ] since we shall also be taking the number of timesteps to be very large , we make the riemann sum approximation and introduce the rescaled lagrangian density: so that we can write our process as : e^{-\int dt\mathcal{l}^{(e)}[x(t ) ] } , \ ] ] where by abuse of notation , we use the same variable to reference both the discretized timestep as well as its continuum counterpart . to give further justification for this terminology , consider now the specific case of the metropolis - hastings algorithm . in this case, we have a proposal kernel , and acceptance probability: the total transmission probability is then given by a sum of two terms .one is given by , i.e. , we accept the new sample .we also sometimes reject the sample , i.e. 
, we keep the same value as before: where is the dirac delta function , and we have introduced an averaged rejection rate : to gain further insight , we now approximate the mixture model by a normal distribution such that . hence , , matching the first and second moments to requires , with the average acceptance rate . ]\simeq\alpha_{\text{eff}}\left ( x^{(t+1)}-x^{(t)}\right ) ^{2}+v(x^{(t)})+ ... ,\ ] ] where here , the ... denotes additional correction terms which are typically suppressed by powers of . our plan will be to assume a kinetic term with quadratic time derivatives , but a general potential .the overall strength of the kinetic term will depend on details such as the average acceptance rate . as the acceptance rate decreases , increases and the sampled values all concentrate together .we now turn to the generalization of the above concepts for strings and branes , i.e. , extended objects .introduce copies of the original distribution , and consider the related joint distribution: if we keep the proposal kernel unchanged , we can simply describe the evolution of independent point particles exploring an enlarged target space: if we also view the individual statistical agents on the worldvolume as indistinguishable , we can also consider quotienting by the symmetric group on letters , : of course , we are also free to consider a more general proposal kernel in which we correlate these values . viewed in this way , an extended object is a single point particle , but on an enlarged target space .the precise way in which we correlate entries across a grid will in turn dictate the type of extended object . indeed , much of the path integral formalism carries over unchanged .the only difference is that now , we must also keep track of the spatial extent of our object .so , we again introduce a potential energy and a kinetic energy : and a euclidean signature lagrangian density : =k+v,\ ] ] where here , indexes locations on the extended object , and the subscript makes implicit reference to the adjacency on the graph . in a similar notation , the expected value is now : \text { } e^{-\underset{t}{\sum}\underset{\sigma}{\sum } l^{(e)}[x(t,\sigma_{a})]}.\ ] ] since we shall also be taking the number of time steps and agents to be large , we again make the riemann sum approximation : that : e^ { -\int dtd\sigma_{a}\text { } \mathcal{l}^{(e)}[x(t,\sigma_{a } ) ] } , \ ] ] in the obvious notation .so far , we have held fixed a particular adjacency matrix .this is somewhat arbitrary , and physical considerations suggest a natural generalization where we sum over a statistical ensemble of choices .one can loosely refer to this splitting and joining of connectivity as incorporating gravity into the dynamics of the extended object , because it can change the notion of which statistical agents are nearest neighbors .along these lines , we incorporate an ensemble of possible adjacency matrices , with some prescribed probability to draw a given adjacency matrix .the topology of an extended object dictates a choice of statistical ensemble .since we evolve forward in discretized time steps , we can in principle have a sequence of such matrices , one for each timestep . for each draw of an adjacency matrix , the notion of nearest neighbor will change , which we denote by writing , that is , we make implicit reference to the connectivity of nearest neighbors . 
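to make the nearest - neighbor coupling tangible , here is a toy sketch in which the `` kinetic '' spring term between adjacent agents is simply added to the energy used in each metropolis accept step . this coupling - in - the - energy construction , the ring topology , and the gaussian toy target are all illustrative assumptions ; the actual suburban proposal kernel is specified in the extended version of the paper and may differ .

```python
import numpy as np

def metropolis_step_with_neighbors(x, log_target, adjacency, coupling=1.0,
                                   step=0.5, rng=None):
    """One sweep over an ensemble of agents x[i] (each a d-dim vector).

    Each agent takes a random-walk Metropolis step; in addition to the
    target energy, a harmonic 'kinetic' term ties it to its graph
    neighbors.  This is only an illustration of the idea, not the
    published proposal kernel.
    """
    rng = rng or np.random.default_rng()
    for i in range(len(x)):
        nbrs = np.flatnonzero(adjacency[i])
        prop = x[i] + step * rng.normal(size=x[i].shape)

        def energy(xi):
            e = -log_target(xi)                      # potential term
            for j in nbrs:                           # kinetic / spring term
                e += coupling * np.sum((xi - x[j]) ** 2)
            return e

        if np.log(rng.uniform()) < energy(x[i]) - energy(prop):
            x[i] = prop
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    log_target = lambda z: -0.5 * np.sum(z ** 2)     # standard gaussian toy target
    n_agents, dim = 8, 2
    ring = np.zeros((n_agents, n_agents), dtype=int) # ring (string-like) topology
    for i in range(n_agents):
        ring[i, (i + 1) % n_agents] = ring[i, (i - 1) % n_agents] = 1
    x = [rng.normal(size=dim) for _ in range(n_agents)]
    for _ in range(2000):
        x = metropolis_step_with_neighbors(x, log_target, ring,
                                           coupling=0.1, rng=rng)
    print("ensemble mean:", np.mean(x, axis=0))
```

setting ` coupling ` to zero reduces this sketch to the independent parallel mh limit discussed above , in which the agents explore the enlarged target space as decoupled point particles .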
marginalizing over the choice of adjacency matrix , we get:[da]\text { } e^{-\underset{t}{\sum}\underset{\sigma } { \sum}l^{(e)}[x(t,\sigma_{a(t)})]},\ ] ] where now the integral involves summing over multiple ensembles : the spatial and temporal values with measure factor , as well as the choice of a random matrix from the ensemble ( one such integral for each timestep ) . at a very general level , one can view the adjacency matrix as adding additional auxiliary random variables to the process .so in this sense , it is simply part of the definition of the proposal kernel .following some of the general considerations outlined in reference , we now discuss the extent to which the extended nature of such objects plays a role in statistical inference and in particular mcmc .to keep our discussion from becoming overly general , we specialize to the case of a hypercubic lattice of agents in spatial dimensions arranged on a torus , and we denote a location on the grid by a -component vector .we can allow for the possibility of a fluctuating worldvolume by making the crude substitution .consider the gaussian proposal kernel of line ( [ colonelklink ] ) . in a large lattice ,we approximate the finite differences in one of the spatial directions by derivatives of continuous functions . expanding in this limit ,various cross - terms cancel and we get for the proposal kernel : where denotes a finite difference in the spatial component of the -dimensional lattice . just as in the case of the point particle , the transition rate defines a kinetic energy quadratic in derivatives ( to leading order ) , with an effective strength dictated by the overall acceptance rate .one of the things we would most like to understand is the extent to which an extended object can explore the hills and valleys of .we perform a perturbative analysis , at first viewing as a small correction to the lagrangian . starting from some fixed position ,consider the expansion of around this point: each of the derivatives of reveals another characteristic feature length of .these feature lengths are specified by the values of the moments for the distribution . when , there is a well - known behavior of correlation functions which is given by eqn .( [ twopoint ] ) . in dimensions exhibits the requisite power law behavior .] there is thus a rather sharp change in the inferential powers of an extended object above and below . to understand the impact of a non - trivial potential , we introduce the notion of a `` scaling dimension '' for and its derivatives .this is a well - known notion , see for a review .just as we assign a notion of proximity in space and time to agents on a grid , we can also ask how rescaling all distances on the grid via : impacts the structure of our continuum theory lagrangian .the key point is that provided and have been taken sufficiently large , or alternatively we take sufficiently large , we do not expect there to be any impact on the physical interpretation .unpacking this statement naturally leads us to the notion of a scaling dimension for itself . 
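before carrying out the rescaling argument , we pause to make the ensemble of adjacency matrices concrete . the sketch below draws an independent - edge ( erdos - renyi style ) random graph with a prescribed average degree at every timestep ; the particular random - graph family is an assumption , since the text only requires some prescribed probability distribution over adjacency matrices .

```python
import numpy as np

def draw_adjacency(n_agents, mean_degree, rng):
    """Draw a symmetric independent-edge adjacency matrix with a given
    average number of nearest-neighbor connections (an assumption; the
    paper only fixes the ensemble probabilistically)."""
    p = min(1.0, mean_degree / max(n_agents - 1, 1))
    a = rng.uniform(size=(n_agents, n_agents)) < p
    a = np.triu(a, k=1)
    a = a | a.T                      # symmetric, no self-connections
    return a.astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_steps, n_agents = 5, 16
    # one draw per timestep, so which agents are 'nearest neighbors'
    # can split and join as the chain evolves
    for t in range(n_steps):
        a_t = draw_adjacency(n_agents, mean_degree=2.0, rng=rng)
        print(f"t={t}: average degree = {a_t.sum(axis=1).mean():.2f}")
```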
observe that rescaling the number of samples and number of agents in line ( [ rescaler ] ) can be interpreted equivalently as holding fixed and , but rescaling and : now , for our kinetic term to remain invariant , we need to _ also _ rescale : the exponent is often referred to as the `` scaling dimension '' for obtained from `` naive dimensional analysis '' or nda .it is `` naive '' in the sense that when the potential and we have strong coupling , the notion of a scaling dimension may only emerge at sufficiently long distance scales .note that because we are uniformly rescaling the spatial and temporal pieces of the grid , we get the same answer for the scaling dimension if we consider spatial derivatives along the grid .this assumption can also be relaxed in more general physical systems .to illustrate , invariance of the free field action requires : we can also consider the behavior of a perturbation of the form . applying our nda analysis prescription , we see that under a rescaling , the contribution such a term makes to the action is : so terms of the form for die off as we take , i.e. , .additionally , we see that when , we can in principle expect more general contributions of the form . for additional discussion on the interpretation of such contributions ,see reference .consider next possible perturbations to the potential energy .each successive interaction term in the potential is of the form , with scaling dimension .so , for , all higher order terms can impact the long distance behavior of the correlation functions , while for , the most relevant term is bounded above , and the global structure of the potential will be missed .rand40b0p01tails.pdf time2dbananatails.pdfin figs . [ rand40b0p01tails ] and [ time2dbananatails ] we display some tests of how well a sampler collects `` rare events , '' i.e. , tail statistics . after taking a burn - in cut with remaining samples ,we compute the number of counts in the region , the region , the region , and events which fall outside the region . for each such region , we compute the difference between the inferred and true counts and return the fraction : landrand40slicevsmh.pdf 2dfreebarrierslicevssub.pdffigs . [ landrand40slicevsmh ] and [ 2dfreebarrierplotmetnew ]compare the performance of a suburban sampler with grid topology ( with and ) with parallel slice within gibbs sampling .we use the default implementation in ` dimple ` so that for a 1d target distribution the initial size of the -axis width is an interval of length one containing , and the maximum number of doublings is .a direct comparison with suburban is subtle because in slice sampling the halting of the `` stepping out '' and `` stepping in '' loops is not fixed ahead of time . in practicewe find that for a fixed number of samples , slice typically makes several more queries to the target distribution compared with suburban , roughly a factor of . for large free energy barriers ,it is also sometimes helpful to enlarge the initialization width from to ( see fig . [ 2dfreebarrierplotmetnew ] ) .
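a minimal sketch of the tail - statistics test described above is given below ; the choice to normalize each region 's count difference by the true count is an assumption , as the exact fraction used in figs . [ rand40b0p01tails ] and [ time2dbananatails ] is not reproduced here .

```python
import numpy as np

def tail_region_counts(samples, sigma=1.0):
    """Count samples (1-d) falling in the 1-, 2-, 3-sigma bands and beyond,
    for a target centered at zero with the given standard deviation."""
    r = np.abs(np.asarray(samples)) / sigma
    return np.array([np.sum(r <= 1),
                     np.sum((r > 1) & (r <= 2)),
                     np.sum((r > 2) & (r <= 3)),
                     np.sum(r > 3)])

def tail_discrepancy(inferred, true, sigma=1.0):
    """Per-region fractional difference between inferred and true counts.
    Normalizing by the true count in each region is an assumption."""
    ci = tail_region_counts(inferred, sigma).astype(float)
    ct = tail_region_counts(true, sigma).astype(float)
    return (ci - ct) / np.maximum(ct, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_draws = rng.normal(size=200_000)              # stand-in for the exact target
    mcmc_draws = rng.normal(scale=0.9, size=200_000)   # a slightly too-narrow chain
    print(tail_discrepancy(mcmc_draws, true_draws))
```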
|
motivated by the physics of strings and branes , we introduce a general suite of markov chain monte carlo ( mcmc ) `` suburban samplers '' ( i.e. , spread out metropolis ) . the suburban algorithm involves an ensemble of statistical agents connected together by a random network . the performance of the collective in reaching fast and accurate inference depends primarily on the average number of nearest neighbor connections . increasing the average number of neighbors above zero initially improves performance , though there is a critical connectivity , set by an effective dimension , above which `` groupthink '' takes over and the performance of the sampler declines .
|
in kernel machines such as support vector machines ( svm ) , objects are represented as a kernel matrix , where objects are represented as an positive semidefinite matrix .essentially the entry of the kernel matrix describes the similarity between -th and -th objects .due to positive semidefiniteness , the objects can be embedded as points in an euclidean feature space such that the inner product between two points equals to the corresponding entry of kernel matrix .this property enables us to apply diverse learning methods ( for example , svm or kernel pca ) without explicitly constructing a feature space .biological data such as amino acid sequences , gene expression arrays and phylogenetic profiles are derived from expensive experiments . typically initial experimental measurements are so noisy that they can not be given to learning machines directly .since high quality data are created by extensive work of human experts , it is often the case that good data are available only for a subset of samples .when a kernel matrix is derived from such incomplete data , we have to leave the entries for unavailable samples as _ missing_. we call such a matrix an `` incomplete matrix '' . our aim is to estimate the missing entries , but it is obviously impossible without additional information .so we make use of a _ parametric model _ of admissible matrices , and estimate missing entries by fitting the model to existing entries . in this scheme , it is important to define a parametric model appropriately .for example , used the set of all positive definite matrices as a model .although this model worked well when only a few entries are missing , this model is too general for our cases where whole columns and rows are missing .thus we need another information source for constructing a parametric model .fortunately , in biological data , it is common that one object is described by two or more representations .for example , genes are represented by gene networks and gene expression arrays at the same time . also a bacterium is represented by several marker sequences . in this paper , we assume that a complete matrix is available from another information source , and a parametric model is created by giving perturbations to the matrix .we call the complete matrix a `` base matrix '' .when creating a parametic model of admissible matrices from a base matrix , one typical way is to define the parametric model as all _ spectral variants _ of the base matrix , which have the same eigenvectors but different eigenvalues .when several base matrices are available , the weighted sum of these matrices would be a good parametric model as well . 
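a minimal sketch of the spectral - variants construction just described : the base kernel matrix supplies the eigenvectors , and the eigenvalues are left as free parameters of the model .

```python
import numpy as np

def spectral_variant(base_kernel, eigenvalues):
    """Build a member of the spectral-variants family: same eigenvectors
    as the base kernel matrix, but a new set of (positive) eigenvalues."""
    _, vecs = np.linalg.eigh(base_kernel)          # base eigenvectors
    lam = np.asarray(eigenvalues, dtype=float)
    return (vecs * lam) @ vecs.T                   # sum_i lam_i * v_i v_i^T

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.normal(size=(6, 3))
    base = x @ x.T + 1e-6 * np.eye(6)              # a toy positive definite base matrix
    new_lams = rng.uniform(0.5, 2.0, size=6)       # the free parameters of the model
    k = spectral_variant(base, new_lams)
    # the variant shares eigenvectors with the base but has the chosen spectrum
    print(np.allclose(np.sort(np.linalg.eigvalsh(k)), np.sort(new_lams)))
```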
in order to fit a parametric model , the distance between two matrices has to be determined .a common way is to define the euclidean distance between matrices ( for example , the frobeneous norm ) and make use of the euclidean geometry .recently tackled with the incomplete matrix approximation problem by means of kernel cca .also proposed a similarity measure called `` alignment '' , which is basically the cosine between two matrices .in contrast that their methods are based on the euclidean geometry , this paper will follow an alternative way : we will define the kullback - leibler ( kl ) divergence between two kernel matrices and make use of the riemannian information geometry .the kl divergence is derived by relating a kernel matrix to a covariance matrix of gaussian distribution .the primal advantage is that the kl divergence allows us to use the algorithm to approximate an incomplete kernel matrix .the and steps are formulated as convex programming problems , and moreover they can be solved analytically when spectral variants are used as a parametric model .we performed bacteria clustering experiments using two marker sequences : 16s and gyrb .we derived the incomplete and base kernel matrices from gyrb and 16s , respectively . as a result , even when 50% of columns / rows are missing, the clustering performance of the completed matrix was better than that of the base matrix , which illustrates the effectiveness of our approach in real world problems .this paper is organized as follows : sec . [sec : ig ] introduces the information geometry to the space of positive definite matrices . based on geometric concepts , the _ em _ algorithm for matrix approximation is presented in sec .[ sec : em ] , where detailed computations are deferred in sec .[ sec : compproj ] . in sec .[ sec : em ] , the matrix approximation problem is formulated as statistical inference and the equivalence between the _ em _ and em algorithms is shown .then the bacteria clustering experiment is described in sec .[ sec : experiment ] . after seeking for possible extensions in sec .[ sec : ext ] , we conclude the paper in sec . [sec : con ] .we first explain how to introduce the information geometry in the space of positive definite matrices. only necessary parts of the theory will be presented here , so refer to for details .let us define the set of all positive definite matrices as .the first step is to relate a positive definite matrix to the gaussian distribution with mean 0 and covariance matrix : it is well known that the gaussian distribution belongs to the exponential family .the canonical form of an exponential family distribution is written as where is the vector of sufficient statistics , is the natural parameter and is the normalization factor .when ( [ eq : gaussian ] ) is rewritten in the canonical form , we have the sufficient statistics as and the natural parameter as {11 } , \ldots , [ p^{-1}]_{dd } , [ p^{-1}]_{12 } , \ldots , [ p^{-1}]_{d-1,d } \right)^\top,\ ] ] where {ij} ] and $ ] . our purpose is to obtain the maximum likelihood estimate of parameter of the following gaussian model : ^\top m^{-1 } \left [ \begin{array}{c } { { \boldsymbol{v}}}\\ { { \boldsymbol{h}}}\end{array } \right ] \right),\ ] ] where is described as ( [ eq : mdef ] ) . in the course of maximum likelihood estimation, we have to estimate the observed covariances and in an appropriate way .the em algorithm consists of the following two steps . *e - step : fix and update and by conditional expectation . 
* m - step : fix and update by maximum likelihood estimation .it is shown that the likelihood of observed data increases monotonically by repeating these two steps .the m - step maximizes the likelihood , which is easily seen to be equivalent to minimizing the kl divergence .so the -step is equivalent to the -step ( [ eq : mopt ] ) .however , the equivalence between e - step and -step is not obvious , because the former is based on conditional expectation and the latter minimizes the kl divergence . in the e - step ,the covariance matrices are computed from the conditional distribution described as where matrices are derived as ( [ eq : mdivide ] ) . taking expectation with this distribution , we have & = & -{{\boldsymbol{v}}}{{\boldsymbol{v}}}^\top s_{vh } s_{hh}^{-1 } , \\ e_{{\boldsymbol{b}}}[{{\boldsymbol{h}}}{{\boldsymbol{h}}}^\top\mid{{\boldsymbol{v } } } ] & = & s_{hh}^{-1 } + s_{hh}^{-1}s_{vh}^\top{{\boldsymbol{v}}}{{\boldsymbol{v}}}^\top s_{vh } s_{hh}^{-1}.\end{aligned}\ ] ] then the covariance matrices are estimated as & = & - k_i s_{vh } s_{hh}^{-1 } , \\d_{hh } = e_o e_{{\boldsymbol{b}}}[{{\boldsymbol{h}}}{{\boldsymbol{h}}}^\top\mid{{\boldsymbol{v } } } ] & = & s_{hh}^{-1 } + s_{hh}^{-1}s_{vh}^\top k_i s_{vh } s_{hh}^{-1}.\end{aligned}\ ] ] since these solutions are equivalent to ( [ eq : dvhlast ] ) and ( [ eq : dhhlast ] ) , respectively , the e - step is shown to be equivalent to the -step in this case .refer to for general discussion of the equivalence between em and algorithms .in this section , we perform unsupervised classification experiments for bacteria based on two marker sequences : 16s and gyrb .basically we would like to identify the genus of a bacterium by means of extracted entities from the cell .it is known that several specific proteins and rnas can be used for genus identification . among them , we especially focus on 16s rrna and gyrase subunit b ( gyrb ) protein .16s rrna is an essential constituent in all living organisms , and the existence of many conserved regions in the rrna genes allows the alignment of their sequences derived from distantly related organisms , while their variable regions are useful for the distinction of closely related organisms .gyrb is a type ii dna topoisomerase which is an enzyme that controls and modifies the topological states of dna supercoils .this protein is known to be well preserved over evolutional history among bacterial organisms thus is supposed to be a better identifier than the traditional 16s rrna .notice that 16s is represented as a nucleotide sequence with 4 symbols , and gyrb is an amino acid sequence with 20 symbols . since gyrb has been found to be useful more recently than 16s , gyrb sequences are available only for a limited number of bacteria .thus , it is considered that gyrb is more `` expensive '' than 16s .our dataset has 52 bacteria of three genera ( _ corynebacterium _ : 10 , _ mycobacterium _ : 31 , _ rhodococcus _ : 11 ) , each of which has both 16s and gyrb sequences . for simplicity ,let us call these genera as class 1 - 3 , respectively . for 16s and gyrb , we computed the second order count kernel , which is the dot product of bimer counts .each kernel matrix is normalized such that the norm of each sample in the feature space becomes one .the kernel matrices of gyrb and 16s can be seen in fig .[ fig : kermat ] ( b ) and ( c ) , respectively . for reference, we show an ideal matrix as fig .[ fig : kermat](a ) , which indicates the true classes . 
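as an illustration of the kernels entering the experiment , the sketch below computes a second order count kernel ( the dot product of bimer counts , normalized so that every sample has unit norm in feature space ) for toy nucleotide sequences ; for gyrb one would use the 20 - letter amino acid alphabet instead , and the example sequences are of course not real 16s fragments .

```python
import numpy as np
from itertools import product

def bimer_counts(seq, alphabet="ACGT"):
    """Vector of counts of all length-2 substrings (bimers) of a sequence."""
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=2))}
    v = np.zeros(len(index))
    for a, b in zip(seq, seq[1:]):
        key = a + b
        if key in index:             # skip characters outside the alphabet
            v[index[key]] += 1
    return v

def second_order_count_kernel(seqs, alphabet="ACGT"):
    """Gram matrix of dot products of bimer counts, normalized so that
    every sample has unit norm in feature space."""
    x = np.array([bimer_counts(s, alphabet) for s in seqs])
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

if __name__ == "__main__":
    seqs = ["ACGTACGTGG", "ACGTTTGGCA", "GGGGCCCCAA"]   # toy stand-ins for 16s fragments
    k = second_order_count_kernel(seqs)
    print(np.round(k, 3))            # unit diagonal, similarities off-diagonal
```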
in our senario , for a considerble number of bacteria , gyrb sequences are not available as in fig .[ fig : kermat](d ). we will complete the missing entries by the _algorithm with the spectral variants of 16s matrix .when the _ em _ algorithm converges , we end up with two matrices : the _ completed matrix _ on data manifold ( fig .[ fig : kermat](e ) ) and the _ estimated matrix _ on model manifold ( fig .[ fig : kermat](f ) ) .these two matrices are in general not the same , because the two manifolds may not have intersection . in order to evaluate the quality of completed and estimated matrices ,k - means clustering is performed in the feature space of each kernel . in evaluating the partition, we use the adjusted rand index ( ari ) .let be the obtained clusters and be the ground truth clusters .let be the number of samples which belongs to both and .also let and be the number of samples in and , respectively .ari is defined as / { \begin{pmatrix}}n \\ 2 { \end{pmatrix } } } { \frac{1}{2}\left [ \sum_i { \begin{pmatrix}}n_{i . }\\ 2 { \end{pmatrix}}+ \sum_j { \begin{pmatrix}}n_{.j } \\ 2 { \end{pmatrix}}\right ] - \left [ \sum_i { \begin{pmatrix}}n_{i . }\\ 2 { \end{pmatrix}}\sum_j { \begin{pmatrix}}n_{.j } \\ 2 { \end{pmatrix}}\right ] / { \begin{pmatrix}}n \\ 2 { \end{pmatrix}}}.\ ] ] the attractive point of ari is that it can measure the difference of two partitions even when the number of clusters is different . when the two partitions are exactly the same , ari is 1 , and the expected value of ari over random partitions is 0 ( see for details ) .the clustering experiment is performed by randomly removing samples from gyrb data .the ratio of missing samples is changed from 0% to 90% .the aris of completed and estimated matrices averaged over 20 trials are shown in fig .[ fig : ari_completed ] and [ fig : ari_estimated ] , respectively . comparing the two matrices ,the estimated matrix performed significantly worse than the complete matrix .it is because the completed matrix maintains existing entries unchanged , and so the class information in gyrb matrix is well preserved .we especially focus on the comparison between the completed matrix and 16s matrix , because there is no point in performing the _ em _ algorithm when 16s matrix works better than the completed matrix . according to the plot , the ari of completed matrix was larger than 16s matrix up to 50% missing ratio .it implies that the matrix completion is meaningful even in quite hard situations 50% sample loss implies 75% loss in entries .this result encourages us ( and hopefully readers ) to apply the _ em _ algorithm to other data such as gene networks . [ cols="^,^ " , ]as we related the _ em _ algorithm to maximum likelihood inference in sec .[ sec : em ] , it is straightforward to generalize it to the maximum _ a posteriori _ ( map ) inference or more generally the bayes inference .for example , we are going to modify the _algorithm to obtain the map estimate .the map estimation amounts to minimizing the kl divergence penalized by a prior , where is a prior distribution for . since the additional term depends only on the model , only the -stepis changed so as to minimize the above objective function with respect to .let us give a simple example of map estimation in the spectral variants case .in bayesian inference , it is common to take a _conjugate prior _ , so that the posterior distribution remains as a member of the exponential family . 
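( before specializing the prior , we record the clustering metric used above in code . ) this is a standard implementation of the adjusted rand index ; the toy class sizes in the example mirror the 10/31/11 split of the bacteria data set .

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same samples."""
    a_ids, a = np.unique(labels_a, return_inverse=True)
    b_ids, b = np.unique(labels_b, return_inverse=True)
    n = len(a)
    table = np.zeros((len(a_ids), len(b_ids)), dtype=int)   # contingency table n_ij
    for i, j in zip(a, b):
        table[i, j] += 1
    sum_ij = sum(comb(nij, 2) for nij in table.ravel())
    sum_i = sum(comb(ni, 2) for ni in table.sum(axis=1))
    sum_j = sum(comb(nj, 2) for nj in table.sum(axis=0))
    expected = sum_i * sum_j / comb(n, 2)
    max_index = 0.5 * (sum_i + sum_j)
    return (sum_ij - expected) / (max_index - expected)

if __name__ == "__main__":
    truth = [0] * 10 + [1] * 31 + [2] * 11              # three genera, as in the data set
    shuffled = np.random.default_rng(6).permutation(truth)
    print(adjusted_rand_index(truth, truth))             # 1.0 for identical partitions
    print(adjusted_rand_index(truth, shuffled))          # near 0 for a random partition
```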
since the model parameter is related to a covariance matrix , we choose the gamma distribution , which works as a conjugate prior for the variance of gaussian distribution .the prior distribution is defined independently for each as where and denote hyperparameters , by which the mean and the variance are specified by and .the step for map estimation is to minimize which leads to the equation in the spectral variants case , the left hand side is reduced to , thus we obtain the map solution in a closed form as this paper , we introduced the information geometry in the space of kernel matrices , and applied the _ em _ algorithm in matrix approximation . the main difference to other euclidean methods is that we use the kl divergence .in general , we can not determine which distance is better , because it is highly data dependent .however our method has a great utility , because it can be implemented only with algebraic computation and we do not need any specialized optimizer such as semidefinite programmming unlike .one of our contribution is that we related matrix approximation to statistical inference in sec .[ sec : em ] .thus , in future works , it would be interesting to involve advanced methods in statistical inference , such as generalized em and variational bayes .also we are looking forward to apply our method to diverse kinds of real data which are not limited to bioinformatics .in this appendix , we discuss the solvability of the -step . the left hand side of ( [ eq : mstep ] ) is the -coordinate of the submanifold , while denote the -coordinate of .the -coordinate and -coordinate are connected by the legendre transform . in the mother manifold , the legendre transform is easily obtained as the inverse of the matrix . in the submanifold of , however , it is difficult to obtain the legendre transform in general .the difficulty is caused by the difference of geodesics defined in and .when the geodesic defined by a coordinate system of a submanifold coincides the geodesic defined by the corresponding global coordinate system of , the submanifold is called _ autoparallel_. in our case , is autoparallel for the -coordinate , but it is not always autoparallel for the -coordinate . when the submanifold is autoparallel for the both coordinate systems , the submanifold is called doubly autoparallel . let us consider when a submanifold becomes doubly autoparallel . to begin with , let us define the product between two symmetric matrices , the algebra equipped with the usual matrix sum and the product ( [ eq : prod ] ) is called the jordan algebra of the vector space of _ sym_ .the following theorem provides the necessary and sufficient condition for doubly autoparallel submanifold .assume the identity matrix is an element of the submanifold , then is doubly autoparallel if and only if the tangent space of is a jordan subalgebra of _ sym_ .when a submanifold is determined as ( [ eq : generalm ] ) , is doubly autoparallel if the following holds for all : has shown that , if and only if is doubly autoparallel , the -projection can be solved analytically , that is , the optimal solution is obtained by one newton step .for example , in the spectral variants case , and thus the -projection is obtained analytically in this case .the authors gratefully acknowledge that the bacterial _ gyrb _ amino acid sequences are offered by courtesy of identification and classification of bacteria ( icb ) database team of marine biotechnology institute , kamaishi , japan .the authors would like to thank t. 
kin , y. nishimori , t. tsuchiya and j .- p .vert for fruitful discussions .h. attias .inferring parameters and structure of latent variable models by variational bayes . in _ uncertainty in artificial intelligence : proceedings of the fifteenth conference ( uai-1999 ) _ , pages 2130 , san francisco , ca , 1999 .morgan kaufmann publishers .n. cristianini , j. shawe - taylor , j. kandola , and a. elisseeff . on kernel - target alignment . in t.g .dietterich , s. becker , and z. ghahramani , editors , _ advances in neural information processing systems 14_. mit press , 2002 .s. ikeda , s. amari , and h. nakahara .convergence of the wake - sleep algorithm . in m.s .kearns , s.a .solla , and d.a .cohn , editors , _ advances in neural information processing systems 11 _ , pages 239245 . mit press , 1999 .h. kasai , a. bairoch , k. watanabe , k. isono , s. harayama , e. gasteiger , and s. yamamoto .construction of the gyrb database for the identification and classification of bacteria . in _genome informatics 1998 _ , pages 1321 .universal academic press , 1998 .a. ohara .information geometric analysis of an interior point method for semidefinite programming . in o.e .barndorff - nielsen and e.b .vedel jensen , editors , _ geometry in present day science _ , pages 4974 .world scientific , 1999 .s. yamamoto , h. kasai , d.l .arnold , r.w .jackson , a. vivian , and s. harayama .phylogeny of the genus pseudomonas : intrageneric structure reconstructed from the nucleotide sequences of gyrb and rpod genes . , 146:0 23852394 , 2000 .
|
in biological data , it is often the case that observed data are available only for a subset of samples . when a kernel matrix is derived from such data , we have to leave the entries for unavailable samples as missing . in this paper , we make use of a parametric model of kernel matrices , and estimate missing entries by fitting the model to existing entries . the parametric model is created as a set of spectral variants of a complete kernel matrix derived from another information source . for model fitting , we adopt the _ em _ algorithm based on the information geometry of positive definite matrices . we will report promising results on bacteria clustering experiments using two marker sequences : 16s and gyrb .
|
one of the unresolved problems in cloud physics is drizzle formation in a warm ( ice free ) clouds .early modelling studies using lagrangian parcel models ( ) , ( ) demonstrated that condensational growth leads to a very narrow droplet spectrum , contrary to the observations , where the droplet spectrum inside the cloud was relatively broad e.g. ( ) .a narrow droplet spectrum makes collision between droplets inefficient and as a result it takes a long time to switch from a condensational to a coagulational droplet growths .observations show , that development of precipitation in the clouds may be rapid e.g. ( ) which could nt be explained by the modelling results .several mechanisms to explain this discrepancy between numerical model and observations were proposed and were reviewed by ( ) .undoubtedly representation of the cloud formation process by a parcel model is an approximation .nevertheless attempts were made over the time to explain the width of the cloud droplet spectrum in clouds using this approach . ( ) showed that a much broader droplet spectrum can be obtained when instead of one vertical velocity , a velocity distribution is used .another approach was utilized by ( ) and ( ) , who used a velocity field from a bulk model to derive trajectories for the parcel models .for both these approaches droplet spectra broader than predicted by a single parcel were reported , however , in these approaches the microphysics is separated from dynamics and thermodynamics of the eulerian model .the use of the bin models ( ) , ( ) , ( ) , ( ) , ( ) , ( ) , ( ) , ( ) , ( ) , where droplet spectrum is represented as a continuous function , made the problem of the transition from condensational to coagulational disappear .even very high resolution ( in bin sizes ) parcel models with bin microphysics are capable of producing precipitation ( ) , ( ) independent of whether turbulent enhancement is or is not taken into account .it s not clear , however , whether the rain forms in these models due to physical processes or numerical errors associated with the numerical solution of the condensational growth and collision or the physics itself , since detailed comparisons between eulerian ( bin ) and lagrangian models were never to our knowledge reported . in the recent yearsa new approach to microphysics formulated in lagrangian framework was proposed for both warm rain ( , ) , ( ) , ( ) and ice clouds ( ) . in thisapproach dynamics and thermodynamics are represented in the traditional eulerian framework , whilst microphysics is represented in a lagrangian framework with two way interactions between eulerian and lagrangian parts .lagrangian microphysics tracks lagrangian parcels ( sometimes referred as super - droplets ( ) ) , each representing a number of real aerosol , having the same chemical and physical properties .depending on the conditions determined by the eulerian model , water can condense on or evaporate from the aerosol surface .resulting forces , together with a drag force are return to the eulerian model .transport of physical properties by lagrangian parcels overcomes many problems present in eulerian models .the lagrangian representation of microphysics is diffusion free ; each parcel can be treated individually , which makes representation of the sub - grid scale variability easier ; the edge of the cloud is resolved without the need of the use of a special techniques ( for instance vof discussed in ( ) , ( ) ) . 
at the same time representation of the field with a limited number of parcelsmay lead to random fluctuations in derived fields ( for instance concentration of droplets and aerosol cloud water ) ( ) , ( ) ; which in an extreme cases may lead to parcel free computational grids .this article focuses on the representation of the turbulence in a numerical model with a lagrangian representation of microphysics and its effect on drizzle formation .the effect of air turbulence on drizzle formation is a complex and not fully understood problem in cloud physics .turbulence can affect directly relative velocities of the colliding droplets , leading to a coalescence of droplets which would not collide in a laminar flow .it can also indirectly influence drizzle formation , by modification of the environment in which droplets grow / evaporate .a broad review of the effect of turbulence on clouds and its importance were discussed by : ( ) , ( ) , ( ) , ( ) and ( ) .the effect of air turbulent velocity fluctuations on droplet motion in bin models may be represented as an enhancement of collision efficiencies derived from a high resolution turbulence simulations ( ) , ( ) , ( ) .this method has been used recently by ( ) , ( ) .also , a recently reported model with lagrangian microphysics ( ) uses this approach .because in the lagrangian microphysics the parcel velocity is a predicted variable , it can be used with the parameterization of the sub - grid scale velocity fluctuations to determine the relative velocity of the colliding parcels .this article reports an application of a two possible representations ( parameterizations ) of turbulence for a model with microphysics formulated in a lagrangian framework , one - random walk model , and the other interpolation of the velocity to parcel location and investigates its effect on a droplet spectrum .the results from these models are compared with the results from the model where parcel terminal velocity is used when calculating probability of droplet collisions .when for a parcel predicted by the model velocity is used to calculate probability of droplet collisions it is assumed that droplet turbulent transport and interaction occur in the same scales , because the same velocity is used for both .assumption that colliding droplets have terminal velocity corresponds to the case , where turbulent transport and interaction ( collision - coalescence ) between droplets happen in a different scales .this gives a lower and an upper limit of the effect of the turbulence on droplet collisions , because in former case it is consistent with the turbulent transport model and in latter is neglected .the next section describes numerical model and representation of turbulence .results are discussed in section 3 , and conclusion are in the last section , 4 .in this article the eulag model is used as a driving model for lagrangiam microphysics .eulag is an eulerian / semi - lagrangian solver ( ( ) , ( ) ) , with the eulerian version used in the simulations reported in this article . model equations in an anelastic approximation can be written as : where is density , diffusion coefficient , - any dependent variable ( u , w , , ) with its associated eulerian forcing - , describes forces from a lagrangian model and are defined below . 
is derived from a prognostic tke equation ( ) , see details about implementation in ( ) .lagrangian microphysics tracks lagrangian parcels , each representing a group of real aerosol particles .each parcel can be characterized by the same velocity and size ( both dry aerosol radius and radius of the droplet if water condenses on the parcel ) , and occupy approximately the same place in a physical space .lagrangian model equations describing evolution in time of the droplet radius ( ) , velocity ( ) and location ( ) for a parcel can be written as : where -droplet radius , - supersaturation at parcel location , - equilibrium supersaturation for given aerosol size and temperature ( see ( ) for detail description of other symbols ) . the - deterministic air velocity and - turbulent component determined from the sub - grid scale model ; is gravity .equation [ r_grow ] is solved using vode solver ( ( ) ) , equation [ u_eq ] is solved using the backward euler method , and equation [ x_eq ] is solved using the forward euler method .one of the established ways to represent turbulent component of the flow velocity is by a random process ( random walk ) having normal distribution and standard deviation , which is derived in the appendix .note that to determine different mixing length than to determine in eq .[ eq1 ] was used .although the mixing length of the order of model grid size is typically used in sub - grid scale model , in the lagrangian model because the location of each parcel within the grid is known , the mixing length based on and can be derived .this , simplified , description of the turbulence treats turbulence as a random walk process and represents the effect of air turbulent velocity on lagrangian parcels velocity through the variability in the eulerian model velocity field within a computational grid .a broad review of theory and application of lagrangian transport models can be found in ( ) .no attempt has been made to represent eddies present in the turbulent flow , but not resolved by the model ; it is assumed that at each time - step turbulent fluctuations of the air velocity are independent .it is achieved by generating a new random number for each lagrangian parcel on each time - step .this method follows an approach proposed by ( ) .the other method of representing variability of the velocity within the computational grid is to use interpolation ; and instead of using the same value of the velocity for all parcels within a given computational grid the velocity can be interpolated from neighboring grids to each parcel location .these two approaches , later referenced as a turbulence , provide a way to describe the variability of the air turbulent velocity inside computational grids in the spirit discussed by ( ) for a microscopic supersaturation . 
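a minimal sketch of one lagrangian time step in the spirit of the scheme just described : a backward euler update of the parcel velocity , with the turbulent component drawn anew for each parcel at each step , followed by a forward euler update of the position . the linear drag with a prescribed response time and the fixed value of the random - walk standard deviation are simplifying assumptions ; in the model both are derived from the eulerian fields ( the drag from the droplet size and the standard deviation from the diffusion coefficient , as in the appendix ) .

```python
import numpy as np

def parcel_step(x, v, u_air, sigma_turb, tau_p, dt, g=9.81, rng=None):
    """Advance one lagrangian parcel by one time step.

    v relaxes toward the local air velocity plus a random-walk turbulent
    component with standard deviation sigma_turb, using a backward-Euler
    step; x is advanced with forward Euler.  The linear drag with
    response time tau_p is a simplifying assumption.
    """
    rng = rng or np.random.default_rng()
    u_turb = sigma_turb * rng.normal(size=2)        # new draw every step, per parcel
    u_tot = u_air + u_turb
    gravity = np.array([0.0, -g])
    # backward Euler for  dv/dt = (u_tot - v)/tau_p + gravity
    v_new = (v + dt * (u_tot / tau_p + gravity)) / (1.0 + dt / tau_p)
    x_new = x + dt * v_new                          # forward Euler for position
    return x_new, v_new

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    x = np.array([0.0, 1000.0]);  v = np.zeros(2)
    for _ in range(400):
        x, v = parcel_step(x, v, u_air=np.array([1.0, 0.5]),
                           sigma_turb=0.2, tau_p=0.01, dt=0.25, rng=rng)
    # with a short response time the parcel settles near u_air plus its
    # (small) terminal velocity, jittered by the turbulent component
    print(np.round(v, 3))
```

setting ` sigma_turb ` to zero and supplying ` u_air ` interpolated to the parcel position recovers the interpolation - based variant discussed above .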
details of representation of the coalescence process in lagrangian microphysics is described in detail in ( ) and ( ) .collisions of all lagrangian parcels within the collision grid ( this does not have to be the same as eulerian grid , in the simulations reported in this article collision grid is a quarter of a computational grid ) , with sizes larger than 3 m and having water on the surface are considered , and ( ) analytical expression for gravitational collision efficiency is used .each collision event , based on the size of colliding droplets and aerosol size inside the droplets is assigned to one of the pre - defined microphysical grids spanning both aerosol sizes and droplet sizes .for each microphysical grid the mass of the aerosol , the mass of the water and the number of new droplets is calculated .based on this information new parcels are created for each microphysical grid for which the number of physical droplets is larger than specified threshold level - 156.25 in simulations discussed in this article ( which corresponds to resolving 1 droplet / m ) .newly created parcels for collision grids at the edge of the cloud are placed next to randomly chosen parcels within this grid . for grids within the cloudthey are randomly placed within the eulerian computational grid .additionally the algorithm assures also that existing parcels represent a larger than threshold level number of physical particles .should the collision lead to a smaller number , the probability of collision , with all other parcels for this particular one is reduced .this representation of coalescence process allows to process not only droplet sizes , but also aerosol sizes during droplet coalescence .it is assumed that newly created lagrangian parcels move with terminal velocity .drag force and tendencies for a temperature and water vapour mixing ratio are calculated in lagrangian model and used in an eulerian part : where indexes , , index numerical model computational grids , is the number of real particles parcel represents .a and are temperature and potential temperature profiles .treatment of these forces is similar to the treatment of the sub - grid scale tendencies as discussed by ( ) .a finite - difference approximation to eq .[ eq1 ] can be written as ( ( ) ) : where , are forces associated with pressure gradient , absorbers , and buoyancy ( ( ) , ( ) ) . - denotes advection , which is calculated using mpdata scheme ( ( ) , ( ) . in this article a 2d idealized setup is used to investigate the effect of air turbulent velocity on drizzle formation .initially , an atmosphere with a = 1.3 was specified . below 2 kma relative humidity was defined as 85% , dropping to 75% above this level .the model domain covers 3.2 km 5.0 km , resolved with 25 m resolution in each direction .three - modal , log - normal aerosol distribution , corresponding to continental air , have been specified for the whole domain using values from ( ) : = 0.008 m , =1.6 , =1000 , = 0.034 m , =2.1 , = 800 , = 0.46 m , =2.2 , =0.72 . an initial aerosol distribution in each computational grid was represented by 100 parcels .the aerosol spectrum used in the simulations is shown in figure [ fig1 ] . 
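the sampling that produces this spectrum , and the grid - to - grid variability discussed next , can be sketched as follows ; the mode parameters are those quoted above , while the equal multiplicity assigned to every parcel is a simplifying assumption .

```python
import numpy as np

# continental aerosol modes quoted in the text:
# (geometric mean radius [micron], geometric std dev, number conc [cm^-3])
MODES = [(0.008, 1.6, 1000.0),
         (0.034, 2.1, 800.0),
         (0.46,  2.2, 0.72)]

def sample_parcels(n_parcels, modes=MODES, rng=None):
    """Represent the tri-modal log-normal aerosol spectrum of one grid box
    with a finite number of lagrangian parcels.  Each parcel gets a dry
    radius and a multiplicity (number of real particles it stands for);
    equal multiplicities are an assumption made for simplicity."""
    rng = rng or np.random.default_rng()
    total = sum(n for _, _, n in modes)
    weights = np.array([n for _, _, n in modes]) / total
    which = rng.choice(len(modes), size=n_parcels, p=weights)
    radii = np.array([rng.lognormal(mean=np.log(modes[m][0]),
                                    sigma=np.log(modes[m][1])) for m in which])
    multiplicity = np.full(n_parcels, total / n_parcels)   # particles per cm^3 per parcel
    return radii, multiplicity

if __name__ == "__main__":
    r, w = sample_parcels(100, rng=np.random.default_rng(8))
    # the sampled spectrum fluctuates from grid box to grid box, as in fig. [fig1]
    print(f"median dry radius: {np.median(r):.4f} micron, "
          f"total number: {w.sum():.1f} cm^-3")
```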
aside from a spectrum averaged over the whole domain , the standard deviation of the values for each bin is also plotted .variability in each bin is a result of a random sampling of the initial distribution with finite number of parcels .although on average the spectrum agrees with an analytical distribution , sampling with a limited number of parcels leads to a variability of the distribution for a different grids and as a result it affects droplet number and size in each model grid during the condensational growth .the model was forced by the surface temperature source prescribed as : with =0.15 k / s and ( , ) set to ( 0,400 ) .simulations have been run for 1080s , with a time - step 0.25s .microphysical grid in aerosol space ( in ) was specified as : , with 30 bins . in radius space 28bins were used , and radius for a bin was specified as : , with r(1)=1 and =2 .the following model setups are discussed in details in this article : - reference simulation .no representation of turbulence in eq .[ u_eq ] ( v=0 ) .deterministic flow velocity u at parcel location is determined by interpolating velocity from 4 ( 8 in 3d ) nearest eulerian computational grids to a parcel location .the vertical component of the parcel velocity is used to evaluate collision kernel . - as far as parcel movement is concerned it is assumed that parcel velocity is equal to the air velocity interpolated to a parcel location , that is eq .[ u_eq ] is not solved .the terminal velocity of the parcel is added to the vertical velocity .the parcel s terminal velocity has been determined from the expression given by ( ) . to simplify calculations instead of the terminal velocity for a parcel size , terminal velocity for the center of the collision bin to which the parcel is assigned is used .when the collision kernel between parcels is calculated it is assumed that the parcel velocity is equal only to a terminal velocity .this setup mimics bin model as far as collision - coalescence process is concerned and assumes that turbulent transport and collision happen in different scales ( are independent ) . - for a given parcel deterministic flow velocity has 2 components deterministic : , where u(*x * ) is eulerian velocity for the computational grid to which parcel belongs , - diffusion coefficient for a lagrangian model , and - gas density ; and turbulent component v= with the derivation of of this model presented in the appendix . - similar to , but in this case in additional to the difference in vertical velocity also difference in horizontal velocity is taken into account when evaluating the collision kernel .figure [ fig2]a-[fig2]c , shows snapshots of a cloud water mixing ratio ( ) for the model solution for times 8 min .( fig . [ fig2]a ) , 15 min ( fig . [ fig2]b ) and 17 min ( fig .[ fig2]c ) for the run .cloud evolution exhibits features typically found in a 2d cumulus developing in a stable stratified atmosphere and reported already by : ( ) , ( ) , ( ) . after reaching condensation level water vapor condenses and forms the cloud; air continues moving upward due to the temperature excess within the thermal . during the ascent in a stably stratified air cloud mixes with the environmental air , forming entrainment eddies . the cloud water mixing ratio reaches values as large as 12 g / m during the simulation , and at the end of the simulation the maximum vertical velocity is around 18 m / s . 
the cloud water mixing ratio shows relatively large variability in space , being a result of the fluctuations of the parcel number within the computational grid and also because of the variability in the randomly generated aerosol spectrum for each model grid .because in the numerical model the coalescence process is present , coalescence of the droplets in time leads to a drizzle formation .figure [ fig2]d and [ fig2]e show evolution in time of the q - the rain water mixing ratio for a case . a 50 droplet radius is defined to be a border between a cloud and a rain droplet sizes .large droplets reside near the edge of the cloud , and at the end of the simulation exceeds 3 g / kg .this behavior is typical for the three setups : , and . for a case negligible q forms .the edge of the cloud is a place where the difference in the vertical velocity within the collision grid is the largest ( either because of the gradient of velocity between the interior and exterior of the cloud for or , or because tke , used to derive stochastic velocity perturbations is largest ) .additionally at the edge of the cloud the droplet spectrum can be broader than in the core of the cloud , which also enhances coalescence between droplets .the width of the droplet spectrum increases near the edge of the cloud because : a ) the gradient of the supersaturation is largest there and as a result the interpolation of the thermodynamical parameters to a parcel location within the same computational grid different droplet populations may grow and evaporate at the same time depending on the distance from the edge of the cloud ; b ) vertical velocity changes sign near the cloud edge ( in figure [ fig2 ] the solid line shows a contour of the value 0 ) , as a result droplets evaporate in a down - drafts near the cloud edge ; and c ) because largest droplets , formed in the center of the up - draft , after reaching cloud top start moving along cloud edge .evolution of the twp ( total water path ) and rwp ( rain water path ) in time for cases discussed are shown in figure [ fig3 ] . during the early stage of the cloud development twpis very similar for all cases . in time , however , differences appear as a result of the interaction of the cloud and flow dynamics .the largest amount of water is for and and smallest for .the decrease in a twp for a case can be associated with a random velocity of the air within the grid . as a result evaporation at the edge of the cloudmay be larger for this case , because droplet trajectories can be different than for other three cases when velocity is interpolated to a parcel location .much larger differences are observed for the rwp .the largest amount of water is for the case , the simulation where in coalescence calculation full velocity is taken into account .compared to simulation , has around 15% more rwp . for the simulation negligible amount of rwp has formed during the simulation time .but undoubtedly the coalescence process is active for this case also and droplets as large as 40 have been formed in this simulation .these droplets are much smaller than for the three other simulations , where sizes up to 370 are present. 
vertical profiles of the mean radius ( ) , standard deviation of cloud droplet distribution ( ) , cloud water mixing ratio ( ) and number of cloud droplets are shown in figures [ fig4 ] for a case ( with very similar statistics observed for a and simulations ) and for a case in figure [ fig5 ] for a 3 times : 6 , 8 , and 15 minutes from the beginning of the simulation .these 3 times show cloud characteristics at the very early stage of the cloud development ( the time when becomes larger than 0 ) , through the initial formation of the drizzle ( space distribution after 8 min .is shown in figures [ fig2]b and [ fig2]d ) , to the moment when significant drizzle develops ( plots [ fig2]c and [ fig2]e ) .these diagnostics were calculated for each model level by taking into account only model grids , where was larger than 10 g / kg . figures [ fig4 ] and [ fig5 ] show a very similar development of the cloud for both cases . initially a small amount of water condenses , but at the same time the mean radius is around 5 and standard deviation around 1 .around 1/3 of the total aerosol concentration activates at this stage . in time , the cloud thickens and after 8 minutes the cloud base moves upward .the mean droplet radius increases with height and reaches 12 - 13 near the cloud top after 15 minutes at that time the number of cloud droplets decreases with height for a case from 500 cm to 250 cm . for the simulation ,the cloud droplet concentration does not change significantly with height and oscillates around 500 cm .another difference between these 2 cases is in the standard deviation of the cloud droplet distribution after 15 min . above 2.5 km , and forthe case standard deviation is around 1.5 , except at the cloud top , where it reaches 4.2 . for caseit is of the order of 3.5 , with the value near the cloud top of . at earlier times and below 2.5 kmstandard deviation is very similar for both cases .because for each parcel information about droplet size and aerosol size is available , the relation between aerosol sizes and droplet sizes can be derived .figure [ fig6 ] shows this relation mapped on a eulerian microphysical grid after 18 minutes .the relation between aerosol size and droplet size is complex and for a given aerosol size there is a broad range of droplet sizes formed on it . because initially aerosol sizes were limited to 1 , sizes larger than that were created in a coalescence process .much larger sizes of both aerosol and droplet are present for a case , where the coalescence is more intensive . from observations or spectrum resolving ( bin ) modelstypically information about either aerosol distribution or droplet distribution is derived .this information for microphysics in a lagrangian framework is obtained by integrating relation shown in figure [ fig6 ] along one of the dimensions . averaged over the whole cloud the droplet spectrum is shown in figure [ fig7 ] .after 9 minutes - figure [ fig7]a the droplet spectrum is very similar for all cases . in time , similar to other cloud properties the differences between and , and emerge ( [ fig7]b , [ fig7]c , [ fig7]d ) . formation of the large droplets for the simulation is much slower than for the remaining three cases . in an early stage of cloud development the largest droplets form for the case . at the end of the simulation , the spectra for , and cases are very similar .evolution of an aerosol spectra averaged over the whole cloud is presented in figure [ fig8 ] . 
because coalescence is processing not only droplet sizes , but also aerosol sizes , in time the aerosol spectrum also changes .there are indications , especially for later times , that aerosol is processed during droplet coalescence , but the differences between ( where aerosol processing is negligible ) and other three simulations are small .fastest aerosol processing is for a simulations , with a smaller , for a and .much more time is needed for the coalescence to process aerosol and form large / giant aerosol than the length of an idealized simulation discussed in this article .there is , however , evidence that this aerosol processing ( eg . ( ) ) may be an important source of large aerosol , because within the minutes maximum aerosol size has doubled , reaching 2.5 radius ( see fig . [fig6]a ) compared to 1 initially .the increase of the concentration of the large aerosol is at the expense of the aerosol having sizes between 0.01 and 0.2 .the collision kernel , describing the probability of the coalescence of two colliding droplets can be written as : where r,r is the radius of a droplet / with the corresponding velocity v/v , e is the gravitational collision efficiency . with the equation of motion solved for each parcel in the model , diagnostics of the relative velocity of colliding droplets can be determined .equation [ r_call ] can be rewritten in the following way : with the w/w being the terminal velocity for a parcel .an enhancement of the gravitational collision efficiency due to the turbulent velocity fluctuations can be defined as : , where / is the velocity of colliding parcels , either vertical only or vertical and horizontal for a , and - averaging operator .although it is possible to calculate this enhancement for each pair of droplets it is easier to use a bin structure the same as was used to map collisions between parcels , and in such a case / represent a terminal velocity for the center of the bin rather than for an individual parcel , and averaging is done for each bin .figure [ fig9 ] shows a enhancement of the gravitational collision efficiency .four panels show for a different bin sizes : 2.6 ( microphysical bin 4 ) - figure [ fig9]a , 9.1 ( microphysical bin 10 ) - figure [ fig9]b , 28.8 ( microphysical bin 16 ) - figure [ fig9]c , 94.3 ( microphysical bin 22 ) - figure [ fig9]d .the largest enhancement is for small bins with a value reaching 125 for a simulation .simulation has value 120 and ref almost 105 .expectedly , the simulation has a constant value of - 1 independent on size , because the vertical velocity in this case is a droplet terminal velocity . with the increasing dropletssizes the departures of enhancement factor from 1 is smaller , and for bin 22 figure [ fig10]d it approaches 2 . for a given bin, the largest is found for adjacent bins. for bins separated by a large distance is much smaller .the distribution of the is non - symmetric , with larger values of found for the sizes smaller than bin under consideration .the results presented in figure [ fig9 ] show that even small differences in air velocity can affect velocity statistics for droplets having similar sizes and as a result moving with a similar velocities . 
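a compact sketch of the kernel and enhancement diagnostics just defined is given below . the stokes - law terminal velocity and the unit collision efficiency are stand - ins for the expressions the model takes from the literature , and the 0.2 m / s turbulent standard deviation is taken from the range diagnosed in the text ; the example reproduces the qualitative point that for small , similar droplets the enhancement is large .

```python
import numpy as np

def terminal_velocity(radius_m, rho_w=1000.0, rho_a=1.2, mu=1.8e-5, g=9.81):
    """Stokes-regime terminal velocity; a stand-in for the expression used
    in the model, reasonable only for the smallest droplet sizes."""
    return 2.0 * (rho_w - rho_a) * g * radius_m ** 2 / (9.0 * mu)

def collision_kernel(r1, r2, v1, v2, efficiency=1.0):
    """Geometric collision kernel  K = pi (r1+r2)^2 E |v1 - v2|.
    efficiency = 1 is a placeholder for the gravitational collision
    efficiency taken from the literature."""
    return np.pi * (r1 + r2) ** 2 * efficiency * np.abs(v1 - v2)

def enhancement(r1, r2, dv_samples):
    """Ratio of the mean sampled relative speed of two parcels to the
    difference of their terminal velocities (the quantity plotted per
    bin in fig. [fig9])."""
    dw = abs(terminal_velocity(r1) - terminal_velocity(r2))
    return np.mean(np.abs(dv_samples)) / dw

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    r1, r2 = 5e-6, 7e-6                       # two small droplets [m]
    w1, w2 = terminal_velocity(r1), terminal_velocity(r2)
    # relative speeds when each parcel also feels an independent
    # random-walk turbulent velocity of standard deviation 0.2 m/s
    dv = (w1 + 0.2 * rng.normal(size=10_000)) - (w2 + 0.2 * rng.normal(size=10_000))
    print(f"terminal-velocity difference: {abs(w1 - w2):.2e} m/s")
    print(f"turbulent enhancement of |dv|: {enhancement(r1, r2, dv):.1f}")
```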
for droplets having large differences in sizes , the turbulent velocity would have to be significantly larger than the terminal velocity of the larger droplets to influence , which is not the case in the simulations discussed in this article . the large values of the enhancement of the gravitational collision efficiency for small droplets are associated with the fact that those droplets adjust quickly to the environmental flow , and as a result the velocity of small droplets is very similar to the velocity of the air . the much smaller values of the enhancement of the gravitational collision efficiency for large droplets arise because these droplets need more time to adjust to the environmental flow and their terminal velocity is of the same order as the air velocity fluctuations . diagnostics show that the maximum turbulent velocity for the case is around 3 m / s and is of the same order as the terminal velocity of the 650 droplet - .5 m / s . note , however , that 3 m / s represents the largest value , and the standard deviation of the turbulent velocity for the whole cloud is 0.1 - 0.25 m / s . an example of the velocity fluctuations for runs and is shown in figure [ fig10 ] . there is a significant difference between the statistics for these two cases . for the case the distributions of and are almost identical . for the case anisotropy of the velocity fluctuations is observed , and the tails of the distribution are much broader than for the case , with the statistics for the case lying in between those for and . in this article two possible representations of the air turbulent velocity in a numerical model with a lagrangian representation of microphysics are discussed . the air turbulent velocity in the model is represented either as a random walk process , with the standard deviation of the velocity fluctuations derived from the diffusion coefficient predicted by the numerical model ( eulerian part ) , or as an interpolation of the air velocity to the parcel location . the random walk model is derived for an anelastic approximation and it is shown that an additional , deterministic , term needs to be included in the random walk model for consistency with the eulerian model . it is argued that the mixing length used in a lagrangian model should be smaller than the one used to derive the diffusion coefficient for an eulerian model , and a mixing length scale based on tke and the model time - step for use in lagrangian microphysics is introduced . it is demonstrated that interpolation of the velocity to the parcel location and use of this velocity in the collision kernel has a similar effect to representing the turbulence as a random walk , and can be treated as an alternative to a random walk model . additionally , for turbulence treated as a random walk , because all parcels within a particular computational grid have the same value of the deterministic velocity , fluctuations in the parcel number in a computational grid may be larger than for the case when the velocity is interpolated to the parcel location . unlike the random walk model , interpolation takes into account possible anisotropy in the flow velocity , also observed in laboratory studies ( ) , ( ) but on a much smaller scale . if the sub - grid scale transport and droplet collisions occur on the same scale , the air turbulent velocity can significantly enhance velocity differences between droplets , especially those having small sizes , when the turbulent velocity is much larger than the terminal velocity of these droplets . in the cases discussed in this article an enhancement as large as 120 has been observed for the case when only the vertical
velocity is taken into account when calculating the relative velocity of colliding parcels . allowing for the differences in the horizontal velocity increases by % . the turbulent enhancement of the velocity of the colliding droplets obtained in this article is much larger than that obtained from dns simulations and used in les models by ( ) or ( ) . this difference is associated with the fact that in lagrangian microphysics the same velocity is used for the transport and when evaluating the collision kernel . the values obtained with air turbulent velocity fluctuations ( , ) provide an upper limit on the impact of the air velocity fluctuations on the gravitational collision efficiency , with the lower limit given by values from the bin simulation . an additional parameterization is needed to account for the scale separation between transport and droplet interactions . the velocity enhancement diagnosed from the model is non - symmetric , with larger values for sizes smaller than the size under consideration . the asymmetry , however , may be related to the microphysical grid , because when e is averaged within a bin the parcel velocity difference is normalized by the difference in the terminal velocity of the bins these parcels belong to . as a result of the averaging and normalization , e also depends on the number of bins used to represent the domain in radius space . figure [ fig11 ] shows e for the case together with an additional 2 simulations using the same setup , but one with 55 bins ( p=2 ) and the other with 108 bins ( p=2 ) . with the increasing number of bins e also increases , reaching the value of 900 when the smallest bin is under consideration . the e values approach 1 , independent of the resolution in radius space , when the differences in droplet sizes are large . turbulent velocity enhancement can significantly affect drizzle formation . the simulation , where the droplet vertical velocity has been set to the terminal velocity based on the bin to which the droplets belong , produces a negligible amount of rain water and droplet sizes much smaller than in the other simulations . note , however , that even in this case droplets do collide , forming larger ones .
for simulations including a representation of air turbulent velocity fluctuations , drizzle forms initially preferentially near the cloud edge , near the entrainment eddies or near the cloud top , and often in the areas where q is elevated . the edge of the cloud is a place where , due to entrainment and mixing , the droplet spectrum is broader than in the center of the cloud , and as a result coalescence between droplets is more efficient . additionally , because of the gradient of the velocity near the cloud edge , the differences in the relative velocity of parcels there are large , and this also enhances the probability of droplet coalescence . formation of the first drizzle near the cloud edge in a bin model with a representation of the effect of turbulence on the droplet collision rate has also been reported recently by ( ) . the cloud droplet spectrum is relatively broad ( of the order of ) from the onset of the cloud formation . this value is much larger than reported for parcel models in the past and large enough to trigger the coalescence process even for the case with a very high cloud droplet concentration . it is acknowledged , however , that when the standard deviation for each computational grid is considered , values smaller than 0.1 m are observed , indicating that very narrow ( comparable to a parcel model ) droplet distributions also form within computational grids . coalescence does process aerosol , but the time scale of this process is longer than the minutes of cloud evolution discussed here . processing by multiple clouds is needed for the boundary layer aerosol to change the shape of the aerosol distribution . consider the conservation equation for a scalar in the anelastic approximation : where is the concentration , the gas density , the gas velocity and the diffusion coefficient ( note that in general is a tensor ; however , here it is assumed that this tensor has only diagonal values not equal to 0 ; k= ) . this equation can be written as : \[ \frac{\partial ( \rho c ) }{\partial t } = - \nabla \cdot \left [ \left ( \rho \mathbf{u } + \nabla ( \rho k ) \right ) c \right ] + \triangle ( \rho k c ) . \] this equation corresponds to the following stochastic differential equation in the ito sense ( see for instance ( ) or ( ) for the case when is constant ) : where is a wiener process . integration of this equation gives : or , in numerical representation for the i - th direction , assuming that , and are constant during the time step : where is a random number having a normal distribution . it follows that , effectively , turbulent diffusion corresponds to a random walk process with a mean velocity having two components , the first being the velocity from the numerical model , the other accounting for the change in the density and diffusivity in a non - homogeneous medium : and a random component : to find in the numerical model an assumption must be made about the mixing length , . typically it is assumed to be of the order of the grid length . although in the eulerian model this assumption is justified , because mixing within each model grid must be completed within the time - step , in a lagrangian transport model , where the exact location of each parcel is known , this is not necessarily true .
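a minimal sketch of the corresponding numerical update for a single random - walk step ( one spatial direction , scalar diffusivity ) is given below : the drift contains the resolved velocity plus the deterministic correction ( 1 / density ) times the gradient of ( density times diffusivity ) discussed above , and the random displacement has standard deviation sqrt( 2 k dt ) . variable names are illustrative and do not correspond to the actual model code .

```python
import numpy as np

def random_walk_step(x, u, rho, k, d_rho_k_dx, dt, rng=np.random.default_rng()):
    # one ito random-walk step for parcel positions x (single direction):
    #   drift = resolved velocity + (1/rho) * d(rho*k)/dx   (deterministic part)
    #   noise ~ sqrt(2*k*dt) * N(0, 1)                      (random part)
    drift = u + d_rho_k_dx / rho
    noise = rng.standard_normal(np.shape(x))
    return x + drift * dt + np.sqrt(2.0 * k * dt) * noise
```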
for a lagrangian modela different length scale can be used , for instance from model time - step and turbulent kinetic energy ( tke ) , which is a prognostic variable in a sub - grid scale model .the following length scale can be defined : = , which for a 2d case discussed in this article is equal to = .it follows that this length scale is used only in a lagrangian model to calculate diffusion coefficient and next the standard deviation of the velocity fluctuations and mean deterministic velocity .ackerman , a. , toon , o. , hobbs , p. , 1995 . a model for particle microphysics , turbulent mixing , and radiative transfer in the stratocumulus - topped marine boundary layer and comparisons with measurements .52 , 12041236 .andrejczuk , m. , grabowski , w. , reisner , j. , gadian , a. , 2010 .cloud - aerosol interactions for boundary - layer stratocumulus in the lagrangian cloud model .j. geophys .115 , d22214 , doi:10.1029/2010jd014248 .andrejczuk , m. , reisner , j. m. , henson , b. f. , dubey , m. , jeffery , c. a. , 2008 .the potential impacts of pollution on a nondrizzling stratus deck : does aerosol number matter more than type ?113 , d19204 , doi:10.1029/2007jd009445 .devenish , b. j. , bartello , p. , brenguier , j. , collins , l. r. , grabowski , w. w. , ijzermans , r. h. a. , malinowski , s. p. , reeks , m. w. , vassilicos , j. c. , wang , l .- p . ,warhaft , z. , 2012 .droplet growth in warm turbulent clouds .quarterly journal of the royal meteorological society 138 ( 667 ) , 14011429 .goke , s. , ochs , h. t. , rauber , r. m. , 2007 .radar analysis of precipitation initiation in maritime versus continental clouds near the florida coast : inferences concerning the role of ccn and giant n uclei . j. atmos .64 , 36953707 .khain , a. p. , pokrovsky , a. , pinsky , m. , seifert , a. , phillips , v. , 2004 .simulation of effects of atmospheric aerosols on deep turbulent convective clouds using a spectral microphysics mixed - phase cumulus cloud model .part i : model description and possible applications .61 , 29632982 .klaassen , g. p. , clark , t. l. , 1985 .dynamics of the cloud - environment interface and entrainment in small cumuli : two - dimensional simulations in the absence of ambient shear .42 , 26212642 .korczyk , p. m. , kowalewski , t. a. , malinowski , s. p. , 2012 .turbulent mixing of clouds with the environment : small scale two phase evaporating flow investigated in a laboratory by particle image velocimetry .physica d : nonlinear phenomena 241 ( 3 ) , 288 296 , special issue on small scale turbulence .lasher - trapp , s. g. , cooper , w. a. , blyth , a. m. , 2005 .broadening of droplet size distributions from entrainment and mixing in a cumulus cloud .quarterly journal of the royal meteorological society 131 , 195220 .leroy , d. , wobrock , w. , flossmann , a. i. , 2009 .the role of boundary layer aerosol particles for the development of deep convective clouds : a high - resolution 3d model with detailed ( bin ) microphysics applied to crystal - face .91 , 6278 .malinowski , s. p. , andrejczuk , m. , grabowski , w. w. , korczyk , p. , kowalewski , t. a. , smolarkiewicz , p. k. , 2008 .laboratory and modeling studies of cloud clear air interfacial mixing : anisotropy of small - scale turbulence due to evaporative cooling .new journal of physics 10 ( 7 ) , 075020 .riechelmann , t. , noh , y. , raasch , s. , 2012 . a new method for large - eddy simulations of clouds with lagrangian droplets including the effects of turbulent collision. 
new journal of physics 14 ( 6 ) , 065008 .shima , s. , kusano , k. , kawano , a. , sugiyama , t. , kawahara , s. , 2009 .the super - droplet method for the numerical simulation of clouds and precipitation : a particle - based and probabilistic microphysics model coupled with a non - hydrostatic model .135 , 13071320 .solch , i. , karcher , b. , 2010 . a large - eddy model for cirrus clouds with explicit aerosol and ice microphysics and lagrangian ice particle tracking. quarterly journal of the royal meteorological society 136 ( 653 ) , 20742093 .http://dx.doi.org/10.1002/qj.689 spivakovskaya , d. , heemink , a. w. , deleersnijder , e. , 2007 .the backward to method for the lagrangian simulation of transport processes with large space variations of the diffusivity .ocean science 3 ( 4 ) , 525535 .twohy , c. h. , anderson , j. r. , toohey , d. w. , andrejczuk , m. , adams , a. , lytle , m. , george , r. c. , wood , r. , saide , p. , spak , s. , zuidema , p. , leon , d. , 2013 .impacts of aerosol particles on the microphysical and radiative properties of stratocumulus clouds over the southeast pacific ocean . atmospheric chemistry and physics 13 ( 5 ) , 25412562 .wang , l. , franklin , c. n. , ayala , o. , grabowski , w. w. , 2006 .probability distributions of angle of approach and relative velocity for colliding droplets in a turbulent flow .63 , 881900 .
|
this article discusses the potential impact of turbulent velocity fluctuations of the air on drizzle formation in cumulus clouds . two different representations of turbulent velocity fluctuations for microphysics formulated in a lagrangian framework are discussed - a random walk model and interpolation - and their effect on the microphysical properties of the cloud is investigated . turbulent velocity fluctuations significantly enhance velocity differences between colliding droplets , especially those having small sizes . as a result , drizzle forms faster in simulations including a representation of turbulence . both representations of turbulent velocity fluctuations , random walk and interpolation , have a similar effect on the droplet spectrum evolution , but interpolation of the velocity does account for a possible anisotropy in the air velocity . in all discussed simulations a relatively large standard deviation ( ) of the cloud droplet distribution is observed from the onset of cloud formation . because coalescence processes aerosol inside cloud droplets , detailed information about the aerosol is available . results from the numerical simulations show that changes in the aerosol spectrum due to aerosol processing during droplet coalescence are relatively small during the min . of cloud evolution simulated with the numerical model . drizzle forms initially near the cloud edge , either near the cloud top , where the mass of water is the largest , or near the entrainment eddies . turbulence , cloud - aerosol interactions , warm rain formation , lagrangian microphysics
|
the problem of distinguishing a chaotic from an ordered trajectory in a non - integrable hamiltonian system has been a topic of active investigation since the pioneering work of hnon and heiles ( 1964 ) .initially , when the study was restricted to 2-d systems , the work was done through surface of section plots .later , when systems with more than 2-d were considered , the method of choice was the calculation of the lyapunov characteristic numbers ( lcns ) ( benettin et al .1976 , froeschl 1984 ) .unfortunately both the above methods suffer from the same drawback , namely they are not able to distinguish easily an ordered from a `` sticky '' chaotic trajectory .various methods have been devised since then to address the above problem , namely the distinction of an ordered from a chaotic trajectory using a relatively short - time trajectory segment .these methods generally fall into two main classes : those that use frequency or correlation analysis of a time - series , constructed by the values of generalized co - ordinates ( or functions of them ) , and those that use the geodesic divergence of initially nearby trajectories . in the first classbelong the `` old '' method of the rotation number ( contopoulos 1966 ) , the frequency map analysis developed by laskar ( laskar et al .1992 , laskar 1993 ) and the power spectrum analysis of quasi - integrals developed by voyatzis & ichtiaroglou ( 1992 ) . in the second belong the probability density function analysis of stretching numbers developed by froeschl et al .( 1993 ) and voglis & contopoulos ( 1994 ) and the fast lyapunov indicators method developed by froeschl et al .each one of the above methods has its own advantages and weaknesses ; in particular some of them are more suitable to test _ large sets of trajectories _ rather than _ single ones _, some are more efficient for _ 2-d systems _ rather than for _ n()-d systems _ and some perform better for _ mappings _ rather than _flows_. in 1997 ,contopoulos & voglis introduced a new method for distinguishing chaotic from ordered trajectories , which does not belong to any of the above mentioned two classes but , instead , may be classified as `` mixed '' .this new method is based on the analysis of the probability density of _ helicity _ and _ twist angles_. voglis & efthymiopoulos ( 1998 ) and subsequently froeschl & lega ( 1998 ) showed that the twist angles method is very efficient in testing _phase space regions _ , at least in cases of 2-d systems , where the twist angles can be easily calculated .more recently voglis et al .( 1998 , 1999 ) proposed two new and very efficient methods , namely the method of `` dynamical spectral distance '' ( dsd ) , which is particularly suitable for the characterization of single trajectories in 4-d maps , and the method of `` rotational tori recognizer '' ( rotor ) which is very efficient for testing wide areas of the phase space of 2-d maps . 
in the present paper we introduce a new `` mixed '' method , which we show to be at least as sensitive as the other methods in the literature , applicable in a straightforward way to dynamical systems with more than two degrees of freedom , and equally efficient for single trajectories as well as large sets of them . the method consists in analyzing a time series constructed from the values of the geodesic deviation of nearby trajectories recorded at a properly selected frequency . it should be noted that a method based on a similar technique has been proposed by lohinger and froeschl ( 1993 ) . the paper is organized as follows . section 2 describes the basic features of the method , which , in section 3 , is tested upon three dynamical systems of different types . a comparison of the results of our method to those derived by various other methods is made in section 4 . section 5 treats the application of the proposed method to one of the most important problems of solar system dynamics , the motion of asteroids . finally , in section 6 we present our conclusions . an ordered trajectory of an n - d conservative dynamical system will lie on an invariant torus , i.e. an n - d manifold of the 2n - d phase space . any such trajectory is , in general , quasi - periodic and densely covers the invariant set . if a nearby trajectory is started at an infinitesimal distance from the previous one , this too would , in general , lie on an invariant torus , so that the time series should behave in a quasi - periodic manner . on the other hand , a chaotic trajectory visits different regions of phase space in a stochastic manner , and the corresponding time series should also be `` random '' . following the above considerations we calculate the _ power spectrum _ of the time series . we first calculate the discrete fourier transform of the time series multiplied by a _ window function _ . the power spectrum , , is then defined at the frequencies as , where and is the nyquist frequency defined as . the frequencies covered by the power spectrum are . in the present work we used the so - called `` hanning '' window . more details on the calculation of the power spectrum can be found in the book by press et al . ( 1992 ) . as a rule , `` mixed '' methods are expected to perform better in distinguishing between ordered and chaotic trajectories . the reason is that time series constructed from the geodesic divergence of nearby orbits contain all the various characteristic frequencies that locally affect the motion in the `` proper '' ratio , i.e.
the frequencies corresponding to the different directions ( degrees of freedom ) are properly weighted.in particular now , as far as our method is concerned , the power spectrum of divergences ( psod ) of an ordered trajectory is expected to posses only a few spikes " at specific frequencies .the number of harmonics however depends on the system under consideration as well as on the values of its `` controll '' parameters ( see below ) .in contrast , the psod of a chaotic trajectory should appear continuous , due to the random nature of the time series .however , the above considerations lie behind all methods based on time series analysis .what is really important , for the assessment of the new method with respect to the other ones appearing in the literature , is to evaluate ( a ) its independence from the number of degrees of freedom of the dynamical system , ( b ) its effectiveness with respect to the object of test ( single trajectories or distributions of initial conditions covering wide phase space regions ) ( c ) its sensitivity , i.e. the minimum length of the time series , necessary to distinguish a sticky chaotic trajectory from an ordered one and ( d ) its ability to produce a well - defined measure of chaos ( as is the lcn ). 0.1 cmwe proceed in the assessment of the method using three different dynamical systems , namely a 2-d mapping as well as a 2-d and a 3-d hamiltonian system . in each one we evaluated the nature of a considerable number of trajectories . in the following subsections we present only three or four trajectories per system , which we think that are typical examples of the three different classes of trajectories , i.e. ordered , clearly chaotic and sticky .the amplitudes of the psod , in all figures , are normalized so that the highest has the value one , while the frequency is given in cycles per time unit .we first test the method in the simple 2-d mapping where the stochasticity parameter is taken equal to 0.7 .we present here the results of three trajectories , an ordered one ( _ map1 _ ) starting at , , a stochastic one ( _ map2 _ ) starting at , and a sticky one ( _ map3 _ ) starting at , .the initial conditions of the third one place it very close to the boundary between the ordered region , surrounding the stable point , , and the chaotic sea .figure [ map_invar ] shows the consequents ( , ) of the `` sticky '' trajectory at various times ( iterations ) .as we can see , for up to 2048 iterations the trajectory behaves like an ordered one .around it starts to present some signs of irregularity and finally , after , the chaotic nature of the trajectory becomes evident .using our method on these three trajectories , with the length of the time series being n=1024 iterations and defining , we obtain the spectra shown in fig .[ map_psod ] .the upper left frame of fig .[ map_psod ] is the spectrum of the ordered trajectory .the spectrum consists of some basic frequencies while the `` noise '' is at a very low level . on the contrary, the spectrum of the chaotic trajectory , in the upper right frame , covers the whole frequency range with comparable amplitudes , i.e. a clearly continuous spectrum .looking at the spectrum of the `` sticky '' trajectory ( lower left frame ) , we see a pattern almost the same as that of the chaotic one .some high - amplitude spikes are evident but the continuum noise level is again very high for the whole frequency range , denoting the stochastic nature of the trajectory . 
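the construction of the psod can be sketched compactly as follows : two nearby trajectories are iterated , the norm of their separation is recorded ( and the separation is rescaled back to its initial size at every step ) , and the hanning - windowed fft power spectrum of the resulting time series is computed . since the specific mapping used above is not reproduced here , the standard map is used as a stand - in , and the parameter value and initial separation in the fragment below are illustrative only .

```python
import numpy as np

def standard_map(x, y, k=0.7):
    # one iteration of the standard map on the unit torus (stand-in only)
    y_new = (y + k * np.sin(2.0 * np.pi * x) / (2.0 * np.pi)) % 1.0
    x_new = (x + y_new) % 1.0
    return x_new, y_new

def psod(x0, y0, n=1024, delta0=1e-8, k=0.7):
    # record the divergence of two nearby trajectories (renormalized each
    # step) and return its hanning-windowed power spectrum, normalized so
    # that the highest peak has amplitude one
    xa, ya = x0, y0
    xb, yb = (x0 + delta0) % 1.0, y0
    xi = np.empty(n)
    for i in range(n):
        xa, ya = standard_map(xa, ya, k)
        xb, yb = standard_map(xb, yb, k)
        dx = (xb - xa + 0.5) % 1.0 - 0.5     # shortest separation on the torus
        dy = (yb - ya + 0.5) % 1.0 - 0.5
        d = np.hypot(dx, dy)
        xi[i] = d / delta0
        xb = (xa + dx * delta0 / d) % 1.0    # rescale the separation
        yb = (ya + dy * delta0 / d) % 1.0
    spec = np.abs(np.fft.rfft(xi * np.hanning(n))) ** 2
    return spec / spec.max()
```

a spectrum dominated by a handful of peaks then signals an ordered orbit , while a spectrum with comparable amplitudes over the whole frequency range signals a chaotic one , in line with the figures discussed above .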
even with far fewer iterations ( n=256 - lower right frame of fig . [ map_psod ] ) we get the same result . the spectrum is less dense , since it has a smaller number of frequencies than before ( see eq . ( [ eq_freqs ] ) ) , but the basic features are the same . note that for the lcn , which is the limit of the function as ( see fig . [ map_lcn ] ) , or the plot of the consequents of the mapping ( fig . [ map_invar ] ) , we need much more than 1024 iterations in order to decide whether the trajectory is chaotic or not . the above presented results show that it is , indeed , worth considering the new method as a useful tool in the assessment of the nature , ordered or chaotic , of a trajectory . however the method , as it is , does not entail a _ clear _ and _ easy to apply _ criterion for the classification of a trajectory as ordered or chaotic . here we try to improve the presentation of the results of our method , in order to propose such a criterion . note that this new criterion is similar , graphically , to the criterion of the fli proposed by froeschl et al . ( 1997 ) . if the peaks appearing in the psod are plotted in descending order of amplitude , we have a graphical representation of how many strong frequencies the spectrum possesses . figure [ map_psod_sort ] shows this representation of the psod for the three trajectories studied in the previous sub - section . ordered trajectories have only a few high - amplitude frequencies and the background is formed by peaks whose amplitudes are more than four orders of magnitude smaller than that of the basic frequency . on the other hand , the stochastic trajectories possess only a few high - amplitude frequencies and the largest part of the spectrum consists of a `` continuum '' of frequencies with also considerable amplitude . in this respect one may choose to represent the results by a single number ( e.g. the number of peaks up to a certain amplitude ) , provided that a certain `` threshold '' for the noise level is chosen ( 10 in fig . [ map_psod_sort ] ) . this value will , however , depend on the system under study . any method of analysis of finite - sample time series is bound to suffer from noise . thus , even in the psod of a regular orbit a certain noise level is expected . this is mainly due to `` leakage '' of power from the frequency lobes , a side effect of the calculation of the power spectrum using discrete fast fourier transform methods . when calculating the power spectrum of a `` monochromatic '' signal , the power contained in its basic frequency `` leaks '' into neighboring frequencies . the leakage depends on the windowing function used . when a signal possesses two closely spaced frequencies , all frequencies in between these two will also gain considerable amplitudes , due to this phenomenon . if the psod of a regular orbit has a large number of basic ( strong ) frequencies , then this effect can lead us to falsely identify it as chaotic . the problem can be tackled at the cost of taking more points in the sample . in this way , the number of frequencies appearing in the spectrum is increased but the frequency lobes become thinner . thus , lobe overlapping is reduced and the noise level drops . for chaotic orbits , on the other hand , the observed noisy pattern is an inherent property of the spectrum and by increasing the number of points one cannot alter the picture . the above can be seen in fig . [ noise1 ] , which shows the psod of a regular ( left column ) and a chaotic trajectory ( right column ) for three different values of n , i.e.
n=256 ( top ) , n=4096 ( middle ) and n=65536 ( bottom ) .as we see , the noise level in the case of the regular orbit drops significantly when n is increased , while in the case of the chaotic orbit it remains more or less unchanged . in the n=65536 case the noise level for the ordered trajectory drops below , becoming comparable to the accuracy of the fft calculation ( double precision ) .this phenomenon is better seen if we use the amplitude - sorted psod .[ noise2 ] shows the amplitude - sorted psod of the regular ( left ) and chaotic ( right ) trajectory with n=256 , 1024 , 4096 , 16384 and 65536 .while the noise level in the psod of the ordered trajectory is reduced when n is increased , it remains the same for the chaotic one .0.1 cm 0.1 cm 0.1 cm 0.1 cm we now proceed to test the method in a 2-d hamiltonian system .we selected the hamiltonian used by caranicolas & vozikis ( 1987 ) , where the parameter is taken equal to 2.0 and the energy constant .again we study three trajectories , one ordered ( _ ord-2 _ ) starting at , one clearly stochastic ( _ ch-2 _ ) starting at and one `` sticky '' ( _ st-2 _ ) which starts at .all three trajectories have also and , while their is given by the energy integral . using the psod with a renormalization time - step equal to , we find that the spectra of the three test trajectories present the same properties as those of the corresponding cases of the mapping . in fig .[ 2d_a2_psod ] we see the psod of the three test trajectories .the difference between the spectrum of the chaotic trajectory _ch-2 _ and that of the ordered trajectory _ord-2 _ is again obvious .moreover , the `` sticky '' trajectory _st-2 _ has a spectrum similar to that of _ ch-2_. note that we have used only 2048 points ( corresponding to ) while , if we look at the evolution of the ( fig .[ 2d_a2_lcn ] ) , the trajectory looks ordered for a time up to .the fourth frame ( lower right ) in fig .[ 2d_a2_psod ] corresponds to another regular orbit ( _ ord-2a _ ) that starts at , i.e. 
very close to the sticky trajectory .we see that , although the two trajectories start very close to each other and , at least for the first 3000 times steps , span approximately the same phase - space region , their psods are completely different , clearly revealing the nature of each case .we decided to test also the case where the parameter is taken equal to .as caranicolas & vozikis ( 1987 ) have shown , the surface of section of this case has a completely different topology from that of the case .the equipotential curves have negative curvature along the lines .this affects mainly the loop orbits which appear `` squared '' .again we study four trajectories , one ordered ( _ ord-4 _ ) starting at , one clearly chaotic ( _ ch-4 _ ) starting at , one `` sticky '' ( _ st-4 _ ) which starts at and one ordered ( _ ord-4a _ ) starting at very near to the sticky one .the surface of section plot for the `` sticky '' trajectory is shown in fig .[ 2d_inv ] at various times .figures [ 2d_psod ] and [ 2d_psod_sort ] present the psod and the amplitude - sorted psod for these four orbits ( n=4096 ) .an important characteristic seen in these two figures is the presence of a high number of medium - amplitude frequencies in the spectra of the regular orbits .however , a distinction between regular and chaotic orbits can still be made .due to frequency overlapping , discussed in section [ section_noise ] , individual frequencies can not be distinguished at an amplitude level smaller than .this makes it very difficult to identify a sticky orbit with an amplitude level around .nevertheless the noise level of the regular orbits will be supressed if we take more points , while for a sticky chaotic orbit it will remain more or less the same .we apply our method to the model hamiltonian used by magnenat ( 1982 ) , contopoulos & barbanis ( 1989 ) , barbanis and contopoulos ( 1995 ) , barbanis ( 1996 ) , varvoglis et al .( 1997 ) , barbanis et .al . ( 1999 ) and tsiganis et al . (2000a ) where the parameters are taken as , , , and and the energy level is we again test three trajectories , an ordered one ( _ reg01 _ ) starting at , , a chaotic one ( _ ch-01 _ ) starting at , and a sticky one ( _ st-01 _ ) with initial conditions , , while , , were taken equal to 0 and is calculated from the energy integral .we use the variables , , instead of , , in order to be consistent with the previous publications .the barred variables are defined as , and . 
in 3-done can not visualize a surface of section plot , in order to check whether a particular trajectory is ordered or chaotic .therefore , if one is using the traditional tools , he has to rely on the calculation of lcns .it should be pointed out that a positive lcn is a proof that the trajectory under study is chaotic , while a monotonically decreasing value of is not a proof of order , since this behavior could very well originate from stickiness " .that is why we decided to test one more trajectory ( _ reg00 _ ) for which we can be almost certain that it is ordered , as it has the same initial position as _ reg01 _ but it belongs to an almost integrable case of the model hamiltonian , and .figure [ 3d1 ] shows the calculation of the for the four trajectories .we can clearly see that of _ st-01 _ is decreasing up to and then begins to saturate to a non - zero lcn value .the stochasticity is even more evident after , where we have a `` jump '' to a higher lcn value .0.1 cm we calculate the psod using , a value approximately equal to the time interval between two consecutive crossings of the plane by the trajectory .figure [ 3d2 ] shows the psod of the four trajectories using 512 points , i.e. for .note once again that we can decide that the `` sticky '' trajectory _st-01 _ is actually stochastic well in advance of the lcn method .the lcn shows the stochastic behaviour of the trajectory only after , while with the psod we need a modest . 0.1 cmas we already mentioned in the introduction , three of the most recent methods for distinguishing ordered from chaotic trajectories are the fast lyapunov indicators ( fli ) ( froeschl et al . 1997 ) , the `` spectra '' of stretching numbers and/or twist angles ( froeschl et al . 1993 , voglis & contopoulos 1994 , contopoulos & voglis 1997 ) and the `` spectral distance '' ( dsd ) ( voglis et al .1998 , 1999 ) . out of these three methodsthe fastest ones are the fli and the dsd methods . in this sectionwe compare the psod with the fli and the dsd methods by applying it to the test models presented in the above mentioned papers . a discussion concerning the spectra of stretching numbers follows . in the paper by froeschl et al( 1997 ) the authors tested the fli method on two trajectories of the standard map with ; one stochastic starting at , and one ordered starting at , .they found that for the stochastic trajectory the fli s drop very quickly down to ( fig . 2 in their paper - 200 iterations ) .on the contrary , the function levels only after about 10000 iterations .for the ordered trajectory the fli s are slowly decreasing , following . in fig .[ flg - psod ] we present the psod s of the stochastic ( left frame ) and the ordered ( right frame ) trajectory , calculated using only 256 iterations .it is obvious that the two spectra clearly differentiate between the two types of trajectory. 
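for reference , the finite - time indicator whose long - time limit is the lcn used as the baseline in the comparisons above ( figs . [ map_lcn ] , [ 3d1 ] , etc . ) can be computed with the same renormalized two - trajectory construction as the psod . the fragment below is illustrative only : it takes an arbitrary map as input ( for instance the standard - map stand - in from the earlier sketch ) and assumes a unit - torus phase space for the wrapping of the separation .

```python
import numpy as np

def running_lcn(step, x0, y0, n=100000, delta0=1e-8):
    # finite-time lyapunov indicator gamma(i) = (1/i) * sum log(d/delta0);
    # `step` is any (x, y) -> (x, y) map on the unit torus
    xa, ya = x0, y0
    xb, yb = (x0 + delta0) % 1.0, y0
    acc, gam = 0.0, np.empty(n)
    for i in range(n):
        xa, ya = step(xa, ya)
        xb, yb = step(xb, yb)
        dx = (xb - xa + 0.5) % 1.0 - 0.5
        dy = (yb - ya + 0.5) % 1.0 - 0.5
        d = np.hypot(dx, dy)
        acc += np.log(d / delta0)
        gam[i] = acc / (i + 1)
        xb = (xa + dx * delta0 / d) % 1.0    # renormalize the deviation
        yb = (ya + dy * delta0 / d) % 1.0
    return gam
```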
0.1 cm in a recent paper voglis et al .( 1999 ) proposed , as a tool for the distinction between chaotic and regular orbits in 4d maps , the use of the `` spectral distance '' .the method is based on the property that the `` spectrum '' of _ stretching numbers _ ( as well as that of the _ helicity angles _ ) of a chaotic trajectory is independent of the initial orientation of the deviation vector , while the spectrum of a regular trajectory is not .the `` spectral distance '' is a norm defined as ^ 2\ ] ] where the summation is for all s and , are two spectra of the same orbit but with two different initial deviation vectors .voglis et al .( 1999 ) applied their method to a 4-d mapping consisting of two coupled 2-d standard maps , i.e. where the s are defined in the interval [ 0,1 ) ( i.e. ) .we chose to test our method upon the two most interesting cases shown in voglis et al .( 1999 ) , namely trajectories a2 and a3 , in their notation .the a2 case has initial conditions ( )=(0.55,0.1,0.62,0.2 ) , and is a regular orbit , while the a3 case has the same initial s , and is a chaotic orbit but with a very low value of lcn ( around ) .fig.[map4d ] shows the psod of the two test trajectories .the left panel corresponds to the regular orbit ( a2 ) and the right panel corresponds to the chaotic one ( a3 ) .the spectra were calculated using 4096 iterations for each orbit .the distinction between the regular and the chaotic orbits is apparent .note that our method gave the same result as the method with almost the same computational effort .of course , both methods are much faster than the traditional lcn method .the `` spectrum '' of stretching numbers , , ( froeschl et al .1993 , voglis & contopoulos 1994 , contopoulos & voglis 1997 , dvorak et al .1998 ) is a method also based on the divergence of nearby trajectories .it consists of the calculation of the probability density of ( eq.([eq_qk ] ) ) , i.e. where is the number of with values between and . a quasi - periodic trajectory , which lies very close to a periodic trajectory, has a `` u '' shaped distribution of stretching numbers which is also symmetric around ( contopoulos et al .as we move away from the periodic trajectory , this symmetry is destroyed ( caranicolas & vozikis 1999 ) and the spectrum starts to develop a greater number of maxima . on the other handif the trajectory is chaotic , the spectra have different shapes and are not symmetric at all .it should be pointed out , however , that , in order to obtain a well defined spectrum , one needs to account for a large number of iterations ( typically or more ) .therefore , we did not attempt to compare this method to our own . in order to circumvent the problem of calculating a trajectory for long times , contopoulos and voglis ( 1997 )proposed the use of the average value of , if we keep n small , we can scan a wide area of initial conditions and map its dynamical behavior .trajectories in chaotic domains will have scattered around the value of the lcn of this domain , while trajectories in the ordered domain will have near zero .the method is very fast in distinguishing between ordered and chaotic domains .fig . 
9 of contopoulos & voglis ( 1997 ) is a very good example of the results obtained by this method with only 10 iterations . however , although the method is very good in scanning wide areas of phase space for locating islands of order , it cannot give reliable results with so few iterations for a particular trajectory . in the case of stochastic trajectories , for varies so much that it may yield a number as small as the one given for ordered trajectories . the situation is even worse in the case of sticky trajectories ( i.e. at the borders of islands ) . in order to decide on the character of such trajectories one needs considerably longer calculations . froeschl & lega ( 1998 ) tested the fli method along with the method of twist angles ( contopoulos & voglis 1997 ) , the frequency map analysis method ( laskar et al . 1992 , laskar 1993 ) and the sup - map method ( laskar 1990 , froeschl & lega 1996 ) . figs . 8a - d of their paper show the results of the four methods on a cross section of 1000 trajectories near the hyperbolic point of the 1/6 resonance of the standard map ( eq . 11 ) with . for the fli method they used 2000 iterations while for the other three methods 20000 iterations . in order to produce unambiguous results , the trajectories in this test are classified as ordered or chaotic by an appropriately selected number / indicator . for the fli method the authors have used as an indicator the time necessary for the fli to reach a value lower than . in order to compare our method with these results we also need a one - number indicator derived from the psod . as such we have selected here the average value of . the averaging is performed not over all values but by ignoring the highest 1/6th and the lowest 1/6th of the amplitudes , the former being probably due to periodicities in sticky trajectories and the latter probably coming from numerical errors in the integration and the calculation of the power spectrum . fig . [ helios ] shows the same cross section as fig . 8a - d of froeschl & lega ( 1998 ) , using the above indicator taken after 8192 iterations . as we can see it gives essentially the same information as the other four methods . note that the y - axis in fig . [ helios ] is inverted for easier comparison with fig . 8a - d of froeschl & lega ( 1998 ) . as an application of our method to a problem of physical importance , we shall use it in order to assess whether asteroidal trajectories are chaotic or not . we use a simplified model of the solar system , namely the planar restricted three body problem , where the sun and jupiter move on elliptic trajectories around their center of mass and an asteroid of infinitesimal mass moves in the gravitational field of the two bodies . we calculate the psod and the lcn using as renormalization time , , the period of jupiter , i.e. years . fig . [ 3bp - lcn ] shows the of the four trajectories tested . the upper left frame belongs to an ordered trajectory starting with semi - major axis and eccentricity , the upper right to a stochastic trajectory with initial elements , . the other two frames correspond to trajectories with initial , ( lower left ) and , ( lower right ) representing `` clones '' of the asteroid 522 - helga , which is a well - known example of `` stable chaos '' ( milani & nobili 1992 ) . the psod of the four trajectories ( n=512 ) is shown in fig . [ 3bp - psod ] .
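the one - number indicator described above can be sketched as follows . since the exact quantity being averaged is not reproduced here , the fragment assumes it is the logarithm of the spectral amplitudes ; the trimming of the highest and lowest sixth follows the text .

```python
import numpy as np

def psod_indicator(power):
    # trimmed average of the log10 spectral amplitudes: drop the highest
    # and the lowest sixth of the values, then average the rest
    p = np.sort(np.asarray(power))
    m = len(p) // 6
    trimmed = np.clip(p[m:len(p) - m], 1e-300, None)   # guard against log(0)
    return float(np.mean(np.log10(trimmed)))
```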
in the case of the helga clone with ( lower left of fig . [ 3bp - psod ] ) the psod clearly shows a chaotic nature after only 512 jovian periods , i.e. 6072 years , while even a rough calculation of the lcn needs at least years ( see lower right frame of fig . [ 3bp - lcn ] ) . the psod of the helga clone is rather peculiar . although it differs from that of the ordered orbit ( upper left ) , it is not clearly chaotic , unlike the psod of the other helga clone ( lower left ) . nevertheless , if we take more points in our time series the chaotic nature of the orbit becomes apparent , as we can see in fig . [ 3bp - psod2 ] . however , the spectrum can still be described as having a strong quasi - periodic component , something which is related to the peculiar dynamical nature of this orbit as discussed in tsiganis et al . ( 2000b ) . in the present paper we propose an alternative tool , which we call psod , for the characterization of the chaotic or ordered nature of trajectories in conservative dynamical systems . the method is based on the frequency analysis of a time series constructed by successive records of the amplitude of the deviation vector of nearby trajectories . as discussed in section 2 , such a `` mixed '' method is expected to have certain advantages . the reason is that the power spectrum of such a time series will contain all the characteristic frequencies of the motion in a properly `` weighted '' ratio . * for ordered trajectories the spectrum possesses only a few high - amplitude peaks , the exact number of which depends not only on the system but also on the particular orbit . of course a small - amplitude noise level , due to the numerical procedure , is superimposed on the spectrum , which diminishes as the length of the time series is increased ; * for chaotic trajectories the spectrum has a noisy pattern . for weakly chaotic orbits a few high - amplitude peaks are also present . increasing the length of the time series , the spectrum tends to a white noise spectrum which remains practically unchanged for any ( large enough ) number of points ( see figs . 5 and 6 ) .
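the two items above suggest a simple classification recipe based on the amplitude - sorted spectrum and a noise threshold ( cf . the discussion of fig . [ map_psod_sort ] ) . the fragment below is an illustrative sketch only , and the threshold value is system dependent .

```python
import numpy as np

def sorted_spectrum(power):
    # amplitude-sorted psod (descending), normalized to its largest peak
    p = np.sort(np.asarray(power))[::-1]
    return p / p[0]

def count_strong_peaks(power, threshold=1e-4):
    # number of spectral amplitudes above a (system-dependent) noise
    # threshold: small for ordered orbits, large for chaotic ones
    return int(np.sum(sorted_spectrum(power) > threshold))
```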
for such purposes one would like to have an one - number indicator for measuring chaos .the only uniquely defined measure of chaos is of course the lcn .any other indicator should be in a one - to - one correspondence with the lcn , in order to give the same information .there is no guarantee that such an indicator can be based on the frequency content of the psod , as chaotic orbits with similar lcns may have a different frequency distribution .further analysis of the properties of the psod has to be performed in order to decide whether such an indicator can be found .barbanis , b. , 1996 , in : proceedings of the 2nd hellenic astronomical conference , contadakis m.e ., hadjidemetriou j.d . , mavridis l.n ., seiradakis j.h .( eds . ) , p. ziti & co , thessaloniki , p. 520- 525 barbanis , b. , contopoulos , g. , 1995 , a&a , 294 , 33 barbanis , b. , varvoglis , h. , vozikis , ch . , 1999 ,a&a , 344 , 879 benettin , g. , galgani , l. , strelcyn j.m . , 1976 ,a , 14 , 2338 caranicolas , n. , vozikis , ch . , 1987 , cel .mech . , 40 , 35 caranicolas , n. , vozikis , ch . , 1999 ,a&a , 349 , 70 contopoulos , g. , 1966 , in : les nouvelles mthodes de la dynamique stellaire , nahon , f. & hnon , m. ( eds . ) contopoulos , g. , barbanis , b. , 1989 , a&a , 222 , 329 contopoulos , g. , voglis , n. , 1997 , a&a , 317 , 73 contopoulos , g. , voglis , n. , efthymiopoulos , c. , froeschl , c. , gonczi , r. , lega , e. , dvorak , r. , lohinger , e. , 1997 , cel ., 67 , 293 dvorak , r. , contopoulos , g. , efthymiopoulos , ch . , voglis , n. , 1998 , planet .space sci ., 46 , 1567 froeschl , c. , 1984 , cel .mech . , 34 , 95 froeschl , c. , lega , e. , 1996 , cel .astron . , 64 , 21 froeschl , c. , lega , e. , 1998 , a&a , 334 , 355 froeschl , c. , froeschl , ch . , lohinger , e. , 1993 , cel ., 51 , 135 froeschl , c. , lega , e. and gonzi , r. , 1997 , cel .astron . , 67 , 41 hnon m. and heiles c. , 1964 , astron .j. , 69 , 73 laskar , j. , 1990 , icarus , 88 , 266 laskar , j. , 1993 , physica d , 67 , 257 laskar , j. , froeschl , c. , celleti , a. , 1992 , physica d , 56 , 253 lega , e. , froeschl , c. , 1996 , physica d , 95 , 97 lohinger , e. and froeschl c. , 1993 , cel .astron . , 57 , 369 magnenat , p. , 1982, cel.mech . , 28 , 319 milani a. and nobili a. , 1992 , nature , 357 , 569 press w. h. , teukolsky s. a. , vetterling w. t. and flannery b. p. , 1992, _ numerical recipes in fortran the art of scientific computing _2nd edn ( cambridge : cambridge university press ) tsiganis k. , anastasiadis a. and varvoglis h. , 2000a , chaos solit . & fract .( in press ) tsiganis k. , varvoglis h. and hadjidemetriou j.d . , 2000b , icarus ( in press ) varvoglis , h. , vozikis , ch . ,barbanis , b. 1997 , in : the dynamical behaviour of our planetary system , henrard j. , dvorak r. ( eds . ) , kluwer , dordrecht voyatzis , g. , ichtiaroglou , s. , 1992 , j. phys .a , 25 , 5931 voglis , n. , contopoulos , g. , 1994 , j. phys .a , 27 , 4899 voglis , n. , contopoulos , g. , efthymiopoulos , c. , 1998 , phys .e , 57 , 372 voglis , n. , contopoulos , g. , efthymiopoulos , c. , 1999 , cel .astron . , 73 , 211 voglis , n. , efthymiopoulos , c. , 1998 , j. phys . , a31 , 2913
|
we propose a new method for determining the stochastic or ordered nature of trajectories in non - integrable hamiltonian dynamical systems . the method consists of constructing a time - series from the divergence of nearby trajectories and then performing a power spectrum analysis of the series . ordered trajectories present a spectrum that consists of a few spikes while the spectrum of stochastic trajectories is continuous . a test of the method with three different systems , a 2-d mapping as well as a 2-d and a 3-d hamiltonian , shows that the method is fast and efficient , even in the case of sticky trajectories . the method is also applied to the motion of asteroids in the solar system .
|
in the last two decades significant advances have taken place in the realm of computational scattering , with notable theoretical as well as practical contributions in the domains of finite elements and integral equations . however , simulation strategies based upon the former are usually restricted to low and mid frequency applications . indeed , use of finite element methods in exterior scattering simulations requires not only utilization of an artificial interface to truncate the infinite computational domain but also introduction of appropriate absorbing boundary conditions on this interface to effectively replicate the behaviour of the solution at infinity . this , in turn , renders finite - element methods impractical in high - frequency applications and may result in a loss of accuracy and increased computational cost . moreover , this difficulty is further amplified in models involving multiple scatterers , such as the one treated in the present paper , because the distance that separates the obstacles naturally increases the size of the truncated domain . integral equation methods , in contrast , are more adequate for these situations since , on the one hand , they explicitly enforce the radiation condition by simply choosing an appropriate _ outgoing _ fundamental solution and , on the other hand , they are based solely on the knowledge of the solution confined to the scatterers , which , in surface scattering applications , provides a dimensional reduction in the computational domain . nevertheless , they deliver dense linear systems whose sizes increase in proportion to with increasing wavenumber , where is the dimension of the computational manifold . broadly speaking , the success of integral equation approaches in high - frequency simulations is directly linked with the incorporation of asymptotic characteristics of the unknown into the solution strategy . this is essentially the path we follow in this manuscript , since it transforms the problem into the determination of a new unknown whose oscillations are virtually independent of frequency . while pioneering work in this direction is due to nedelec et al . who , in two - dimensional simulations , have provided a reduction from to in the number of degrees of freedom needed to obtain a prescribed accuracy , the single - scattering algorithm of bruno et al . ( based on a combination of the nystrom method , extensions of the method of stationary phase , and a change of variables around the shadow boundaries ) has had a significant impact as it has demonstrated the possibility of solution of surface scattering problems ( see for a three - dimensional variant ) . alternative implementations of this approach , built on a collocation and geometrical theory of diffraction combo , a collocation and steepest descent amalgamation , and a p - version galerkin interpretation , have later appeared . in this latter setting , ecevit et al . have recently developed a rigorous method which demands , for any convex scatterer , an increase of ( for any ) in the number of degrees of freedom to maintain a prescribed accuracy independent of frequency . the single - scattering algorithm has been successfully extended by bruno et al .
to encompass the high - frequency multiple - scattering problems considered in this paper , relating specifically to a finite collection of convex obstacles .roughly speaking , the approach in was based on : 1 ) representation of the overall solution as an infinite superposition of single scattering effects through use of a neumann series , 2 ) determination of the phase associated with each one of these effects using a spectral geometrical optics solver , and 3 ) utilization of the high - frequency single scattering algorithm for the frequency independent evaluation of these effects . while every numerical implementation in has displayed the spectral convergence of neumann series for two convex obstacles , unfortunately , a rigorous proof of this fact was not available .indeed , we have later shown for several convex obstacles in both two- and three - dimensional settings that the neumann series can be rearranged into contributions associated with _ primitive periodic orbits _ and an explicit rate of convergence formula can be rigorously derived on each periodic orbit in the high - frequency regime . while , on the one hand , these analyses depict the convergence of neumann series for all sufficiently large wavenumbers , on the other hand , the rate of convergence formulas display that convergence can be rather slow particularly when ( at least ) one pair of nearby obstacles exists .this analysis of the rate convergence was performed by using _double layer _ potentials . in this work ,we show that use of _ combined field integral equations _lead to the same rate of convergence . accordingly , novel mechanisms are much needed for the accelerated solution of multiple scattering problems that retain the frequency independent operation count underlying the algorithm in . however , this is a rather challenging task since the algorithm in undeviatingly rests on reducing the problem , at each iteration , to the computation of an unknown with a single - valued phase , and thus any strategy aimed at accelerating the convergence of neumann series must also preserve the phase information related with the iterates . in this paper , we develop a krylov subspace method that significantly accelerates the convergence of neumann series , in particular in the case where the distance between obstacles decreases hence deteriorating the rate of convergence .this method is well adapted to the high frequency aspect of the present problem as it retains the phase information associated with the iterates and delivers highly accurate solutions in a small number of iterations .note specifically that a direct implementation of krylov subspace methods inhibits the use of the algorithm in as this makes it impossible to track the phase information of the corresponding iterates .as we shall see , a natural attempt to overcome this issue would be to simply use the binomial formula , however , this disrupts the convergence of the method as displayed in the numerical results .we defeat this additional difficulty by introducing an alternative numerically stable decomposition of the iterates . 
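purely as a schematic illustration of why a krylov solver is preferable to direct summation of the neumann series for a second - kind equation of the form ( i + a ) eta = f , the fragment below applies both strategies to a generic operator supplied as a black box . it uses a dense stand - in for the discretized integral operator and makes no attempt to reproduce the phase extraction or the frequency - independent evaluation of the iterates that the actual algorithm relies on .

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def neumann_solve(apply_a, f, n_terms=50):
    # partial sum of the neumann series eta = sum_m (-a)^m f for the
    # second-kind equation (i + a) eta = f; converges only if the spectral
    # radius of a is below one, and slowly when it is close to one
    term, eta = f.copy(), f.copy()
    for _ in range(n_terms):
        term = -apply_a(term)
        eta = eta + term
    return eta

def krylov_solve(apply_a, f):
    # the same equation handled by gmres, which typically needs far fewer
    # applications of a when the neumann series converges slowly
    n = f.size
    op = LinearOperator((n, n), matvec=lambda x: x + apply_a(x), dtype=f.dtype)
    eta, info = gmres(op, f)
    return eta
```

in this toy setting the number of operator applications plays the role of the number of multiple scattering reflections , which is why an accelerated solver matters most when nearby obstacles make the neumann series converge slowly .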
in summary , our approach is based on three main elements : 1 ) utilization of an appropriate formulation of the multiple scattering problem in the form of an operator equation of the second kind , 2 ) alternative representation of the associated krylov subspaces so as to guarantee that basis elements are single - phased and thus retain the frequency independent operation count underlying the algorithm in , and 3 ) a novel decomposition of the iterates entering in a ( standard ) krylov recursion to prevent instabilities that would otherwise arise in a typical implementation based on the binomial identity . indeed , as depicted in our numerical implementations , the resulting methodology is immune to numerical instabilities as it removes the additive cancellations arising from a direct use of the binomial theorem . moreover , it provides additional savings in the number of needed iterations when compared with the classical pade approximants used in . we additionally complement our krylov subspace approach by utilizing a preconditioner based upon kirchhoff approximations to further reduce the number of iterations needed to obtain a given accuracy . indeed , since the knowledge of the illuminated regions at each iteration is readily available through the geometrical optics solver we have used to precompute the phase of the multiple scattering iterations , essentially the only additional computation needed for the application of this preconditioner is the use of the stationary phase method to deal with non - singular integrals wherein the only stationary points are the target ones . this kind of _ dynamical _ preconditioning is unusual , and its originality resides in the fact that the location of the illuminated regions varies at each reflection . this clearly distinguishes our preconditioning strategy from classical approaches where the preconditioners are usually _ steady _ by design . while the success of this kirchhoff preconditioner is clearly displayed in our numerical tests , the utilization of kirchhoff approximations for the multiple scattering iterations naturally raises the question of the convergence of the associated neumann series . we address this problem by showing that this series converges for each member of a general class of functions , and explain the exact sense in which the spectral radius of the kirchhoff operator is strictly less than 1 . the importance of this result is twofold . first , it verifies that the multiple scattering problem can be solved by using solely the kirchhoff technique , and further it rigorously establishes the validity of our preconditioning strategy . the rest of the paper is organized as follows . in [ sec : formulation ] , we introduce the scattering problem and provide a comparison of the equivalent differential and integral equation formulations of multiple scattering problems . [ sec : convergence ] is reserved for a comparison of the convergence characteristics of these approaches . in [ sec : freqindep ] , we provide a short review of the algorithm in , as the ideas therein lie at the core of the frequency independent evaluation of multiple scattering iterations as well as the iterates associated with our newly proposed krylov subspace method detailed in [ sec : krylov ] .
in [ sec : kirchhoff ] , we explain how this krylov subspace approach can be preconditioned while utilizing kirchhoff approximations .finally , in [ sec : numerical ] , we present numerical implementations validating our newly proposed methodologies .given an incident field satisfying the helmholtz equation in ( ) , we consider the solution of sound - soft scattering problem in the exterior of a smooth compact obstacle .potential theoretical considerations entail that the _ scattered field _ satisfying admits the single - layer representation where is the unknown _ normal derivative of the total field _ ( called the _ surface current _ in electromegnatics ) , is the exterior unit normal to , is the fundamental solution of the helmholtz equation , and is the hankel function of the first kind and order zero .although can be recovered through a variety of integral equations , we use the uniquely solvable _ combined field integral equation _( cfie ) where and . in casethe obstacle consists of finitely many disjoint sub - scatterers , denoting the restrictions of and to by and so that equation gives rise to the coupled system of integral equations where in connection with the operator , the following result will be useful in extending our two - dimensional results in concerning the convergence of multiple scattering iterations to the case of cfie .[ thm : diagonal ] for each , the diagonal operator is continuous with a continuous inverse .moreover , if each is star - like with respect to a point in its interior , then given there exists a constant such that for all .this is immediate since is a diagonal operator and , as shown in ( * ? ? ?* theorem 4.3 ) , each operator on its diagonal satisfies inequality .multiplying equation with the inverse of yields the equivalent operator equation of the second kind where and with . under suitable restrictions on the geometry of scatterers, the solution of the operator equation is given by the neumann series where the _ multiple scattering iterations _ are defined by as was presented in , the multiple scattering problem described above possesses an equivalent differential equation formulation .naturally , the convergence analysis carried out in is directly linked with that of the neumann series and here we present the exact connection .indeed , the fields given by the single - layer potentials in connection with the components of in correspond precisely to the unique solutions of the exterior sound - soft scattering problems and they provide the decomposition of the scattered field as on the other hand , the iterated fields given by the single - layer potentials in relation with the components of in are precisely the unique solutions of the exterior sound - soft scattering problems with and thus , in case the neumann series converges , each solution can be expressed as the superposition work on the justification of identity in a three - dimensional setting has appeared in . indeed ,while ( * ? ? ?* theorem 1 ) establishes uniqueness of decomposition , ( * ? ? 
?* theorem 3 ) justifies the convergence of the series in under suitable restrictions on the geometry of the obstacles as stated in the next theorem .[ thm : balabane ] assume that and , for , the obstacle is non - trapping in the sense that let then there exists a constant that depends on such that if then , for , identity holds in the sense of convergence in .as is clear , theorem [ thm : balabane ] establishes convergence of the series for non - trapping obstacles only if the wavenumber is sufficiently small .work on rigorous justification of the convergence of neumann series ( and thus of identity ) in high - frequency applications , on the other hand , reduces to our work and that relates to a finite collection of smooth strictly convex ( and thus non - trapping in the sense of theorem [ thm : balabane ] ) obstacles in two- and three - dimensions , respectively . indeed , as we have shown in , the neumann series can be rearranged into a sum over _ primitive periodic orbits _ and a precise ( asymptotically geometric ) rate of convergence ( where is the period of the orbit ) , that depends only on the relative geometry of the obstacles , can be derived on each periodic orbit in the asymptotic limit as . to review these results , for the sake of simplicity of exposition, we assume that the scatterer consists only of two smooth strictly convex obstacles and in which case there are only two ( primitive ) periodic orbits ( initiating from each and traversing the obstacles in a 2-periodic manner ) and relation is equivalent to where and , for , in connection with identity , theorem [ thm : diagonal ] implies in two - dimensional configurations that , given any , there exists such that for any holds for any constant , and thus the aforementioned _ geometric rate of convergence _ of the neumann series is directly linked with that of the right - hand sides .indeed , assuming that the incidence is a plane - wave with direction ( ) with respect to which the obstacles and satisfy the _ no - occlusion condition _ ( which amounts to requiring that there is at least one ray with direction that passes between and without touching them ) , denoting by the uniquely determined points minimizing the distance between and , and setting , we have the following relation among the leading terms in the asymptotic expansions of which extends our analyses in to the case of cfie .[ thm:2per ] there exist constants , , and with the property that , for all , the constant is given in two - dimensional configurations by ^{-1}}\ \right ] \right)^{-1}\ ] ] where is the curvature at the point ; and in three - dimensional configurations } \times \det \left [ i + \sqrt{i - \left [ t \left ( i + d \kappa_{1 } \right ) t^{-1 } \left ( i + d \kappa_{2 } \right ) \right]^{-1}}\ \right ] \right)^{-1}\ ] ] where is the identity matrix , is the matrix of principal curvatures at the point , and is the rotation matrix determined by the relative orientation of the surfaces at the points .assume first that the dimension is .writing ^t ] , it suffices to show that , for , for some constant . on the other hand , ( ** theorems 3.4 and 4.1 ) display that ( a more general version of ) this estimate holds on any compact subset of the _ illuminated regions _( see the next section for a precise definition of these regions ) when and are replaced by the leading terms and in the asymptotic expansions of and . finally applying the stationary phase lemma to each component of identity , the same techniques used to prove ( * ? ? 
?* theorem 4.1 ) delivers estimate . in case ,the argument is the same and is based upon ( * ? ? ?* theorems 3.3 and 4.3 ) .although theorem [ thm:2per ] is valid under the no occlusion condition , extensive numerical tests in display that the conclusion of theorem [ thm:2per ] is valid not only when this condition is violated but also when the convexity assumption is conveniently relaxed . in light of estimates - , for , we have which signifies that the neumann series converges with the geometric rate .note however that , as the distance between the obstacles and decreases to zero , increases to , and thus convergence of the neumann series significantly deteriorates .the same remark is valid when the configuration consists of more than two subscaterers and involves at least one pair of nearby obstacles . indeed , as we have shown in , this is also completely transparent from a theoretical point of view since , in this case , the neumann series can be completely dismantled into single - scattering effects and rearranged into a sum over _ primitive periodic orbits _ including , in particular , -periodic orbits .the next section is devoted to the description of how we adopt the high - frequency integral equation method in to the evaluation of iterates arising in our krylov subspace approach and also in its preconditioning through kirchhoff approximations .as explained in the introduction , the strength of the work in is due to retaining information on the phases of multiple scattering iterations , and therefore our krylov subspace and kirchhoff preconditioning strategies are also designed to posses the same property .for simplicity of exposition we continue to assume that the obstacle consists only of two disjoint sub - scatterers and . in what follows , for , we will always assume that .in this case , relation can be written , for , in components as and for . as identity displays, is exactly the surface current generated by the incidence on ignoring interactions between and .similarly , for , equation depicts that is precisely the surface current generated by the field ( note that ) acting as an incidence on ignoring , again , interactions between and .therefore identities and entail that the neumann series completely dismantles the single scattering contributions and allows for a representation of the surface current as a superposition of these effects .more importantly , in geometrically relevant configurations , these observations allow us to predetermine the phase of and express it as the product of a highly oscillating complex exponential modulated by a slowly varying amplitude in the form and this , in turn , grants the frequency - independent solution of equations - as described in . 
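the practical payoff of the factorization just described can be illustrated with a small python sketch . the envelope a and the phase ( taken here as the plane - wave phase on the unit circle ) are synthetic stand - ins rather than an actual multiple scattering density ; the point is simply that the oscillatory density needs a number of fourier modes that grows linearly with the wavenumber , while the density divided by its known phase factor can be resolved with a fixed , frequency - independent number of modes .

```python
import numpy as np

def modes_needed(values, tol=1e-8):
    """Smallest symmetric band of Fourier modes whose complement is below tol."""
    c = np.abs(np.fft.fftshift(np.fft.fft(values) / len(values)))
    total, center = c.max(), len(c) // 2
    for half in range(1, center):
        if c[:center - half].max(initial=0) < tol * total and \
           c[center + half + 1:].max(initial=0) < tol * total:
            return 2 * half + 1
    return len(c)

N = 4096
theta = 2 * np.pi * np.arange(N) / N
a = np.exp(np.cos(theta)) * (2 + np.sin(3 * theta))   # smooth, k-independent envelope (synthetic)
phi = np.cos(theta)                                   # geometrical-optics phase for incidence (1,0)

for k in (50, 100, 200, 400):
    eta = a * np.exp(1j * k * phi)                    # oscillatory "surface current"
    print(f"k={k:4d}:  modes for eta = {modes_needed(eta):5d},"
          f"  modes for eta*exp(-ik*phi) = {modes_needed(eta * np.exp(-1j * k * phi)):3d}")
```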
to review the algorithm in and set the stage in the rest of the paper, we first describe the phase functions in combination with the various regions they determine on the boundary of the scatterers , and we present one of the main results in that displays the asymptotic characteristics of the amplitudes .indeed , in case the obstacles and are convex and satisfy the no occlusion condition with respect to the direction of incidence , the phase in is given by here , for any of the two _ obstacle paths _ and defined by for all , the geometrical phase at any point ( ) is uniquely defined as where the points are specified by for .these conditions simply mean the phase is determined by the ray with initial direction sequentially hitting at and bouncing off the points ( ) according to the law of reflection to finally arrive at . moreover , these rays divide into two open connected subsets , namely the _ illuminated regions _ and the _ shadow regions _ and their closures intersect at the _ shadow boundaries _ each of which consists of two points in two - dimensional configurations or a smooth closed curve in three - dimensions . in connection with the phase functions , illuminated regions , shadow regions , andthe shadow boundaries are then given by generally speaking this means that the rays emanating from return to after an even number of reflections , and those initiating from arrive after an odd number of reflections .finally let us note that the phase functions are smooth and periodic as they are confined to the boundary of the associated scatterers .the computation of these phases are performed using a spectrally accurate geometrical optics solver .this also allows for a simple and accurate determination of the shadow boundary points and thus the illuminated and shadow regions . with these definitions we can now state one of the main results in that completely clarifies the asymptotic behavior of amplitudes in .[ thm : hormander ] * on the illuminated region , belongs to the hrmander class ( cf . ) and admits the asymptotic expansion where are complex - valued functions .consequently , for any , the difference belongs to and thus satisfies the estimates on any compact subset of for any multi - index and .* over the entire boundary , belongs to the hrmander class and admits the asymptotic expansion where are complex - valued functions , is a real - valued function that is positive on , negative on , and vanishes precisely to first order on , and the function admits the asymptotic expansion and it is rapidly decreasing in the sense of schwartz as . 
note specifically then , for any , the difference belongs to , , and thus satisfies the estimates for any multi - index and . the first main ingredient underlying the algorithm in was the observation that , while admits a classical asymptotic expansion in the illuminated region as displayed by equation , it possesses boundary layers of order around the shadow boundaries and rapidly decays in the shadow region as implied by the expansion and the mentioned change in the asymptotic expansions of the function . therefore , as depicted in , utilizing a cubic root change of variables in around the shadow boundaries , the unknown can be expressed in a number of degrees of freedom independent of frequency , and this transforms the problem into the evaluation of highly oscillatory integrals . indeed , a second main element of the algorithm in is based on the realization that the identity combined with the asymptotic expansions of hankel functions entails and thus , in light of factorization , equations - take on the form and , for , where as depicted in , frequency independent evaluations of integrals in - can then be accomplished to any desired accuracy utilizing a _ localized integration _ ( around stationary points of the combined phase and/or the singularities of the integrand ) procedure based upon suitable extensions of the method of stationary phase . the third main element of the algorithm in is the use of nyström and trapezoidal discretizations and fourier interpolations to render the method high order , and the scheme is finally completed with a matrix - free krylov subspace linear algebra solver to obtain accelerated solutions . while the above discussion provides a brief summary of the algorithm in , it clearly signifies the importance of retaining the phase information in connection with the multiple scattering iterations since this allows for a simple utilization of the aforementioned localized integration scheme . accordingly , any strategy aiming at accelerating the convergence of the neumann series must also preserve the phase information . as we explain below , both the novel krylov subspace method we develop in the next section and its preconditioning discussed in section [ sec : kirchhoff ] possess this property . as with the solution of matrix equations , krylov subspace methods provide a convenient mechanism for the approximate solution of operator equations in hilbert spaces ( see e.g. and the references therein ) . these methods are _ orthogonal projection methods _ wherein , given an initial approximation to , one seeks an approximate solution from the affine space related with the _ krylov subspace _ of the operator associated with the _ residual _ imposing the _ petrov - galerkin condition _ in connection with the operator equation , taking , the approximate solution belongs to the krylov subspace for which , in light of identity , the functions can be expressed as linear combinations of the multiple scattering iterations through use of the binomial theorem as this relation clearly entails and thus , any information about the krylov subspace can be obtained in frequency independent computational times using the algorithm briefly described in section 4 . a particular krylov subspace method we favor for the solution of the multiple scattering problem is the classical orthodir iteration which , for the initial guess , takes on the form this iteration entails , through a straightforward induction argument , the following recurrence relation for where is the iteration operator specified by equation .
for , the iterates generated by the orthodir algorithm satisfy the recurrence relation although this relation can be used in combination with the binomial identity to recursively compute , this approach is bound to result in numerical instabilities when the distance between the obstacles and is close to zero since , in this case , the asymptotic rate of convergence is close to .concentrating for instance on the term , this instability is apparent from the subtractive cancellations in binomial identity upon noting that and for by inequality and theorem [ thm:2per ] . on the other hand ,since , a combined use of and clearly shows that the iterates generated by the orthodir algorithms can alternatively be computed through the following _ identification procedure_. each is a linear combination of , say this allows for the computation of the next iterate as where the new coefficients are easily computed by identification .note specifically that , since the phases of are known , identity allows for a utilization of the _ localized integration _scheme briefly summarized in 4 in the evaluation of inner products in steps 2.1 and 2.4 in the orthodir iteration . on the other hand ,the identification procedure provides a _ numerically stable _ way of recursively computing as it clearly eliminates subtractive cancellations arising from the use of binomial identity .although the novel krylov subspace approach discussed in the previous section provides an effective mechanism for the accelerated solution of multiple scattering problem , this can be further improved if the operator equation is properly preconditioned .indeed , for an appropriately defined operator approximating the iteration operator , the preconditioned form of equation reads in this connection , we note the following useful alternative .[ thm : altpre ] if the spectral radius of is strictly less than , then the preconditioned equation can be written alternatively as since , we have the neumann series representation use of in the identity delivers the desired result .it is therefore natural to approximate the solution of with the solution of the truncated equation which we shall write as while equation displays the preconditioning strategy we shall utilize for the solution of multiple scattering problem , it is clearly amenable to a treatment by the krylov subspace method developed in the preceding section to further accelerate the solution of problem . as for the requirement that has to approximate the iteration operator , we recall that each application of corresponds exactly to the evaluation of the surface current on each of the obstacles and generated by the fields scattered from , respectively , and at the previous reflection as depicted by the identity it is therefore reasonable to define the operator in the form and require that . 
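before turning to the specific form of these operators , the effect of such a preconditioner can be sketched on a generic linear system ( python / numpy ) . the matrices t and tk below are stand - ins for the multiple scattering iteration operator and for a cheap approximation of it — they are not the integral operators of the paper — and for simplicity the outer solver is a plain stationary iteration rather than the orthodir recursion ; the sketch only illustrates why applying a truncated neumann series of the approximate operator as a preconditioner cuts the number of applications of the expensive operator roughly by the truncation length .

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Stand-ins: T has spectral radius close to 1 (slow Neumann series); Tk is a
# crude approximation of T that is assumed to be cheap to apply.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = Q @ np.diag(np.linspace(0.0, 0.97, n)) @ Q.T
Tk = 0.99 * T
b = rng.standard_normal(n)
x_exact = np.linalg.solve(np.eye(n) - T, b)
tol, p = 1e-8, 9

# (a) plain Neumann series for (I - T) x = b:  x_m = sum_{j<=m} T^j b
x, term, plain = b.copy(), b.copy(), 0
while np.linalg.norm(x - x_exact) > tol * np.linalg.norm(x_exact) and plain < 5000:
    term = T @ term
    x = x + term
    plain += 1

# (b) stationary iteration preconditioned by the truncated Neumann series of Tk:
#     x <- x + M r,   r = b - (I - T) x,   M = sum_{j=0}^{p} Tk^j   (cheap applications)
def apply_M(r):
    s, t = r.copy(), r.copy()
    for _ in range(p):
        t = Tk @ t
        s = s + t
    return s

x, precond = np.zeros(n), 0
while np.linalg.norm(x - x_exact) > tol * np.linalg.norm(x_exact) and precond < 5000:
    x = x + apply_M(b - (x - T @ x))   # exactly one application of the full operator T
    precond += 1

print("applications of the full operator T to reach 1e-8 relative error:")
print("  plain neumann series           :", plain)
print("  truncated-series preconditioner:", precond)
```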
accordingly , the operators must retain the phase information to preserve the frequency independent operation count while , concurrently , providing a reasonable approximation to the slow densities to guarantee an accurate preconditioning .this requirement can be satisfied only if the operators are defined in a _dynamical _ manner so as to respect the information associated with the iterates , and this distinguishes our preconditioning strategy from classical approaches where the preconditioners are _ steady _ by design .the most natural approach is to design the operators so that they yield the classical _ kirchhoff approximations _ as these preserve the phase information exactly and approximate with the leading term in its asymptotic expansion .concentrating on two - dimensional settings , in this connection , a basic relation we exploited in was the observation that while , on the one hand , this term coincides with that of twice the normal derivative of in on the illuminated region , and on the other hand , identity combined with asymptotic expansions of hankel functions entails so that use of in yields where as for the oscillatory integral in , as we have shown in , it is treatable through an appropriate use of _ stationary phase method _ which states that the main contribution to an oscillatory integral comes from the stationary points of the phase .let ] .suppose that is the only stationary point of in , , and .then there exists a constant such that , for all , }.\ ] ] indeed , it turns out that the combined phase function has two stationary points , one in the shadow region with a contribution of ( for all ) due to rapid decay of the amplitude , and another one in the illuminated region given by ( at which the combined phase has a positive `` second derivative '' ) whose contribution agrees , to leading order , with that given by stationary phase evaluation of the integral in .while this discussion clarifies how _ kirchhoff operators _ must be designed so that they yield the leading terms in the asymptotic expansions of on the illuminated regions at each iteration , the rapid decay of in the shadow region , in turn , provides the motivation that must simply approximate by zero in these regions . being aware of these, we use to denote the arc length parametrezation of ( in the counterclockwise orientation ) with period ( ) so that , for each , is the unique point in with , and define the _ kirchhoff operators _ as follows . for a smooth _ phase _ having the property that , for each , the function given by has a unique stationary point such that \cap \partial \omega_{j ' } = y_{j'} ] and using ( ) to denote generic functions defined on which may be different from line to line , we thus see through equations - that is of the form more generally , we have the following result . 
for ,the orthodir iterates are of the form this follows by a straightforward induction based on equations - , and the recursion the main point behind this theorem is that use of in clearly allows for an application of the aforementioned localized integration scheme in connection with the execution of the operator in .moreover , it is clear that each realization of the kirchhoff operator is frequency independent .consequently , the preconditioned equation is amenable to a treatment by the krylov subspace method described in 5 to obtain even more accelerated solutions of the multiple scattering problem while still retaining the frequency independent operation count if desired .here we present numerical examples that display the benefits of our krylov subspace approach as well as its preconditioning through use of kirchhoff approximations . to this end ,we have designed two different test configurations ( see fig .[ fig : configurations ] ) .first we have considered two circles illuminated by a plane - wave incidence coming in from the left with wavenumber . while the radii of the circles are and , they are centered at the origin and respectively .second we have treated a configuration consisting of two parallel elliptical obstacles with centers at and , and major / minor axes and . the illumination is provided by a plane wave with direction along the major axes and wavenumber . figure [ fig : krylov ] provides a comparison of ( a ) the neumann series , ( b ) the pad approximants , ( c ) the krylov subspace method based on a combined use of binomial formula and identity , and ( d ) the alternative implementation of the latter based on decomposition leading to equation . more precisely , figure [ fig : krylov ] depicts the number of reflections versus the logarithmic error between the exact solution and the approximations obtained by the four aforementioned schemes . in both cases ,the reference solution is computed using an integral solver with sufficiently many disretization points to guarantee digits of accuracy .as we anticipated , combined use of binomial formula and identity suffers from subtractive cancellations and fails to approximate the solution as the number of reflections increases .the implementation of krylov subspace method based on decomposition and resulting equation clearly resolves this issue .furthermore , when compared with the pad approximants considered in , approximations provided by this alternative implementation of the krylov subspace method are more stable and give better accuracy at each iteration .incidentally , note specifically that a direct use of neumann series would require about iterations to obtain digits of accuracy for circular / elliptical configurations in figure [ fig : configurations ] , and thus our krylov subspace approach provides savings of in the required number of reflections .finally , in figure [ fig : kirchhoff ] , we display a comparison of ( a ) the neumann series , ( b ) the stable implementation of our krylov subspace approach based on decomposition and equation , and ( c ) kirchhoff preconditioning of the latter .note precisely that ( c ) is based on the krylov subspace iterations ( described in [ sec : krylov ] ) applied to the truncated version of preconditioned form of the multiple scattering problem utilizing the kirchhoff operator . 
in our implementationswe have taken in equation and used for the circular / elliptical configurations in figure [ fig : configurations ] .as depicted in figure [ fig : kirchhoff ] , in both cases only three orthodir iterations are sufficient to obtain of accuracy which would require 20/100 iterations if neumann series is directly used .the fact that the error does not attain the machine precision is due to the truncation of the series used to compute the preconditioner ( ) .obviously inclusion of more terms yields better accuracy but at the expense of slightly more expansive numerics .we have developed an acceleration strategy for the solution of multiple scattering problems based on a novel and effective use of krylov subspace methods that retains the phase information and provides significant savings in computational times .further , we have coupled this approach with an original preconditioning strategy based upon kirchhoff approximations that greatly reduces the number of iterations needed to obtain a prescribed accuracy . in the forthcoming work, we will extend this numerical algorithm for configurations of more than two obstacles .indeed , our new krylov method can be easily applied to this kind of configurations without adding any additional computational cost . on the other hand , although the kirchhoff preconditioner greatly enhances the convergence of the krylov subspace method for two obstacles , its utilization for several obstacles requires some numerical optimization .y. boubendir gratefully acknowledges support from nsf through grant no .dms-1319720 .abboud , t. , ndlec , j .- c ., zhou , b. : improvement of the integral equation method for high frequency problems , in mathematical and numerical aspects of wave propagation : mandelieu - la napoule , siam , ( 1995 ) , 178187 .anand , a. , boubendir , y. , ecevit , f. , reitich , f. : analysis of multiple scattering iterations for high - frequency scattering problems .ii . the three - dimensional scalar case , numer . math .* 114*(3 ) ( 2010 ) , 373427 .antoine , x. : advances in the on - surface radiation condition method : theory , numerics and applications , in f. magoules ( ed . ) , comput .meth . for acoustics problems , saxe - coburg publ .stirlingshire , uk ( 2008 ) , 207232 .bruno , o.p . ,geuzaine , c.a . ,monroe , j.a ., reitich f. : prescribed error tolerances within fixed computational times for scattering problems of arbitrarily high frequency : the convex case , phil . trans .london * 362 * ( 2004 ) , 629645 .davies , r. w. , morgan , k. , hassan , o. : a high order hybrid finite element method applied to the solution of electromagnetic wave scattering problems in the time domain , comput .mech . * 44*(3 ) ( 2009 ) , 321331 .tong , m.s . , chew , w.c .: multilevel fast multipole acceleration in the nystrm discretization of surface electromagnetic integral equations for composite objects , ieee trans .antennas and propagation * 58*(10 ) ( 2010 ) , 34113416 .
|
high frequency integral equation methodologies display the capability of reproducing single - scattering returns in frequency - independent computational times and employ a neumann series formulation to handle multiple - scattering effects . this requires the solution of an enormously large number of single - scattering problems to attain a reasonable numerical accuracy in geometrically challenging configurations . here we propose a novel and effective krylov subspace method suitable for the use of high frequency integral equation techniques and significantly accelerates the convergence of neumann series . we additionally complement this strategy utilizing a preconditioner based upon kirchhoff approximations that provides a further reduction in the overall computational cost .
|
we consider the problem of distributed consensus that has become recently very interesting especially in the context of ad hoc sensor networks . in particular , the problem of distributed average consensus has attracted a lot of research efforts due to its numerous applications in diverse areas .a few examples include distributed estimation , distributed compression , coordination of networks of autonomous agents and computation of averages and least - squares in a distributed fashion ( see e.g. , and references therein ) . in general the main goal of distributed consensusis to reach a global solution using only local computation and communication while staying robust to changes in the network topology . given the initial values at the sensors , the problem of distributed averaging is to compute their average _ at each sensor _ using distributed linear iterations .each distributed iteration involves local communication among the sensors . in particular, each sensor updates its own local estimate of the average by a weighted linear combination of the corresponding estimates of its neighbors .the weights that are represented in a network weight matrix typically drive the importance of the measurements of the different neighbors .one of the important characteristics of the distributed consensus algorithms is the rate of convergence to the asymptotic solution . in many cases, the average consensus solution can be reached by successive multiplications of with the vector of initial sensor values .furthermore , it has been shown in that in the case of fixed network topology , the convergence rate depends on the second largest eigenvalue of , .in particular , the convergence is faster when the value of is small .similar convergence results have been proposed recently in the case of random network topology , where the convergence rate is governed by the expected value of the , ] the vector of initial values on the network .denote by the average of the initial values of the sensors .however , one rarely has a complete view of the network .the problem of distributed averaging therefore becomes typically to compute _ at each sensor _ by distributed linear iterations . in what follows we review the main convergence results for distributed consensus algorithms on both fixed and random network topologies .we model the static network topology as an undirected graph with nodes corresponding to sensors .an edge is drawn if and only if sensor can communicate with sensor .we denote the set of neighbors for node as .unless otherwise stated , we assume that each graph is simple i.e. , no loops or multiple edges are allowed . in this work ,we consider distributed linear iterations of the following form for , where represents the value computed by sensor at iteration .since the sensors communicate in each iteration , we assume that they are synchronized .the parameters denote the edge weights of . since each sensor communicates only with its direct neighbors , when .the above iteration can be compactly written in the following form or more generally we call the matrix that gathers the edge weights , as the weight matrix .note that is a sparse matrix whose sparsity pattern is driven by the network topology .we assume that is symmetric , and we denote its eigenvalue decomposition as .the ( real ) eigenvalues can further be arranged as follows : the distributed linear iteration given in eq . ( [ eq : distliniter2 ] ) converges to the average if and only if where is the vector of ones . 
indeed , notice that in this case it has been shown that for fixed network topology the convergence rate of eq .( [ eq : distliniter2 ] ) depends on the magnitude of the second largest eigenvalue .the asymptotic convergence factor is defined as and the per - step convergence factor is written as furthermore , it has been shown that the convergence rate relates to the spectrum of , as given by the following theorem .[ thm : convfixed ] the convergence given by eq .( [ eq : condw ] ) is guaranteed if and only if where denotes the spectral radius of a matrix .furthermore , according to the above theorem , is a left and right eigenvector of associated with the eigenvalue one , and the magnitude of all other eigenvalues is strictly less than one .note finally , that since is symmetric , the asymptotic convergence factor coincides with the per - step convergence factor , which implies that the relations ( [ eq : condasym ] ) and ( [ eq : condstep ] ) are equivalent .we give now an alternate proof of the above theorem that illustrates the importance of the second largest eigenvalue in the convergence rate .we expand the initial state vector to the orthogonal eigenbasis of ; that is , where and .we further assume that .then , eq . ( [ eq : x_t ] ) implies that observe now that if , then in the limit , the second term in the above equation decays and we see that the smaller the value of , the faster the convergence rate .analogous convergence results hold in the case of dynamic network topologies discussed next .let us consider now networks with random link failures , where the state of a link changes over the iterations .in particular , we use the random network model proposed in .we assume that the network at any arbitrary iteration is , where denotes the edge set at iteration , or equivalently at time instant . since the network is dynamic , the edge set changes over the iterations , as links fail at random .we assume that , where is the set of realizable edges when there is no link failure .we also assume that each link fails with a probability , independently of the other links .two random edge sets and at different iterations and are independent . the probability of forming a particular is thus given by .we define the matrix as the matrix is symmetric and its diagonal elements are zero , since it corresponds to a simple graph .it represents the probabilities of edge formation in the network , and the edge set is therefore a random subset of driven by the matrix .finally , the weight matrix becomes dependent on the edge set since only the weights of existing edges can take non zero values . 
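a compact numerical sketch of this random link - failure model is given below ( python / numpy ) . the laplacian - type weight construction and all parameter values are illustrative choices of ours , not necessarily those used later in the experiments ; the sketch simply samples an edge set at every iteration , applies the corresponding weight matrix , and reports both the shrinking disagreement and the second largest eigenvalue of the average weight matrix , a quantity that reappears in the discussion below .

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# Realizable edges: a random geometric graph; every realized link then fails
# independently with probability p_fail at each iteration, as in the model above.
pts = rng.random((n, 2))
adj_full = ((((pts[:, None] - pts[None, :]) ** 2).sum(-1) < 0.15)
            & ~np.eye(n, dtype=bool)).astype(float)
L_full = np.diag(adj_full.sum(1)) - adj_full
alpha = 1.0 / np.linalg.eigvalsh(L_full).max()     # valid step size for every sub-edge-set
p_fail = 0.3

def sample_W():
    keep = (rng.random((n, n)) > p_fail).astype(float)
    keep = np.triu(keep, 1)
    keep = keep + keep.T                           # one Bernoulli variable per undirected link
    adj = adj_full * keep
    L = np.diag(adj.sum(1)) - adj
    return np.eye(n) - alpha * L                   # symmetric, rows and columns sum to one

x = rng.random(n)
avg = x.mean()
for t in range(1, 151):
    x = sample_W() @ x
    if t % 50 == 0:
        print(f"t={t:3d}   max deviation from the initial average = {np.abs(x - avg).max():.2e}")

W_bar = np.eye(n) - alpha * (1.0 - p_fail) * L_full   # average weight matrix E[W(t)]
lam2_bar = np.sort(np.abs(np.linalg.eigvalsh(W_bar)))[-2]
print("second largest |eigenvalue| of the average weight matrix:", round(lam2_bar, 4))
```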
in the dynamic case , the distributed linear iteration of eq .( [ eq : distliniter1 ] ) becomes or in compact form , where denotes the weight matrix corresponding to the graph realization of iteration .the iterative relation given by eq .( [ eq : distliniter2dyn ] ) can be written as clearly , now represents a stochastic process since the edges are drawn randomly .the convergence rate to the consensus solution therefore depends on the behavior of the product .we say that the algorithm converges if we review now some convergence results from , which first shows that for any , it leads to the following convergence theorem for dynamic networks .[ thm : convdyn ] if < 1 ] is also a necessary and sufficient condition for asymptotic ( almost sure ) convergence of the consensus algorithm in the case of random networks , where both network topology and weights are random ( in particular i.i.d and independent over time ) .finally , it is interesting to note that the consensus problem in a random network relates to gossip algorithms . distributed averaging under the synchronous gossip constraint implies that multiple node pairs may communicate simultaneously only if these node pairs are disjoint . in other words ,the set of links implied by the active node pairs forms a matching of the graph .therefore , the distributed averaging problem described above is closely related to the distributed synchronous algorithm under the gossip constraint that has been proposed in ( * ? ? ?it has been shown in this case that the averaging time ( or convergence rate ) of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic network matrix .as we have seen above , the convergence rate of the distributed consensus algorithms depends in general on the spectral properties of an induced network matrix .this is the case for both fixed and random network topologies .most of the research work has been devoted to finding weight matrix for accelerating the convergence to the consensus solution when sensors only use their current estimates .we choose a different approach where we exploit the memory of sensors , or the values of previous estimates in order to augment to convergence rate .therefore , we have proposed in our previous work the scalar epsilon algorithm ( sea ) for accelerating the convergence rate to the consensus solution .sea belongs to the family of extrapolation methods for accelerating vector sequences , such as eq .( [ eq : distliniter2 ] ) .these methods exploit the fact that the fixed point of the sequence belongs to the subspace spanned by any consecutive terms of it , where is the degree of the minimal polynomial of the sequence generator matrix ( for more details , see and references therein ) .sea is a low complexity algorithm , which is ideal for sensor networks and it is known to reach the consensus solution in steps .however , is unknown in practice , so one may use all the available terms of the vector sequence . hence , the memory requirements of sea are , where is the number of terms .moreover , sea assumes that the sequence generator matrix ( e.g. , in the case of eq .( [ eq : distliniter2 ] ) ) is fixed , so that it does not adapt easily to dynamic network topologies . in this paper, we propose a more flexible algorithm based on the polynomial filtering technique .polynomial filtering permits to shape " the spectrum of a certain symmetric weight matrix , in order to accelerate the convergence to the consensus solution . 
similarly to sea, it allows the sensors to use the value of their previous estimates .however , the polynomial filtering methodology introduced below presents three main advantages : ( i ) it is robust to dynamic topologies ( ii ) it has explicit control on the convergence rate and ( iii ) its memory requirements can be adjusted to the memory constraints imposed by the sensor .starting from a given ( possibly optimal ) weight matrix , we propose the application of a polynomial filter on the spectrum of in order to impact the magnitude of that mainly drives the convergence rate .denote by the polynomial filter of degree that is applied on the spectrum of , accordingly , the matrix polynomial is given as observe now that which implies that the eigenvalues of are simply the polynomial filtered eigenvalues of i.e. , .in the implementation level , working on implies a periodic update of the current sensor s value with a linear combination of its previous values . to see why this is true, we observe that : a careful design of may impact the convergence rate dramatically .then , each sensor typically applies polynomial filtering for distributed consensus by following the main steps tabulated in algorithm [ algo : pfconsensus ] .[ 1 ] * input * : polynomial coefficients , tolerance . *output * : average estimate .* initialization * : .set the iteration index . . . increase the iteration index . . note that , for both fixed and random network topology cases , the s are computed off - line assuming that and respectively ] and we impose smoothness constraints of at the left endpoint .in particular , the polynomial \rightarrow \mathbb{r} ] .since depends on a dynamic edge set , now becomes stochastic .following the same intuition as above , we could form an optimization problem , similar to opt1 , whose objective function would be ] , which is much easier to evaluate .let denote the average weight matrix ] to be small too .additionally , the authors provide experimental evidence in , which indicates that seems to be closely related to the convergence rate of eq .( [ eq : distliniter2dyn ] ) .based on the above facts , we propose to build our polynomial filter based on .hence , we formulate the following optimization problem for computing the polynomial coefficients s in the random network topology case .opt3 could be viewed as the analog of opt1 for the case of dynamic network topology .the main difference is that we work on , whose eigenvalues can be easily obtained . using again the auxiliary variable , we reach the following formulation for obtaining the s .once has been computed , this optimization problem is solved efficiently by a sdp similarly to the case of static networks .in this section , we provide simulation results which show the effectiveness of the polynomial filtering methodology .first we introduce a few weight matrices that have been extensively used in the distributed averaging literature .suppose that denotes the degree of the -th sensor .it has been shown in that iterating eq .( [ eq : distliniter2 ] ) with the following matrices leads to convergence to . 
* _ maximum - degree _ weights .the maximum - degree weight matrix is * _ metropolis _ weights .the metropolis weight matrix is * _ laplacian _ weights .suppose that is the adjacency matrix of and is a diagonal matrix which holds the vertex degrees .the laplacian matrix is defined as and the laplacian weight matrix is defined as where the scalar must satisfy .the sensor networks are built using the random geographic graph model .in particular , we place nodes uniformly distributed on the 2-dimensional unit area .two nodes are adjacent if their euclidean distance is smaller than in order to guarantee connectedness of the graph with high probability . finally , the sdp programs for optimizing the polynomial filters are solved in matlab using the sedumi solver .we illustrate first the effect of polynomial filtering on the spectrum of .we build a network of sensors and we apply polynomial filtering on the maximum - degree weight matrix , given in ( [ eq : maxdeg ] ) .we use and we solve the optimization problem opt2 using the maximum - degree matrix as input .figure [ fig : polfiltmaxdeg ] shows the obtained polynomial filter , when $ ] .next , we apply the polynomial on and figure [ fig : eigsmaxdeg ] shows the spectrum of before ( star - solid line ) and after ( circle - solid line ) polynomial filtering , versus the vector index .observe that polynomial filtering dramatically increases the spectral gap , which further leads to accelerating the distributed consensus , as we show in the simulations that follow .then we compare the performance of the different distributed consensus algorithms , with all the aforementioned weight matrices ; that is , maximum - degree , metropolis and laplacian weight matrices for distributed averaging .we compare both newton s polynomial and the sdp polynomial ( obtained from the solution of opt2 ) with the standard iterative method , which is based on successive iterations of eq .( [ eq : distliniter2 ] ) . for the sake of completeness, we also provide the results of the scalar epsilon algorithm ( sea ) that uses all previous estimates .first , we explore the behavior of polynomial filtering methods under variable degree from 2 to 6 with step 2 .we use the laplacian weight matrix for this experiment .figures [ fig : exper6sdp ] and [ fig : exper6herm ] illustrate the evolution of the absolute error versus the iteration index , for polynomial filtering with sdp and newton s polynomials respectively .we also provide the curve of the standard iterative method as a baseline .observe first that both polynomial filtering methods outperform the standard method by exhibiting faster convergence rates , across all values of .notice also , that the degree governs the convergence rate , since larger implies more effective filtering and therefore faster convergence . finally , the stagnation of the convergence process of the sdp polynomial filtering and large values of is due to the limited accuracy of the sdp solver .next , we show the results obtained with the other two weight matrices on the same sensor network. figures [ fig : exper1maxdeg ] and [ fig : exper1metr ] show the convergence behavior of all methods for the maximum - degree and metropolis matrices respectively . 
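the spectrum - shaping effect described above can be reproduced in a few lines of python . the sketch below builds laplacian weights on a small random geometric graph and applies a degree - m chebyshev polynomial normalized so that p(1)=1 ; this is only a simple heuristic filter of ours , not the sdp - optimized or newton polynomials used in our experiments , but it already shows how the second largest eigenvalue — and hence the contraction obtained from combining the last m+1 estimates — improves .

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

rng = np.random.default_rng(0)
n = 60
pts = rng.random((n, 2))
adj = ((((pts[:, None] - pts[None, :]) ** 2).sum(-1) < 0.15)
       & ~np.eye(n, dtype=bool)).astype(float)
L = np.diag(adj.sum(1)) - adj
W = np.eye(n) - L / np.linalg.eigvalsh(L).max()       # Laplacian weights, W @ 1 = 1

vals, vecs = np.linalg.eigh(W)                         # ascending, vals[-1] = 1
a, b = vals[0], vals[-2]                               # interval holding the non-unit spectrum
m = 4                                                  # filter degree: combines the last m+1 estimates

# Degree-m Chebyshev polynomial shifted to [a, b], normalized so that p(1) = 1
# (the consensus value is preserved while the rest of the spectrum is shrunk).
c = [0.0] * m + [1.0]
scale = cheb.chebval((2.0 - (a + b)) / (b - a), c)
p = lambda x: cheb.chebval((2.0 * x - (a + b)) / (b - a), c) / scale

lam2 = max(abs(vals[0]), abs(vals[-2]))                # second largest |eigenvalue| of W
lam2_f = np.abs(p(vals[:-1])).max()                    # same quantity for p(W)
print(f"plain W : lam2 = {lam2:.4f}   after {m} rounds: {lam2 ** m:.4f}")
print(f"filtered: lam2(p(W)) = {lam2_f:.4f}   (one filtered update uses the same {m} rounds)")

pW = vecs @ np.diag(p(vals)) @ vecs.T                  # matrix polynomial p(W)
x0 = rng.random(n)
avg = x0.mean()
xm = np.linalg.matrix_power(W, m) @ x0                 # m plain averaging rounds
xf = pW @ x0                                           # filtered combination of the same iterates
print(f"max deviation from mean:  plain {np.abs(xm - avg).max():.2e}   filtered {np.abs(xf - avg).max():.2e}")
```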
in both polynomial filtering methods we use a representative value of , namely 4 .notice again that polynomial filtering accelerates the convergence of the standard iterative method ( solid line ) .as expected , the optimal polynomial computed with sdp outperforms newton s polynomial , which is based on intuitive arguments only .finally , we can see from figures [ fig : fixedtopologyvariablek ] and [ fig : fixedtopology ] that in some cases the convergence rate is comparable for sea and sdp polynomial filtering .note however that the former uses all previous iterates , in contrast to the latter that uses only the most recent ones .hence , the memory requirements are smaller for polynomial filtering , since they are directly driven by .this moreover allows more direct control on the convergence rate , as we have seen in fig .[ fig : fixedtopologyvariablek ] .interestingly , we see that the convergence process is smoother with polynomial filtering , which further permits easy extension to dynamic network topologies .we study now the performance of polynomial filtering for dynamic networks topologies .we build a sequence of random networks of sensors , and we assume that in each iteration the network topology changes independently from the previous iterations , with probability and with probability it remains the same as in the previous iteration .we compare all methods for different values of the probability .we use the laplacian weight matrix ( [ eq : lapweight ] ) . in the sdp polynomial filtering method, we solve the sdp program opt4 ( see sec .[ sec : pfrandomtopology ] ) .[ fig : randomtopology ] shows the average performance of polynomial filtering for some representative values of the degree and the probability .the average performance is computed using the median over the 100 experiments .we have not reported the performance of the sea algorithm , since it is not robust to changes of the network topology .notice that when ( i.e. , each sensor uses only its current value and the right previous one ) polynomial filtering accelerates the convergence over the standard method . at the same time, it stays robust to network topology changes .also , observe that in this case , the sdp polynomial outperforms newton s polynomial .however , when , the roles between the two polynomial filtering methods change as the probability increases .for instance , when , the sdp method even diverges .this is expected if we think that the coefficients of newton s polynomial are computed using hermite interpolation in a given interval and they do not depend on the specific realization of the underlying weight matrix .thus , they are more generic than those of the sdp polynomial that takes into account , and therefore less sensitive to the actual topology realization . algorithms based on optimal polynomialfiltering become inefficient in a highly dynamic network , whose topology changes very frequently .several works have studied the convergence rate of distributed consensus algorithms . in particular , the authors in and have shown that the convergence rate depends on the second largest eigenvalue of the network weight matrix , for fixed and random networks , respectively .they both use semi - definite programs to compute the optimal weight matrix , and the optimal topology .other works have addressed the consensus problem , and we mention here only the most relevant ones .a. olshevsky and j. n. 
tsitsiklis in propose two consensus algorithms for fixed network topologies , which build on the agreement algorithm " .the proposed algorithms make use of spanning trees and the authors bound their worst - case convergence rate . for dynamic network topologies, they propose an algorithm which builds on a previously known distributed load balancing algorithm . in this case , the authors show that the algorithm has a polynomial bound on the convergence time ( -convergence ) .the authors in study the convergence properties of agreement over random networks following the erds and rnyi random graph model . according to this model , each edge of the graph exists with probability , independently of other edges andthe value of is the same for all edges . by agreement, we consider the case where all nodes of the graph agree on a particular value .the authors employ results from stochastic stability in order to establish convergence of agreement over random networks .also , it is shown that the rate of convergence is governed by the expectation of an exponential factor , which involves the second smallest eigenvalue of the laplacian of the graph .gossip algorithms have also been applied successfully to solving distributed averaging problems . in provide convergence results on randomized gossip algorithm in both synchronous and asynchronous settings .based on the obtained results , they optimize the network topology ( edge formation probabilities ) in order to maximize the convergence rate of randomized gossip . this optimization problem is also formulated as a semi - definite program ( sdp ) . in a recent study ,the authors in have been able to improve the standard gossip protocols in cases where the sensors know their geometric positions .the main idea is to exploit geographic routing in order to aggregate values among random nodes that are far away in the network . under the same assumption of knowing the geometric positions of the sensors , the authors in propose a fast consensus algorithm for geographic random graphs . in particular , they utilize location information of the sensors in order to construct a nonreversible lifted markov chain that mixes faster than corresponding reversible chains .the main idea of lifting is to distinguish the graph nodes from the states of the markov chain and to split " the states into virtual states that are connected in such a way that permits faster mixing .the lifted graph is then projected " back to the original graph , where the dynamics of the lifted markov chain are simulated subject to the original graph topology .however , the proposed algorithm is not applicable in the case where the nodes geographic location is not available .in the authors propose a cluster - based distributed averaging algorithm , applicable to both fixed linear iteration and random gossiping .the induced overlay graph that is constructed by clustering the nodes is better connected relatively to the original graph ; hence , the random walk on the overlay graph mixes faster than the corresponding walk on the original graph . along the same lines ,k. jung et al . in , have used nonreversible lifted markov chains to accelerate consensus .they use the lifting scheme of and they propose a deterministic gossip algorithm based on a set of disjoint maximal matchings , in order to simulate the dynamics of the lifted markov chain . 
finally , even if we have mostly considered synchronous algorithms in this paper , it is worth mentioning that the authors in propose two asynchronous algorithms for distributed averaging .the first algorithm is based on blocking ( that is , when two nodes update their values they block until the update has been completed ) and the other algorithm drops the blocking assumption .the authors show the convergence of both algorithms under very general asynchronous timing assumptions .moreover , the authors in propose _ consensus propagation _, which is an asynchronous distributed protocol that is a special case of belief propagation . in the case of singly - connected graphs ( i.e. , connected with no loops ), synchronous consensus propagation converges in a number of iterations that is equal to the diameter of the graph .the authors provide convergence analysis for regular graphs .in this paper , we proposed a polynomial filtering methodology in order to accelerate distributed average consensus in both fixed and random network topologies .the main idea of polynomial filtering is to shape the spectrum of the polynomial weight matrix in order to minimize its second largest eigenvalue and subsequently increase the convergence rate .we have constructed semi - definite programs to compute the optimal polynomial coefficients in both static and dynamic networks .simulation results with several common weight matrices have shown that the convergence rate is much higher than for state - of - the - art algorithms in most scenarios , except in the specific case of highly dynamic networks and small memory sensors .the first author would like to thank prof .yousef saad for the valuable and insightful discussions on polynomial filtering .i. d. schizas , a. ribeiro , and g. b. giannakis , `` consensus in ad hoc wsns with noisy links - part i : distributed estimation of deterministic signals , '' _ ieee transactions on singal processing _ ,56 , no . 1 ,350364 , january 2008 .m. rabbat , j. haupt , a. singh , and r. nowak , `` decentralized compression and predistribution via randomized gossiping , '' _5th acm int . conf . on information processing in sensor networks ( ipsn )_ , pp . 51 59 , april 2006 . v. d. blondel , j. m. hendrickx , a. olshevsky , and j. n. tsitsiklis , `` convergence in multiagent coordination , consensus and flocking , '' _ ieee conf . on decision and control , and the european control conference, pp . 29963000 , december 2005 .l. xiao , s. boyd , and s. lall , `` a scheme for robust distributed sensor fusion based on average consensus , '' _ int .conf . on information processing in sensor networks _ , pp .6370 , april 2005 , los angeles .s. sundaram and c. n. hadjicostis , `` distributed consensus and linear functional calculation in networks : an observability perspective , '' _6th acm int . conf . on information processing in sensor networks ( ipsn )_ , april 25 - 27 2007 .j. f. sturm , `` implementation of interior point methods for mixed semidefinite and second order cone optimization problems , '' _ econpapers 73 _ , august 2002 , tilburg university , center for economic research .m. mehyar , d. spanos , j. pongsajapan , s. h. low , and r. m. murray , `` asynchronous distributed averaging on communication networks , '' _ ieee / acm transactions on networking _ , vol .15 , no . 3 , pp .512520 , june 2007 .
|
in the past few years , the problem of distributed consensus has received a lot of attention , particularly in the framework of ad hoc sensor networks . most methods proposed in the literature address the consensus averaging problem by distributed linear iterative algorithms , with asymptotic convergence of the consensus solution . the convergence rate of such distributed algorithms typically depends on the network topology and the weights given to the edges between neighboring sensors , as described by the network matrix . in this paper , we propose to accelerate the convergence rate for given network matrices by the use of polynomial filtering algorithms . the main idea of the proposed methodology is to apply a polynomial filter on the network matrix that will shape its spectrum in order to increase the convergence rate . such an algorithm is equivalent to periodic updates in each of the sensors by aggregating a few of its previous estimates . we formulate the computation of the coefficients of the optimal polynomial as a semi - definite program that can be efficiently and globally solved for both static and dynamic network topologies . we finally provide simulation results that demonstrate the effectiveness of the proposed solutions in accelerating the convergence of distributed consensus averaging problems .
|
the realization of fully autonomous robots will require algorithms that can learn from direct experience obtained from visual input .vision systems provide a rich source of information , but , the piecewise - continuous ( pwc ) structure of the perceptual space ( e.g. video images ) implied by typical mobile robot environments is not compatible with most current , on - line reinforcement learning approaches .these environments are characterized by regions of smooth continuity separated by discontinuities that represent the boundaries of physical objects or the sudden appearance or disappearance of objects in the visual field .there are two broad approaches that are used to adapt existing algorithms to real world environments : ( 1 ) discretizing the state space with fixed or adaptive grids , and ( 2 ) using a function approximator such as a neural - network , radial basis functions ( rbfs ) , cmac , or instance - based memory .fixed discrete grids introduce artificial discontinuities , while adaptive ones scale exponentially with state space dimensionality .neural networks implement relatively smooth global functions that are not capable of approximating discontinuities , and rbfs and cmacs , like fixed grid methods , require knowledge of the appropriate local scale .instance - based methods use a _ neighborhood _ of explicitly stored experiences to generalize to new experiences .these methods are more suitable for our purposes because they implement local models that in principle can approximate pwc functions , but typically fall short because , by using a fixed neighborhood radius , they assume a uniform sampling density on the state space .a fixed radius prevents the approximator from clearly identifying discontinuities because points on both sides of the discontinuity can be averaged together , thereby blurring its location .if instead we use a fixed number of neighbors ( in effect using a variable radius ) the approximator has arbitrary resolution near important state space boundaries where it is most needed to accurately model the local dynamics . to use such an approach ,an appropriate metric is needed to determine which stored instances provide the most relevant information for deciding what to do in a given situation .apart from the pwc structure of the perceptual space , a robot learning algorithm must also cope with the fact that instantaneous sensory readings alone rarely provide sufficient information for the robot to determine where it is ( localization problem ) and what action it is best to take . some form of short - term memory is needed to integrate successive inputs and identify the underlying environment states that are otherwise only _ partially observable_. in this paper , we present an algorithm called piecewise continuous nearest sequence memory ( pc - nsm ) that extends mccallum s instance - based algorithm for discrete , partially observable state spaces , nearest sequence memory ( nsm ; ) , to the more general pwc case . like nsm , pc - nsm stores all the data it collects from the environment , but uses a continuous metric on the history that allows it to be used in real robot environments without prior discretization of the perceptual space . 
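the fixed - radius versus fixed - k point made above can be illustrated with a small one - dimensional sketch ( python / numpy ) . the target function , the non - uniform sampling pattern and the parameter values are hypothetical ; they only mimic a situation in which data are dense near an important discontinuity , where a fixed - radius average blurs the jump while a fixed - k neighborhood resolves it .

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy piecewise-continuous target with a jump at x = 0.5 (hypothetical example).
f = lambda x: np.where(x < 0.5, np.sin(4.0 * x), 2.0 + 0.5 * x)
xs = np.concatenate([rng.random(80),                         # sparse samples everywhere
                     0.5 + 0.02 * (rng.random(400) - 0.5)])  # dense samples near the jump
ys = f(xs)

def fixed_radius(xq, r=0.06):
    out = []
    for x in xq:
        nbr = ys[np.abs(xs - x) <= r]
        out.append(nbr.mean() if nbr.size else np.nan)
    return np.array(out)

def fixed_k(xq, k=5):
    return np.array([ys[np.argsort(np.abs(xs - x))[:k]].mean() for x in xq])

xq = np.linspace(0.0, 1.0, 1001)
print("mean absolute error on a uniform query grid:")
print(f"  fixed radius (r=0.06): {np.nanmean(np.abs(fixed_radius(xq) - f(xq))):.3f}")
print(f"  fixed k      (k=5)   : {np.nanmean(np.abs(fixed_k(xq) - f(xq))):.3f}")
```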
an important priority in this work is minimizing the amount of _a priori_ knowledge about the structure of the environment that is available to the learner. typically, artificial learning is conducted in simulation, and the resulting policy is then transferred to the real robot. building an accurate model of a real environment is human-resource intensive and only really achievable when simple sensors are used (unlike full-scale vision), while overly simplified models make policy transfer difficult. for this reason, we stipulate that the robot must learn directly from the real world. furthermore, since gathering data in the real world is costly, the algorithm should be capable of efficient autonomous exploration in the robot perceptual state space without knowing the amount of exploration required in different parts of the state space (as is normally the case in even the most advanced approaches to exploration in discrete, and even in metric, state spaces). the next section introduces pc-nsm, section [sec:exp] presents our experiments in robot navigation, and section [sec:discussion] discusses our results and future directions for our research.

in presenting our algorithm, we first briefly review the underlying learning mechanism, q-learning, then describe nearest sequence memory, which extends q-learning to discrete pomdps and forms the basis of our pc-nsm. the basic idea of q-learning, originally formulated for finite discrete state spaces, is to incrementally estimate the value of state-action pairs, q-values, based on the reward received from the environment and the agent's previous q-value estimates. the update rule for q-values is

q_{t+1}(s_t, a_t) = (1 - \alpha) q_t(s_t, a_t) + \alpha \left( r_t + \gamma \max_a q_t(s_{t+1}, a) \right),

where q_t(s_t, a_t) is the q-value estimate at time t of the state s_t and action a_t, \alpha is a learning rate, and \gamma \in [0, 1] is a discount factor. nsm extends q-learning to partially observable settings by storing the agent's entire history of experiences h_t = (o_t, a_t, r_t), i.e. triples of observation, action and reward; the distance between two points in the history, equation [eq:nsm-metric], counts the number of immediately preceding experiences that match exactly, so that a state is identified by the longest matching suffix of the history. here we allow real-valued observation vectors o_t \in \mathbb{R}^n to accommodate the metric we introduce in the next section. the k observation states nearest to h_t for each possible action a at time t form a neighborhood n_a^{h_t} that is used to compute the q-value for the corresponding action by:

[eq:qval]   q(h_t, a) = \frac{1}{|n_a^{h_t}|} \sum_{h_i \in n_a^{h_t}} q(h_i),

where q(h_i) is a local estimate of the q-value at the state-action pair that occurred at time i. after an action a_t has been selected according to the q-values (e.g. the action with the highest value), the q-values are updated:

[eq:update]   q(h_i) := (1 - \beta) q(h_i) + \beta \left( r_i + \gamma \max_a q(h_t, a) \right), \quad \forall \, h_i \in n_{a_t}^{h_t}.

nsm has been demonstrated in simulation, but has never been run on real robots. using history to resolve perceptual aliasing still requires considerable human programming effort to produce a reasonable discretization for real-world sensors. in the following we avoid the issue of discretization by selecting an appropriate metric in the continuous observation space. the distance measure used in nsm (equation [eq:nsm-metric]) was designed for discrete state spaces. in the continuous perceptual space where our robot must learn, this metric is inadequate, since most likely all the triples will be different from each other and the nsm measure will always equal 1. therefore, to accommodate continuous states, we replace equation [eq:nsm-metric] with the following discounted metric:

[eq:discounted-metric]   d(h_t, h_{t'}) = \sum_{\tau = 0}^{\min(t, t')} \delta^{\tau} \, \| o_{t - \tau} - o_{t' - \tau} \|_2,

where the decay factor \delta \in [0, 1) weights recent observations most heavily and o_\tau \in \mathbb{R}^n. the complete procedure is listed in algorithm [alg:tra]; the parameter \epsilon \in [0, 1] determines the greediness of the policy.
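a minimal python sketch of the two ingredients just introduced, the discounted history metric and the neighborhood-based q-value estimate, may help fix the notation. the data layout (a list of dictionaries holding observation, action, reward and a local q estimate) and the parameter values are illustrative assumptions, not part of pc-nsm as published.

```python
import numpy as np

def discounted_distance(history, t, t_prime, delta=0.8):
    """discounted metric between two points of the stored history.

    history[i] is a dict with keys 'obs' (np.ndarray), 'action', 'reward'
    and 'q' (local q estimate).  recent observations weigh most heavily."""
    d = 0.0
    for tau in range(min(t, t_prime) + 1):
        d += delta ** tau * np.linalg.norm(history[t - tau]['obs'] -
                                           history[t_prime - tau]['obs'])
    return d

def q_value(history, t, action, k=3, delta=0.8):
    """average of the local q estimates of the k history points nearest to
    time t (under the discounted metric) at which `action` was taken."""
    candidates = [i for i in range(len(history) - 1)
                  if history[i]['action'] == action and i != t]
    if not candidates:
        return 0.0, []
    candidates.sort(key=lambda i: discounted_distance(history, t, i, delta))
    neigh = candidates[:k]
    return float(np.mean([history[i]['q'] for i in neigh])), neigh
```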
the algorithm differs most importantly from nsm in using the discounted metric ( line 8) , and in the way exploratory actions in the -greedy policy are chosen ( line 12 ) .the exploratory action is the action whose neighborhood has the highest average distance from the current observation - state , i.e. the action about which there is the least information .this policy induces what has been called _balanced wandering _ .if the -values are only updated during interaction with the real environment , learning can be very slow since updates will occur at the robot s control frequency ( i.e. the rate at which the agent takes actions ) .one way to more fully exploit the information gathered from the environment is to perform updates on the stored history between normal updates .we refer to these updates as _ endogenous _ because they originate within the learning agent , unlike normal , _ exogenous _ updates which are triggered by `` real '' events outside the agent . during learning ,the agent selects random times , and updates the -value of according to equation [ eq : update ] where the maximum -value of the next state is computed using equation [ eq : qval ] ( see lines 1821 in algorithm [ alg : tra ] ) .this approach is similar to the dyna architecture in that the history acts as a kind of model , but , unlike dyna , the model does not generate new experiences , rather it re - updates those already in the history in a manner similar to experience replay .we demonstrate pc - nsm on a mobile robot task where a csem robotics smartease + robot must use video input to identify and navigate to a target object while avoiding obstacles and walls .because the camera provides only a partial view of the environment , this task requires the robot to use its history of observations to remember both where it has been , and where it last saw the target if the target moves out of view .[ sec : uair ] the experiments were conducted in the 3x4 meter walled arena shown in figure [ fig : scenario ] .the robot is equipped with two ultrasound distance sensors ( one facing forward , one backward ) , and a vision system based on the axis 2100 network camera that is mounted on top of the robot s 28 cm diameter cylindrical chassis .learning was conducted in a series of trials where the robot , obstacle(s ) , and target ( blue teapot ) were placed at random locations in the arena . at the beginning of each trial , the robot takes a sensor reading and sends , via wireless , the camera image to a _ vision computer _ , and the sonar readings to a _ learning computer_. the vision computer extracts the - coordinates of the target in the visual field by calculating the centroid of pixels of the target color ( see figure[fig : policy ] ) , and passes them on to the learning computer , along with a predicate indicating whether the target is visible . if is false , ==0 .the learning computer merges , and with the forward and backward sonar readings , and , to form the * inputs * to pc - nsm : an observation vector , where and are normalized to ] .pc - nsm then selects one of 8 * actions * : turn left or right by either or , and move forward or backward either 5 cm or 15 cm ( approximately ) .this action set was chosen to allow the algorithm to adapt to the scale of environment .the selected action is sent to the robot , the robot executes the action , and the cycle repeats . 
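continuing the sketch above (and reusing its `discounted_distance` and `q_value` helpers), the following fragment illustrates the two points in which the algorithm differs from nsm: the exploratory action is the one whose neighborhood lies farthest from the current state, and endogenous updates replay randomly chosen past transitions between real actions. all hyperparameter values are placeholders.

```python
import random
import numpy as np

def select_action(history, t, actions, epsilon=0.3, k=3, delta=0.8):
    """epsilon-greedy: exploit the best q estimate, otherwise pick the action
    about which there is the least information (balanced wandering)."""
    stats = {}
    for a in actions:
        q, neigh = q_value(history, t, a, k, delta)
        mean_dist = (np.mean([discounted_distance(history, t, i, delta)
                              for i in neigh]) if neigh else float('inf'))
        stats[a] = (q, mean_dist)
    if random.random() < epsilon:
        return max(actions, key=lambda a: stats[a][1])   # least-known action
    return max(actions, key=lambda a: stats[a][0])       # greedy action

def endogenous_update(history, actions, beta=0.3, gamma=0.9, k=3, delta=0.8):
    """replay-style update of a randomly chosen past transition."""
    if len(history) < 2:
        return
    i = random.randrange(len(history) - 1)
    best_next = max(q_value(history, i + 1, a, k, delta)[0] for a in actions)
    history[i]['q'] = ((1 - beta) * history[i]['q'] +
                       beta * (history[i]['reward'] + gamma * best_next))
```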
when the robot reaches the goal, the goal is moved to a new location, and a new trial begins. the entire interval from sensory reading to action execution is 2.5 seconds, primarily due to camera and network delays. to accommodate this relatively low control frequency, the maximum velocity of the robot is limited to 10 cm/s. during the dead time between actions, the learning computer conducts as many endogenous updates as time permits.

*learned control policy.* each row of figure [fig:policy] shows a different situation in the environment along with its corresponding learned policy. in the top row the robot is positioned directly in front of the target object. the crosses in the camera image mark detected pixels of the target color, and the circle indicates the assumed direction towards the target. the policy for this situation is shown in terms of the visual coordinates, i.e. only the camera-view coordinates of the high dimensional policy are shown. each point in the policy graph indicates, with an arrow, the direction the robot should move if the circle shown in the image is at that point in the visual field (left arrow means move left, right = right, up = forward, down = backwards, and no arrow = stand still). for instance, in this case, the robot should move forward because the circle lies in a part of the policy with an up arrow. in the bottom row the robot is almost touching the target. here the policy is shown in terms of the subspace spanned by the two ultrasound distance sensors found at the fore and aft of the robot: one axis is the distance from the robot to the nearest obstacle in front, the other the distance behind. when the robot has its back to an obstacle and the way forward is clear (upper left corner of the policy graph), it tends to go forward. when the way forward is obstructed, but there is nothing behind the robot (lower right corner), the robot tends to turn or move backward.

pc-nsm uses an \epsilon-greedy policy (algorithm [alg:tra], line 13), with \epsilon set to 0.3. this means that 30% of the time the robot selects an exploratory action. the appropriate number of nearest neighbors, k, used to select actions, depends upon the noisiness of the environment. the lower the noise, the smaller the k that can be chosen.
for the amount of noise in our sensors, we determined empirically the value of k for which learning was fastest. a common practice in toy reinforcement learning tasks such as discrete mazes is to use minimal reinforcement, so that the agent is rewarded only when it reaches the goal. while such a formulation is useful for testing algorithms in simulation, for real robots this sparse, delayed reward forestalls learning, as the agent can wander for long periods of time without reward until finally happening upon the goal by accident. often there is specific domain knowledge that can be incorporated into the reward function to provide intermediate reward that facilitates learning in robotic domains where exploration is costly. the reward function we use is the sum of two components, one obstacle-related, r_o, and the other target-related, r_t:

[eq:reward]   r = c_o r_o + c_t r_t.

r_t is largest when the robot is near to the goal and is looking directly towards it, smaller when the target is visible in the middle of the field of view, even smaller when the target is visible but not in the center, and reaches its minimum when the target is not visible at all. r_o is negative when the robot is too close to some obstacle, except when the obstacle is the target itself, visible by the robot. it is important to note that the coefficients in equation [eq:reward] are specific to the robot and not the environment. they represent a one-time calibration of pc-nsm to the robot hardware being used.

[sec:er] after taking between 1500 and 3000 actions the robot learns to avoid walls, reduce speed when approaching walls, look around for the goal, and go to the goal whenever it sees it. this is much faster than neural-network-based learners, where, for example, 4000 episodes were required (resulting in more than 100000 actions) to solve a simpler task in which the target was always within the perceptual field of the robot. nor do we need a virtual model environment and manual quantization of the state space, as required in earlier approaches. to our knowledge, our results are the fastest in terms of learning speed and require the least quantization effort compared to all other methods to date, though we were unable to compare results directly on the hardware used by these competing approaches.
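the following sketch shows one possible shape of such a two-component reward; the thresholds, weights and normalizations are invented placeholders (the calibrated coefficients used on the actual robot are not reproduced here), but the qualitative ordering of the cases follows the description above.

```python
def reward(x, y, target_visible, sonar_front, sonar_back,
           c_target=1.0, c_obstacle=1.0):
    """composite reward: target-related term plus obstacle-related penalty.
    all thresholds/weights below are illustrative placeholders."""
    # target term: best when the target is close and centred in view,
    # smaller when visible but off-centre, minimal when not visible.
    if not target_visible:
        r_t = -0.5
    else:
        centring = 1.0 - min(1.0, abs(x - 0.5) + abs(y - 0.5))  # x, y in [0, 1]
        proximity = 1.0 if sonar_front < 0.3 else 0.0            # near the goal
        r_t = 0.5 * centring + 0.5 * proximity
    # obstacle term: negative when too close to something that is not
    # the (visible) target itself.
    too_close = min(sonar_front, sonar_back) < 0.2
    r_o = -1.0 if (too_close and not target_visible) else 0.0
    return c_target * r_t + c_obstacle * r_o
```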
in the beginning of learning, corners pose serious difficulty, causing the robot to get stuck and receive negative reinforcement for being too close to a wall, and even when the robot accidentally turns towards the target it quickly loses track of it again. as learning progresses, the robot is able to recover (usually within one action) when an exploratory action causes it to turn away and lose sight of the target. the discounted metric allows the robot to use its history of real-valued observation states to remember that it had just seen the target in the recent past. figure [fig:policy] shows the learned policy for this task. since the robot state space is perception-based (not coordinates on the floor, as is the case in rl textbook examples), changing the position of the obstacles or target does not impede robot performance. figure [fig:reward] shows learning in terms of immediate and average reward for a typical sequence of trials lasting a total of approximately 70 minutes. the dashed vertical lines in the two graphs indicate the beginning of a new trial. as learning progresses the robot is able to generalize from past experience and more quickly find the goal. after the first two trials, the robot starts to accumulate reward more rapidly in the third, after which the fourth trial is completed with very little deliberation. figure [fig:scenario] illustrates two such successful trials.

*pc-nsm learning performance.* (a) the plot shows the reward the robot receives at each time-step during learning. (b) the plot shows the reward at each time-step averaged over all previous time-steps within the same trial. the dashed lines indicate the beginning of a new trial where the target is moved to a new location.

we have developed an instance-based algorithm for mobile robot learning and successfully implemented it on an actual vision-controlled robot. the use of a metric state space allows our algorithm to work under weaker requirements and to be more data-efficient compared to previous work in continuous reinforcement learning. using a metric instead of a discrete grid is a considerable relaxation of the programmer's task, since it obviates the need to guess the correct scale for all the regions of the state space in advance. the algorithm explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, works in piecewise continuous perceptual spaces, and copes with partial observability. the metric used in this paper worked well in our experiments, but a more powerful approach would be to allow the algorithm to select the appropriate metric for a given environment and task automatically.
to choose between metrics , a criterion should be defined that determines which of a set of _ a priori _ equiprobable metrics fits the given history of experimentation better .a useful criterion could be , for example , a generalization of the criteria used in the mccallum s u - tree algorithm to decide whether a state should be split .the current algorithm uses discrete actions so that there is a convenient way to group observation states .if the action space were continuous , the algorithm lacks a natural way to generalize between actions .a metric on the action space could be used within the observation - based neighborhood delimited by the current metric .the agent could then randomly sample possible actions at the query point and obtain q - values for each sampled action by computing the -nearest neighbors within the -neighborhood .future work will explore this avenue . c. anderson .-learning with hidden - unit restarting . in s.j. hanson , j. d. cowan , and c. l. giles , editors , _ advances in neural information processing systems 5 _ , pages 8188 , san mateo , ca , 1993 .morgan kaufmann .m. iida , m. sugisaka , and k. shibata .application of direct - vision - based reinforcement learning to a real mobile robot with a ccd camera . in _ proc . of arob ( intl symp . on artificial life and robotics ) 8th _ , pages 8689 , 2003 .s. kakade , m. kearns , and j. langford .exploration in metric state spaces . in _machine learning , proceedings of the twentieth international conference ( icml 2003 ) , august 21 - 24 , 2003 , washington , dc , usa_. aaai press , 2003 .r. a. mccallum .instance - based state identification for reinforcement learning . in g.tesauro , d. touretzky , and t. leen , editors , _ advances in neural information processing systems _ , volume 7 , pages 377384 . the mit press , 1995 . r. a. mccallum .learning to use selective attention and short - term memory in sequential tasks . in p.maes , m. mataric , j .- a .meyer , j. pollack , and s. w. wilson , editors , _ from animals to animats 4 : proceedings of the fourth international conference on simulation of adaptive behavior , cambridge , ma _ , pages 315324 .mit press , bradford books , 1996 .a. w. moore .the parti - game algorithm for variable resolution reinforcement learning in multidimensional state - spaces . in j. d. cowan , g. tesauro , and j. alspector , editors , _ advances in neural information processing systems _ , volume 6 , pages 711718 .morgan kaufmann publishers , inc . , 1994 .s. pareigis. adaptive choice of grid and time in reinforcement learning . in_ nips 97 : proceedings of the 1997 conference on advances in neural information processing systems 10 _ , pages 10361042 , cambridge , ma , usa , 1998 .mit press .r. schoknecht and m. riedmiller . learning to control at multiple time scales . in o.kaynak , e. alpaydin , e. oja , and l. xu , editors , _ artificial neural networks and neural information processing - icann / iconip 2003 , joint international conference icann / iconip 2003 , istanbul , turkey , june 26 - 29 , 2003 , proceedings _ ,volume 2714 of _ lecture notes in computer science_. springer , 2003 .w. d. smart and l. p. kaelbling .practical reinforcement learning in continuous spaces . in _ proc .17th international conf . on machine learning _ , pages 903910 .morgan kaufmann , san francisco , ca , 2000 .r. s. sutton .first results with dyna , an integrated architecture for learning , planning and reacting . 
in _ proceedings of the aaai spring symposium on planning in uncertain , unpredictable , or changing environments _ , 1990 .r. s. sutton .generalization in reinforcement learning : successful examples using sparse coarse coding . in d.s. touretzky , m. c. mozer , and m. e. hasselmo , editors , _ advances in neural information processing systems 8 _ , pages 10381044 .cambridge , ma : mit press , 1996 .
|
we address the problem of autonomously learning controllers for vision - capable mobile robots . we extend mccallum s ( 1995 ) nearest - sequence memory algorithm to allow for general metrics over state - action trajectories . we demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot . the algorithm is novel and unique in that it ( a ) explores the environment and learns directly on a mobile robot without using a hand - made computer model as an intermediate step , ( b ) does not require manual discretization of the sensor input space , ( c ) works in piecewise continuous perceptual spaces , and ( d ) copes with partial observability . together this allows learning from much less experience compared to previous methods . , , and reinforcement learning , mobile robots .
|
this paper presents a computational model to interpret the mozart effect k488 .this effect has been discussed extensively and seriously among both psychology and music perception societies using various experimental techniques [ rauscher et al .1993 ] . instead of experiments, this paper constructs a computation model to resolve the effect .this model starts with a tree structure for quantized rhythm beats based on the theory by loguet - higgins [ longuet - higgins , 1987 ] .the tree is then modeled as an automata and its complexity is derived by way of the l - system [ prusinkiewicz and lindenmayer 1990 , prusinkiewicz 1986 ] .the quantization tree will be briefly introduced in this section .the l - system will be introduced in the next section .the automata rewriting rule associated with the l - system will be included also in the next section .the similarity between trees by way of rewriting rules is defined in section 3 .the tree complexity is derived in section 4 .this complexity serves as a measure for the perception of musical rhythms [ desain and windsor 2000 ; yeston 1976 ] and resolves the effect .the key features of music perception and composition are rhythm , melody and harmony .rhythm is formed through the alternation of long and short notes , or through repetition of strong and weak dynamics [ cooper and meyer 1960 ] . because one metrical unit , such as a measure or a half note , can often be divided into two or three sub - units ( illustrated in figures [ figure 1 ] and [ figure 2 ] ) , this rhythm is endowed with a clear hierarchical structure [ longuet - higgins , 1987 ; lerdahl and jackendoff 1983 ] .note that longuet higgins grammar for rhythm is different from his musical parser for handeling performances , which is far more sophisticated than time - grid round - off . to represent the hierarchical characteristics of rhythm , we need to seek a system that possesses such a nature .fortunately , the plant kingdom is rich with branching structures , in which branches are derived from roots .in fact , the structure shown in figure [ figure 1 ] is that of a binary tree .l - systems ( lindenmayer systems ) [ prusinkiewicz and lindenmayer 1990 , prusinkiewicz 1986 ] are designed to model plant development , see [ mccormack 1993 ] .therefore , it is natural to construct the rhythm representation by using an l - system [ prusinkiewicz 1986 ; worth and stepney 2005 ] .a background of l - systems applied to art and music is in the website in reference .we will show how such a tree structure and its related parts can be constructed .we expect that the l - system can capture the rhythm nature .we now review the music tree by loguet - higgins .figure [ figure 1 ] shows that in each level of a tree a half note is represented by a different metrical unit . in the highest level , the metrical unit is a half note ; in the next level , the unit is a quarter note ; in the lowest level , the unit is an eighth note .the total duration in each level is equal to a half note .this structure can be extended to a measure or more , a composition as in figure [ figure 2 ] .h. c. longuet - higgins introduced this kind of tree in the 1970s .the tree has been extensively studied both experimently and theoretically .note that there are many other theories [ cooper and meyer 1960 ; yeston 1976 ; lerdahl and jackendoff 1983 ; desain and windsor 2000 ] .since we prefer a computational approach to resolve the effect , we will not use their theories . 
in order to give computers the musicianship necessary to transcribe a melody into a score , he used tree structures to represent rhythmic groupings . in his theory of music perception , the essential task in perceiving the rhythmic structure of a melody is to identify the time of occurrence of each beat .therefore , his theory can be applied to western music with regular beats . in western music ,the most common subdivisions of each beat are into two or three shorter metrical units , and these shorter metrical units can be further subdivided into two or three units .tracking from the start of a melody , when a beat or a fraction of a beat is interrupted by the onset of a note , it is divided into shorter metrical units .after this process of division , every note will find itself at the beginning of an uninterrupted metrical unit .the metrical units can be considered as the nodes of a tree in which each non - terminal node has two or three descendants .the terminal nodes for a beat are the shortest metrical units that the beat is divided into .every terminal node in the tree will eventually be attached either to a rest or to a note sounded or tied .it is natural to include and elaborate rests in the tree model as those done in [ longuet - higgins , 1987 ] .we will employ the perception factors discussed by longuet - higgins , such as tolerance , syncopation , rhythmic ambiguity , regular passages , to construct l - systems for rhythms .a rhythmic tree as described above is a tree of which each subtree is also a rhythmic tree .each tree node has two or three children ( branches or descendants ) .each node in the tree represents the total beat duration that is equal to the sum of all those of its descendants .the root node has a duration length that is equal to the length of the whole note sequence .note that when we attempt to split a note sequence into two subsequences with equal duration lengths , we usually obtain two unequal length subsequences .this is because a note connecting the two subsequences has been split into two submetrical units .the preceding portion belongs to the preceding subsequences and the later portion belongs to the later subsequence .we will mark those units to identify their subsequences .these two subsequences represent two different subtrees of the root node .we further divide each subsequence into sub - subsequences , which are also rhythmic trees , and so on .this dividing process is completed when a tree node contains a single note .this single note may possibly be the one which has been split into two portions .this is in some sense similar to an algorithm for note quantization and is a standard practice in midi rendering of music . in practicewe will quantize notes using the finest note among dotted notes ( e.g. , 1/4 , 1/8 , 1/16 , dotted notes , etc . ) without loosing most of the interesting details .we plot two such trees in figures [ figure 2 ] and [ figure 3 ] .the notes shown in figure [ figure 3 ] are part of the whole tree of the beginning of rachnaminoff s piano concerto no.3 , movement 1 .the two notes in the rectangle have been split using our rhythmic tree process . to express the hierarchical characteristics of rhythm ,we need a data structure that possesses such a hierarchical nature .fortunately , the plant kingdom is dominated by branching structures , in which branches are derived from roots .l - systems are designed to model plant development . 
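a minimal sketch of the tree-building (quantization) procedure just described, restricted for simplicity to binary splits of an already quantized note sequence, is given below; the data representation (onset, length, tie flag) is an illustrative choice, not the one used in the original implementation.

```python
def build_rhythm_tree(notes, duration):
    """recursively split note events (onset, length, tied) -- all quantized to
    a common smallest unit -- into a binary rhythm tree.  a note spanning the
    midpoint is split and its continuation is marked as tied, as in the text."""
    if len(notes) <= 1:
        return {'duration': duration, 'notes': notes}
    half = duration / 2.0
    left, right = [], []
    for onset, length, tied in notes:
        if onset + length <= half:
            left.append((onset, length, tied))
        elif onset >= half:
            right.append((onset - half, length, tied))
        else:                                  # note crosses the midpoint
            left.append((onset, half - onset, tied))
            right.append((0.0, onset + length - half, True))
    return {'duration': duration,
            'children': [build_rhythm_tree(left, half),
                         build_rhythm_tree(right, half)]}

# one 4/4 bar (in quarter-note units): quarter, quarter, half note
tree = build_rhythm_tree([(0, 1, False), (1, 1, False), (2, 2, False)], 4)
```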
given this hierarchical, plant-like structure, it is practicable to construct a rhythmic representation by using l-systems. the lindenmayer system, or l-system for short, was introduced by the biologist aristid lindenmayer in 1968 [lindenmayer 1968]. it was conceived as a mathematical theory of plant development. the central concept of the l-system is rewriting. in general, rewriting is a technique used to define complex objects by successively replacing parts of a simple initial object, using a set of rewriting rules or productions. the l-system is a distinct type of string-rewriting mechanism. the essential difference between chomsky grammars and l-systems lies in the technique used to apply productions: in chomsky grammars, productions are applied sequentially, whereas in l-systems they are applied in parallel, simultaneously replacing all the letters in a given word [mccormack 1993]. this difference reflects the biological motivation of l-systems, since productions are intended to capture cell divisions in multi-cellular organisms, where many divisions may occur at the same time. moreover, there are languages which can be generated by context-free l-systems but not by context-free chomsky grammars.

here we introduce the turtle graphical interpretation of l-systems. suppose that there is a turtle crawling on a plane. the state of the turtle is defined as a triplet (x, y, \alpha), where the cartesian coordinates (x, y) represent the turtle's position, and the angle \alpha, called the heading, is interpreted as the direction in which the turtle is facing. given the step size d and the angle increment \delta, the turtle responds to commands represented by the symbols of the bracketed string: f (move forward a step of length d, drawing a line), + and - (turn left or right by \delta), and the brackets [ and ] (push and pop the current turtle state, so that branches can be drawn).

table 2: classifying rules based on the similarity of rewriting rules. in table 1, rules of the form [+f] are assigned to class 2; there are eight such rules before classification, so they are written as a single class entry with multiplicity eight. similar rules that are isomorphic on depth are grouped together and assigned to class 1, and class 3 and class 4 are obtained by following a similar classification procedure. note that this section also presents a new way to convert a context-sensitive grammar to a context-free one. after we list the rewriting rules for a rhythmic tree and classify all those rules, we attempt to explore the redundancy in the tree (the hidden structure in the beats) that will be the basis for building the cognitive map [barlow 1989]. to accomplish this, we compute the complexity of the tree which those classified rules represent.
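for concreteness, a small python sketch of the two mechanisms described above, parallel rewriting and the turtle interpretation of a bracketed string, is given below; the example production is the standard plant-like rule often used to illustrate bracketed l-systems, and is not one of the rhythm rules of this paper.

```python
import math

def rewrite(axiom, productions, steps):
    """apply l-system productions in parallel: every symbol of the current
    word is replaced simultaneously at each step."""
    word = axiom
    for _ in range(steps):
        word = ''.join(productions.get(ch, ch) for ch in word)
    return word

def turtle_interpret(word, step=1.0, angle=25.0):
    """standard turtle reading of a bracketed string: F = draw forward,
    + / - = turn, [ / ] = push / pop the turtle state."""
    x, y, heading = 0.0, 0.0, 90.0
    stack, segments = [], []
    for ch in word:
        if ch == 'F':
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == '+':
            heading += angle
        elif ch == '-':
            heading -= angle
        elif ch == '[':
            stack.append((x, y, heading))
        elif ch == ']':
            x, y, heading = stack.pop()
    return segments

word = rewrite('F', {'F': 'F[+F]F[-F]F'}, steps=3)   # classic plant-like rule
print(len(turtle_interpret(word)))
```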
we know that a classified rewriting rule set is also a context-free grammar, so we can define the complexity of a rewriting rule set as follows.

*define: topological entropy of a context-free grammar.* the topological entropy of a cfg (context-free grammar) can be evaluated by means of the following three-step procedure [kuich 1970; badii and politi 1997]: (1) for each variable V_i, with productions (in greibach normal form) V_i \to t_{i1} U_{i1}, \ldots, V_i \to t_{ik_i} U_{ik_i}, where the t_{ij} are terminals and the U_{ij} are strings of non-terminals, the formal algebraic expression for the variable is V_i = \sum_j t_{ij} U_{ij}. (2) by replacing every terminal with an auxiliary variable z, one obtains the generating function V_i(z) = \sum_n N_i(n) z^n, where N_i(n) is the number of words of length n descending from V_i. (3) let N(n) be the largest of the N_i(n) over all i; the summation series converges when z is smaller than its radius of convergence R, and the topological entropy is given in terms of R as K = \log(1/R).

however, we have found that this definition is slightly inconvenient for our binary tree case. thus, we rewrite it as follows.

*define: generating function of a context-free grammar.* assume that there are m classes of rules and that each class i contains k_i rules, each rule replacing the class symbol by a string of class symbols and terminals. the generating function f_i(z) of class i is obtained by summing, over the k_i rules of the class, a factor z multiplied by the generating functions of the classes appearing on the right-hand side of the rule; if a rule does not contain any non-terminal, its contribution is simply z.

*define: complexity of rhythmic tree.* after formulating the generating function, we intend to find the largest value of z at which f_0(z) converges, where f_0 denotes the generating function of the rule class of the root node of the rhythmic tree. this largest value is the radius of convergence R of f_0, and we define the complexity of the rhythmic tree as \log(1/R).

we use the simple example in tables 1 and 2 (or figure [figure 3]) to show the computation procedure of the complexity; the class parameters take the values listed in table 2.
substituting these values into the definition, the generating functions of the purely terminal classes are obtained directly, and the formulas for the remaining classes follow from them successively. rearranging the resulting relation for the root generating function f_0(z) yields a quadratic equation in f_0(z); solving it gives a closed-form expression from which the radius of convergence R, and hence the complexity \log(1/R), can be obtained.

in general, in order to compute the complexity of a rhythmic tree, we have to determine R, the radius of convergence of the rhythmic tree's rewriting rule set. we devise a strategy to judge whether the generating function is convergent or divergent for a given value of z, and construct an iteration technique to compute its value. to facilitate the computation, we rewrite the generating function in fixed-point form and use a superscript on each variable to represent the iteration count: starting from zero, at each iteration a new value f_i^{(n+1)}(z) is calculated from the values f_j^{(n)}(z) of the previous iteration, for n = 1, 2, \ldots up to some positive integer n_max. when f_i^{(n+1)}(z) is equal to f_i^{(n)}(z) for all rules, the values cannot be improved any more and we have reached convergence; in that case the number z under test is not the radius of convergence of the rule set but lies below it. in our simulations we fix n_max, which means that if the iteration has not diverged after n_max steps, then we judge f(z) to be convergent at z. once we can judge whether f is convergent or divergent at a given number, we can test real numbers between 0 and 1 to find the number that lies right on the border of the convergent region and use this number as the radius of convergence. more efficient search techniques, such as binary searching between 0 and 1, can be applied; this is exactly the technique we use in our algorithm.

now we present a practical example. we use beethoven's piano sonatas nos. 1 to 32 and mozart's piano sonatas nos. 1 to 19 as examples and show their complexity. we list the complexity of each piano sonata by mozart in figures [figure12]-[figure13]. in these figures, we use two different isomorphic depths, 1 and 3, to compute the complexity.
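the iterate-and-bisect procedure described above can be sketched in a few lines of python. the toy rule set below is the one-class binary-tree grammar V_0 \to a V_0 V_0 \,|\, a, chosen only because its radius of convergence (1/2) and complexity (\log 2) are known exactly; it is not a rule set extracted from an actual score, and the iteration cap and divergence threshold are arbitrary.

```python
import math

# toy classified rule set: class 0 (root) -> class0 class0 | terminal.
# each right-hand side contributes z times the product of the generating
# functions of its non-terminals.
rules = {0: [[0, 0], []]}          # exact answer: R = 1/2, complexity = log 2

def converges(z, rules, max_iter=500, big=1e6):
    f = {c: 0.0 for c in rules}
    for _ in range(max_iter):
        new = {}
        for c, rhss in rules.items():
            total = 0.0
            for rhs in rhss:
                term = z
                for nt in rhs:
                    term *= f[nt]
                total += term
            new[c] = total
        if any(v > big for v in new.values()):
            return False                        # divergence detected
        if all(abs(new[c] - f[c]) < 1e-12 for c in rules):
            return True                         # fixed point reached
        f = new
    return True           # no divergence detected within max_iter iterations

# binary search for the radius of convergence R in (0, 1), then C = log(1/R).
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if converges(mid, rules) else (lo, mid)
R = lo
print("radius of convergence ~", R, "  complexity ~", math.log(1.0 / R))
```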
from the figures we can see that the complexity is high for both composers .when we use higher depth isomorphism to classify rules , the complexity will decrease .this is because when we use higher depth isomorphism , redundancy between rules will decrease so the complexity will also decrease . eventually the complexity will decrease to zero for the highest depth isomorphism .conversely , lower depth isomorphism brings more rules in a class ; redundancy between rules will increase and the number of classes will decrease . if the depth of isomorphism is too low , the rules set will become too simple , thus the complexity will also become lower .we may compute the complexity for different depths to see the differences .beethoven and mozart s work have similar complexity , but beethoven s is slightly higher than mozart s .both their complexity isomorphic on level 2 are the highest .when we use the high level isomorphism to classify rules , the complexity of rules will decrease .reversely , the low level isomorphism collects many rules in a class ; redundancy between rules will increase and the number of classes will decrease . if the level of isomorphism is too low , the rules set will become too simple , thus the complexity will also become lower .we can try different level to see its complexity , and pick up the level with highest complexity .we tested a well - known music work studied by rauscher et al . [ 1993 ] .almost all the previous studies on the mozart effect have focused on a single piece of music , the sonata for two pianos in d major ( k448 ) .we have computed its complexity and found that it is generally higher than that of other sonatas by mozart , see figures [ figure12]-[figure13 ] .we have constructed the complexity for the l - system .this complexity resembles , in some sense , the redundancy [ pollack 1990 ; large et al .1995 ; chalmers 1990 ]. this complexity can facilitate many other studies such as bio - morphology , dna analysis , gene analysis and tree similarity .we closely followed the ideas of barlow [ barlow 1989 ] and feldman [ feldman 2000 ] to design this model . in his work , barlow wrote that : words are to the elements of our sensations like logical functions to the variables that compose them .we can not of course suppose that an animal can form an association with any arbitrary logical function of its sensory messages , but they have capacities that tend in that direction , and it is these capacities that the kind of representative schemes considered here might be able to mimic .human perception sometimes bases on external world s information redundancy .if we can extract any rules or patterns from a certain object as part of our cognition map for that object , it will be easy to memorize or comprehend it . in our model , rhythms resemble the words ; trees resemble the logical functions ; classes resembles rules and patterns ; complexity resembles redundancy . man is not inherently musical , the distinguished scientist newton claimed ; natural singing is the sole property of birds .in contrast to our feathered friends , humans perform and understand only what they taught .. this is why humans listen to music by training .one needs such redundancy to comprehend the music words . but how can we pinpoint the rules or patterns in a music work , or even in a simple rhythm that may be formless ? 
as an attempt , we have defined homomorphism and isomorphism so as to characterize the similarity between sections of different rhythmic trees .but there still exist questions about the psychological implications of these characteristics , such as the depth of isomorphism .the proposed model can enable us to measure the psychological complexity [ feldman 2000 ] of rhythms . in our studies, we have found that different depths of isomorphism produce varying degree of complexity .if a rhythm is very simple , its complexity will be 0 .the same situation also occurred when we used isomorphism with a very high depth value to compute the complexity of mozart s and beethoven s piano sonatas .in general , the results confirm our intuition about these musical rhythms .we define the similarity between tree structures in section 3 .finding similarity between rules and classifying them in different subsets are in some sense similar to fractal compression , see the website in reference .this could be an alternate way to configure rhythmic complexity .we are still working on this .we are also working on an extension of the model to incorporate the rhythmic complexity for polyphonic music , superposition of different rhythms , tempo variation , grace notes , supra and irregular subdivisions of the beat ( e.g. triplets , quintuplets , ... ) .large , c. palmer and j.b .pollack , reduced memory representations for music , cognitive science , vol .19 , 1995 , pp .reprinted in musical networks : parallel distributed perception and performance , edited by niall griffith and peter m. todd . the mit press , 1999 , cambridge , massachusetts .lee and c .- y .liou , structural analysis of musical rhythm : a comprehensive review , technical report nsc 92 - 2213-e-002 - 002 , department of computer science and information engineering , national taiwan university , 2003 .j. mccormack , interactive evolution of l - system grammars for computer graphics modelling , in book complex systems : from biology to computation , edited by d. g. green and t. bossomaier , 1993 , pp.118 - 130 .p. worth and s. stepney , growing music : musical interpretations of l - systems , evomusart workshop , eurogp , lausanne , switzerland , lncs 3449 , pp .545 - 550 , springer , 2005 .
|
this paper constructs a tree structure for the music rhythm using the l - system . it models the structure as an automata and derives its complexity . it also solves the complexity for the l - system . this complexity can resolve the similarity between trees . this complexity serves as a measure of psychological complexity for rhythms . it resolves the music complexity of various compositions including the mozart effect k488 . keyword : music perception , psychological complexity , rhythm , l - system , automata , temporal associative memory , inverse problem , rewriting rule , bracketed string , tree similarity the number of text pages of the manuscript 21 the number of figures 13 the number of tables 2 : department of computer science and information engineering , national taiwan university , taipei , taiwan , 106 , r.o.c . , tel : 8862 23625336 ext 515 , fax : 8862 23628167 , email : cyliou.ntu.edu.tw
|
this third and last part of the work on generalized poisson - kac ( gpk ) processes and their physical applications extends the analysis developed in parts i and ii , developing the generalization of gpk theory to a broad spectrum of stochastic phenomenologies . with respect to the theory developed in , two lines of attackcharacterize this extension : ( i ) the inclusion of nonlinearities , and ( ii ) the extension to a continuum of states .nonlinearities can be treated in two different ways .the first class of nonlinear models assumes that the state ( position ) variable can influence the basic parameters characterizing stochastic gpk perturbations . in the case of gpk perturbations, this reflects into the functional dependence of , and on . in the case of a position dependent system of stochastic velocities , i.e , , gpk models correspond to nonlinear langevin equations , since the latter provide the kac limit ( in the stratonovich interpretation of the stochastic integral ) for this class of systems .the functional dependence of the transition rates , or of the entries of the transition probability matrix on , provides new phenomena , as it emerges from the analysis of their kac limits .the second way to include nonlinearities , analogous to the mckean approach to langevin equations leads to gpk microdynamic equations which depend on the statistical characterization of the process itself ( in the present case , the system of partial probability density functions ) .this leads to the concept of nonlinear fokker - planck - kac equation ( this diction stems from the langevin counterpart ) , the dynamic properties of which can be extremely rich .the extension from a discrete number of states to a continuum of stochastic states is fairly straightforward within the formalism developed in part i ( see also the discussion in part i on the multidichotomic approach ) .moreover , the coupling of nonlinear effects with a continuum of stochastic states permits to derive the classical nonlinear boltzmann equation of the kinetic theory of gases within the gpk formalism . this result deserves particular attention as it shows , unambiguously , that the boltzmann equation admits a fully stochastic explanation . in some sense , this result completes the original kac s program in kinetic theory , originated from the article aimed at providing an extended markov model for interpreting the celebrated boltzmann equation of kinetic theory . for a discussion on extended markov modelssee . finally , the article outlines the bridge between the stochastic description of particle microdynamics based on gpk equations , and transport theory of continuous media .this connection is developed with the aid of some classical problems . 
in developing a transport theory from gpk microdynamicsthe role of the primitive statistical formulation of gpk processes , based on the system of partial probability densities , clearly emerges ( for a discussion see also section 2 in part i ) , and it is mapped into a corresponding system of partial concentrations / velocity fields .this part of the article is of primary interest in extended thermodynamic theories of irreversible processes , as it provides a novel way to develop these theories enforcing the assumption of finite propagation velocity for thermodynamic processes , and overcoming the intrinsic limitations of models based on the higher - dimensional cattaneo equation ( see part i for details ) .the article is organized as follows .section [ sec_2 ] develops the extensions of gpk models ( nonlinearity , continuum of stochastic states ) , presenting for each class of models a physical example .section [ sec_3 ] derives the connection ( equivalence ) between a nonlinear gpk process admitting a continuum of stochastic states and the boltzmann equation , discussing some implications of this result .section [ sec_4 ] addresses the connection between gpk microdynamics and the associated transport formalism in continua by considering several problems ranging from dynamo theory to mass and momentum balances , including a brief description of chemical reactions .the theory of gpk processes can be generalized in several different directions that provide , from one hand , a valuable system of stochastic modeling tools of increasing complexity and , from the other hand , the possibility of interpreting a broader physical phenomenology . in the remainder of this sectionwe introduce the various generalizations by considering first one - dimensional poisson - kac processes , and subsequently extending the theory to gpk processes . in order to define nonlineargpk processes it is convenient to introduce the concept of poisson fields .a poisson field , is a poisson process over the real line such that its transition rate depends on and eventually on time .if the poisson field is said to be stationary , while if depends explicitly on time is referred to as a non - stationary field .let , and let be a positive real - valued function .a nonlinear gpk process is defined via the stochastic differential equation where is a deterministic bias .the presence of a position dependent stochastic velocity , and the dependence on of the transition rate defining the poisson field , makes this model conceptually similar to the nonlinear langevin equations .we have assume that does not depend explicitly on time .this condition can be easily removed , but the generalization to time - dependent involves more lengthy calculations of the kac limit , the full development of which is left to the reader . for the process associated with eq .( [ eq8_1_1 ] ) , the partial probability density functions fully characterize its statistical properties .these quantities satisfy the balance equations \label{eq8_1_2}\end{aligned}\ ] ] and the `` diffusive '' probability flux is given by ] . in terms of normalized quantity constitutive equation for the diffusive flux becomes let , where . in the limit , constitutive equation for becomes that , substituted into the balance equation for , provides \label{eq8_1_5}\ ] ] which represents the kac limit for the nonlinear poisson - kac process considered . 
in terms of the original quantities and , the kac limit can be expressed equivalently as + \frac{1}{2 } \partial_x \left [ \frac{b(x ) \ , \partial_x b(x)}{\lambda(x , t ) } \ , p(x , t ) \right ] + \frac{1}{2 } \partial_x \left [ \frac{b(x ) \ , b(x)}{\lambda(x , t ) } \ , \partial_x p(x , t ) \right ] \label{eq8_1_6}\end{aligned}\ ] ] eq .( [ eq8_1_6 ] ) corresponds to an advection - diffusion equation characterized by an effective velocity and by an effective diffusivity the above - derived kac limit should be compared with the statistical description of a classical langevin equation driven by wiener fluctuations where are the increments of a one - dimensional wiener process in the time interval , to be interpreted `` a la stratonovich '' . in eq .( [ eq8_1_9 ] ) , `` '' indicates the stratonovich recipe for the stochastic integrals . the fokker - planck equation associated with eq .( [ eq8_1_9 ] ) is given by + \frac{1}{2 } \partial_x \left [ p(x , t ) \ , \partial_x d_s(x , t ) \right ] + \partial_x \left [ d_s(x , t ) \, \partial_x p(x , t ) \right ] \label{eq8_1_10}\end{aligned}\ ] ] the reason for the choice of the stratonovich rather than the ito calculus follows from the wong - zakai theorem : poisson - kac processes are stochastic dynamical systems excited by a.e .differentiable smooth perturbations , converging in the kac limit to ordinary brownian motion .according to the wong - zakai result , that in the present case corresponds to the kac limit , these processes should converge in the kac limit to the stratonovich formulation of the langevin equation ( [ eq8_1_9 ] ) , where and should coincide with and , respectively .below , we discuss this convergence that , in point of fact , is slightly more subtle than expected .two cases should be considered .case ( a ) : does not depend on .it follows from the comparison of eqs .( [ eq8_1_6 ] ) and ( [ eq8_1_10 ] ) that the kac limit of eq .( [ eq8_1_1 ] ) coincides with eq .( [ eq8_1_9 ] ) provided that and this can be viewed as a corollary of the wong - zakai theorem . in this case , the poisson - kac process is a _stochastic mollification _ of the langevin - stratonovich equation ( [ eq8_1_9 ] ) .case ( b ) : depends explicitly on .also in this case but the equivalence between the convective contributions provides the relation a physical justification of this phenomenon is addressed at the end of this paragraph .the generalization to nonlinear gpk processes in is straightforward .define a -state finite poisson field a stochastic process parametrized with respect to , attaining different possible states , such that the transition structure between the states is described by the time - continuous markov chain , where is the probability of the occurrence of at position and time . 
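before completing the n-state construction, the one-dimensional nonlinear model and its kac limit discussed above can be illustrated with a direct monte carlo simulation. the particular choices of v(x), b(x) and \lambda(x) below are arbitrary illustrations, and the closing comment only recalls the structure of the limit (a diffusivity of order b^2/(2\lambda) plus a noise-induced drift correction involving b \, \partial_x b / \lambda); the sketch is not meant to reproduce a specific result of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative coefficient choices (not taken from the paper)
v   = lambda x: -0.5 * x                    # deterministic bias
b   = lambda x: 1.0 + 0.3 * np.sin(x)       # stochastic velocity amplitude
lam = lambda x: 5.0 + np.cos(x) ** 2        # position-dependent poisson rate

def simulate(n_particles=20000, t_end=2.0, dt=1e-3):
    x = np.zeros(n_particles)
    s = rng.integers(0, 2, n_particles)     # parity state of the poisson field
    for _ in range(int(t_end / dt)):
        x += (v(x) + np.where(s == 0, 1.0, -1.0) * b(x)) * dt
        flip = rng.random(n_particles) < lam(x) * dt   # switching events
        s = np.where(flip, 1 - s, s)
    return x

x = simulate()
print("mean =", x.mean(), " var =", x.var())
# in the kac limit the density evolves as an advection-diffusion equation
# with effective diffusivity of order b(x)^2 / (2 lam(x)) and a drift
# correction involving b(x) * b'(x) / lam(x), as discussed in the text.
```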
in equation ( [ eq8_1_14 ] ) , and are the entries of a symmetric transition matrix , , for any , .moreover , let us assume that represents an irreducible left - stochastic matrix function for any and .given vector valued functions , satisfying the zero - bias condition identically for , a nonlinear gpk process is described by the stochastic differential equation its statistical characterization involves partial probability density functions , satisfying the balance equations using the representation in terms of and , it is straightforward to construct a stochastic simulator of eq .( [ eq8_1_17 ] ) analogous to that defined in section 4 of part i for linear gpk processes .it is worth observing that there is a substantial difference between nonlinear poisson - kac / gpk process of the form ( [ eq8_1_1 ] ) or ( [ eq8_1_16 ] ) and the nonlinear langevin equations driven by wiener perturbations , such as eq .( [ eq8_1_9 ] ) . in the latter case ,the -dynamics does not influences the statistical properties of the stochastic wiener forcing and implies solely a modulation of the intensity of the stochastic perturbation , that depends on , via the factor as in eq .( [ eq8_1_9 ] ) .conversely , in the case of poisson - kac / gpk processes , there is a two - way coupling between the -dynamics of the poissonian perturbation , as the evolution of influences the statistics of the poissonian field , whenever the transition rate or the transition rate vector depend explicitly on and , respectively .this observation , physically explain the apparently `` anomalous '' correspondence relation ( [ eq8_1_13 ] ) , as this model does not fall within the range of application of the wong - zakai theorem .a further generalization of gpk theory is the extension to a continuous number of states .such a continuous extension is not suitable within the framework of multi - dichotomic processes discussed in section 3 of part i , and this constitutes the main shortcoming of this class of models in the applications to statistical physical problems .next , consider the one - dimensional case .let be a time - continuous markov process attaining a continuum of states belonging to a domain .its statistical description involves the transition rate kernel , which is a positive symmetric kernel given , it is possible to introduce the transition rates and the transition probability kernel the transition probability kernel possesses the following properties : ( i ) normalization , i.e. , i.e. , it is a left - stochastic kernel , and ( ii ) it is assumed that is irreducible , meaning that solely the constant function is the left eigenfunction of , associated with the frobenius eigenvalue . in other terms , the multiplicity of the frobenius eigenvalue is . 
indicating with the first-order partial moments of the partial probability densities, these fulfill the balance equation \label{eq8_4_19} . taking the difference between the evolution equations of the first-order partial moments provides \frac{d}{dt} \left[ m_+^{(1)}(t) - m_-^{(1)}(t) \right] = - \left[ m_+^{(1)}(t) - m_-^{(1)}(t) \right] + b - 2 \, \lambda \, \left[ m_+^{(1)}(t) - m_-^{(1)}(t) \right] \label{eq8_4_20} asymptotically, the difference between the partial first-order moments converges towards the value , which implies that , where the variance of the overall probability density wave attains asymptotically a constant value equal to . the nfpk considered above describes the evolution of a nonlinear soliton traveling with constant speed and possessing a constant variance (as can be observed from the profiles in figure [fig19] panel (b)). figure [fig20] depicts the comparison of numerical simulation results for and the asymptotic expression ([eq8_4_22]). [figure [fig20] caption: vs for the nonlinear fpk model discussed in the main text at , for different values of ; line (a) refers to , (b) to , (c) to ; the horizontal lines correspond to the predictions of eq. ([eq8_4_22]).] the shape of the propagating solitons depends significantly on the transition rate . for high values of , a nearly gaussian soliton propagates, as expected from the kac limit, see figure [fig19] panel (b). however, for small values of , profiles completely different from the gaussian one can occur. this phenomenon is depicted in figure [fig21] panels (a) and (b), corresponding to and , respectively. the resulting probability density profiles of the propagating solitons, depicted in this figure, are rescaled to unit zero-th order moment, i.e. , and zero mean. [figure [fig21] caption: panel (a) refers to , panel (b) to ; lines (a) to (c) refer to , respectively.] as can be observed, a nearly rectangular-shaped soliton occurs for (panel (a)), while bimodal profiles characterize its shape for lower values of (panel (b)). gathering together the generalizations of gpk process introduced in the previous section (nonlinearity and continuity of stochastic states), we arrive at a remarkable result. to get it, we need two ingredients: (i) a continuous gpk process parametrized with respect to the stochastic velocity vector , and described by means of the transition kernel ; letting be the associated partial probability densities parametrized with respect to , and assuming for simplicity , their evolution equation is given by ; (ii) the further assumption that this continuous gpk process is a nonlinear fpk process, in which the transition kernel depends functionally on the partial density waves . in this case the transition rate function is \lambda({\bf b};[p]) = \int_{{\mathcal d}} \int_{{\mathcal d}} \int_{{\mathcal d}} h({\bf b}^\prime,{\bf b}^{\prime \prime \prime} | {\bf b}, {\bf b}^{\prime \prime}) \, p({\bf x},t;{\bf b}^{\prime \prime}) \, d{\bf b}^\prime \, d{\bf b}^{\prime \prime} \, d{\bf b}^{\prime \prime \prime} \label{eq_8b_4} and the transition probability kernel is \frac{1}{\lambda({\bf b}^\prime;[p])} \, \int_{{\mathcal d}} \int_{{\mathcal d}} h({\bf b}, {\bf b}^{\prime \prime \prime} |{\bf b}^\prime,{\bf b}^{\prime \prime}) \, p({\bf x},t,{\bf b}^{\prime \prime}) \, d{\bf b}^{\prime \prime} \, d{\bf b}^{\prime \prime \prime} \label{eq_8b_5} which is a left-stochastic kernel. these quantities can be useful in the mathematical analysis of the model to assess the relaxation properties and the _propagation of chaos_ (to quote an expression by m. kac) in the system.
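to make the conservation requirements underlying the kernel explicit: for elastic binary collisions between equal-mass particles, the kernel h({\bf b}^\prime,{\bf b}^{\prime\prime\prime}|{\bf b},{\bf b}^{\prime\prime}) can be taken to vanish unless the pre- and post-collisional velocity pairs satisfy momentum and kinetic-energy conservation,
{\bf b} + {\bf b}^{\prime\prime} = {\bf b}^\prime + {\bf b}^{\prime\prime\prime} , \qquad |{\bf b}|^2 + |{\bf b}^{\prime\prime}|^2 = |{\bf b}^\prime|^2 + |{\bf b}^{\prime\prime\prime}|^2 ;
this is one possible explicit rendering of the elastic-collision constraint invoked below (the pairing of pre- and post-collisional arguments is our reading of the kernel notation).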
* the transition rate ([eq_8b_4]) and the transition probability kernel ([eq_8b_5]) define completely a stochastic simulator of the nonlinear cgpk process associated with the boltzmann equation, which can be viewed as a stochastic molecular simulator. * the derivation of the nonlinear collisional boltzmann equation from a simple stochastic model justifies, on stochastic grounds, the intrinsic irreversibility associated with the boltzmann equation. in this framework, the irreversible behavior is not related to the underlying, possibly chaotic, conservative hamiltonian dynamics associated with the collision process. this claim is strongly related to the classical zermelo objection to the purely mechanical interpretation of the boltzmann equation, based on the application of the poincaré recurrence theorem, and the cgpk theory of the kinetic equation, outlined above, gives a simple answer to zermelo's criticism. in eqs. ([eq_8b_1])-([eq_8b_2]), hamiltonian conservative mechanics enters solely to specify the functional form of the kernel , so as to be consistent with the conservation requirements (as regards momentum and kinetic energy) dictated by the assumption of elastic collisions, eq. ([eq_8b_agg2]). the assumption of a rarefied particle-gas system enters in the linearity of with respect to the partial density waves . as far as the theory of cgpk processes is concerned, the functional form of the boltzmann equation resides exclusively in the markovian character of the recombination mechanism amongst the partial probability waves. * the above derivation opens up relevant issues in atomic physics associated with stochasticity and its physical meaning. the problem can be stated as follows: given the above derivation of the boltzmann equation starting from a purely stochastic model, is the stochastic nature of a particle-gas system solely a mathematical result following from cgpk theory, or is it rather a manifestation of more fundamental processes underlying relaxation and irreversibility in atomic and molecular systems? we believe that the second alternative would prove to be correct, and further investigation would hopefully lead to new results on an old classical subject, in order to explain the dissipative behavior of an ensemble of identical gas molecules mutually interacting via binary collisions. * as for the elastic boltzmann equation, the kinetic equations for granular materials, for which the collisions are no longer elastic and an inelastic restitution coefficient is introduced, can be treated on stochastic grounds using nonlinear cgpk processes. similarly, it would be possible to identify stochastic models for other kinetic equations (such as those of plasma physics) using cgpk processes. in this respect, solely the collisionless vlasov equation represents an exception in this stochastic paradigm. this is not surprising, as the vlasov equation is an isentropic model that cannot be treated within a stochastic theory in which the entropy production occurs as a consequence of the recombination amongst the partial probability waves. the equations for the partial probability waves represent the basic archetype of transport equations derived from gpk processes. consistently with the principle of primitive variables, these equations are parametrized with respect to the state of the stochastic perturbation, expressed by the partial probability densities . this represents a major difference with respect to transport equations derived starting from langevin microdynamics driven by wiener processes.
in the latter case ,due to the independence of the increments , the state of the stochastic perturbation is completely renormalized out of the associated fokker - planck equation . in this section ,we analyze in some detail the functional structure of the transport equations deriving from gpk models . to begin with ,we develop the transport equation from gpk dynamics associated with the evolution of the mean magnetic field in a solenoidal flow field ( dynamo problem ) .subsequently , transport equations for matter and momentum density in a fluid continuum are derived starting from a gpk ornstein - uhlenbeck process .finally , the theory is extended to chemical reaction kinetics .let be a solenoidal time - dependent velocity field in , and consider a gpk process admitting an isotropic kac limit , characterized by the effective diffusivity .let be the magnetic field , and consider the transport of due to the advective action of the velocity field in the case stochastic fluctuations are superimposed .this is the essence of the dynamo problem in the presence of diffusion admitting interesting astrophysical applications a stochastic microdynamic equation for this process , written in the form of a gpk process , can be expressed as where accounts for the stretching of the magnetic field induced by the velocity field , the -entry of which is given by equation ( [ eq10_1_1 ] ) represents the stochastic formulation of the magnetic dynamo problem under the assumption that the magnetic field does not influence the evolution of the flow field , corresponding to the one - way coupling approximation .the statistical description of this gpk process involves the partial probability waves , that are solution of the system of hyperbolic equations where and are the nabla - operators with respect to the - and -variables , respectively .the basic macroscopic observable in the dynamo problem is the mean magnetic field defined as and depending on the position and time .the parametrization with respect to the stochastic state suggests to introduce the auxiliary quantities obviously , from eq .( [ eq10_1_3 ] ) , after some algebra , the evolution equation for can be derived .componentwise , it reads where is the -entry of , . in the kac limit, this system of equations converges towards the solution of the parabolic equation involving the overall magnetic field defined by eq .( [ eq10_1_6 ] ) , which is the classical evolution equation for the dynamo problem in the presence of diffusion .the system of equations ( [ eq10_1_7 ] ) represents the exact transport equation for the first - order partial moments within the framework of gpk theory .the derivation of these equations does not involve any constitutive assumption , and this is the reason why we have considered this problem as the first example of undulatory transport theory .maxwell equations dictates that the magnetic field should be solenoidal . from eq .( [ eq10_1_7 ] ) it follows that where the solenoidal nature of the velocity field has been enforced .equation ( [ eq10_1_9 ] ) indicates that , if all the partial averages of , at time are solenoidal , i.e. 
, , then the transport model ( [ eq10_1_7 ] ) preserves this property , namely , and _ a fortiori _, for any time .the classical theory of the dynamo problem in the presence of diffusion provides that , for two - dimensional spatial problems ( ) , the -norm of the magnetic field , solution of the parabolic equation ( [ eq10_1_8 ] ) , decays exponentially in time for generic smooth time - periodic velocity fields .conversely , a positive dynamo action , i.e. , an exponential divergence with time of the -norm of the magnetic field may occur starting from .it is easy to check that for , in the case the average magnetic field possesses zero mean , this property holds also for the gpk dynamo equation ( [ eq10_1_9 ] ) .this result , stems straightforwardly from the observation that the partial fields can be expressed in terms of a family of ( scalar ) vector potentials which , in turn , are solutions of the associated advection - diffusion equations for a scalar field as shown in part ii , gpk advection - diffusion of a scalar field in the standard - map flow , the -norms of decay exponentially to zero as a function of time . therefore , the effect of gpk perturbations is essentially to modify the decay exponent ( in two - dimensional spatial problems ) with respect to the kac limit ( as in the case of the chaotic advection - diffusion problem for a scalar field addressed in part ii ) , but not the quality of stability .the three dimensional case is fully open for investigation , and there is the possibility that poissonian perturbations could modify the stability properties of the diffusive dynamo problem , determining the occurrence of a positive dynamo action , also in those cases where the corresponding parabolic model ( [ eq10_1_8 ] ) possesses all the eigenvalues with negative real part .the comparison with the analysis developed by arnold and korkina and by galloway and proctor for the abc flow would be an interesting benchmark of this hypothesis . in this paragraph we consider the structure of mass and momentum transport equation in a moving continuum as it emerges from gpk theory .a moving continuum is nothing but an extremely useful macroscopic approximation of the granularity of matter at microscale , resulting from the averaging of the local stochastic motion . according with the basic principles outlined in section 2 of part i ( principle of stochastic reality ) ,let us consider for the granular entities ( be them particles , molecules , aggregates , clusters , etc . ) , forming the continuous fluid phase , a gpk equation of motion of the form where , .equation ( [ eq_10_2_1 ] ) represents a ornstein - uhlenbeck process , in which all the microscopic `` granules '' possess equal mass , in the presence of a force field and of stochastic fluctuations of poisson - kac nature , described by a -state finite poisson process modulating a system of stochastic acceleration vectors , .let , , be the partial probability densities associated with eq .( [ eq_10_2_1 ] ) , which satisfy the balance equations where , as above , and represent the nabla operators with respect to the - and -variables , respectively . as in the classical setting of the hydrodynamic limit from kinetic schemes, we are interested in the lower - order moments of the partial densities . in the present analysis we focus exclusively on the mass and momentum densities. 
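one plausible explicit form of the microdynamic equation ([eq_10_2_1]), written here only to fix ideas (whether the stochastic accelerations depend on position or velocity is not specified by the notation above), is
\frac{d{\bf x}(t)}{dt} = {\bf v}(t) , \qquad \frac{d{\bf v}(t)}{dt} = \frac{{\bf f}({\bf x}(t))}{m} + {\bf a}_{\chi(t)} ,
where \chi(t) is the -state finite poisson process and \{ {\bf a}_\alpha \} is the system of stochastic acceleration vectors.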
a complete analysis involving energy density will be developed in a forthcoming work .let , be the partial mass ( ) and momentum ( ) densities , respectively .the overall mass ( ) and momentum ( ) densities are the sum with respect to of the corresponding partial quantities from eq .( [ eq_10_2_2 ] ) it follows that satisfy the system of partial continuity equations that , once summed over , provide the overall continuity equation next , consider the partial momentum densities . multiplying eq .( [ eq_10_2_2 ] ) by and integrating over one obtains let be the comprehensive partial stress tensor , corresponding to the dyadic second - order term , that includes also the inertial contribution .the comprehensive partial stress tensor can be dissected into a partial inertial contribution , and into a partial stress tensor ( _ sensu stricto _ ) , which in turn can be developed , as in classical continuum mechanics , into a traceless stress tensor and into a compressive isotropic pressure contribution ( is the identity tensor ) , where . enforcing the above decompositions , one obtains the system of momentum balance equations , that represent the general setting of momentum transfer in the hydrodynamic limit of gpk theory for a single phase / single component moving continuum .summing eq .( [ eq_10_2_12 ] ) over , the balance equation for the overall momentum density is obtained where the overall stress tensor and pressure are simply the sum over of the corresponding partial quantities there are two main qualitative differences with respect to the classical hydrodynamic formulation of momentum transport : * the inertial term does not reduce to , but contains explicit reference ( memory ) of all the partial inertial contributions expressed by the term ; * there is an additional contribution expressed by the term accounting for the nonuniformity effects of the stochastic acceleration terms amongst the partial structures of the fluid mixture .this term obviously vanishes in the kac limit where , due to the fast recombination amongst the partial probability waves and , due to the zero - bias condition .it follows from the above observations , that in the gpk theory of hydrodynamics one can not reduce mass and momentum transport exclusively to the analysis of the overall fields and , but one is forced to solve simultaneously mass and momentum balance equations for the full system of partial field \{ , . also for the gpk mass and momentum balance equations , the concept of kac limit applies , and these equations should reduce for , keeping constant the nominal diffusivity , to the usual continuity and navier - stokes equations , upon a suitable assumption on the constitutive equations for . 
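to summarize the notation used above, a natural explicit choice (ours; the inclusion of the particle mass m in the densities is a normalization assumption) for the partial and overall mass and momentum densities, together with the overall continuity equation obtained by summing the partial ones, is
\rho_\alpha({\bf x},t) = m \int p_\alpha({\bf x},{\bf v},t) \, d{\bf v} , \quad {\bf w}_\alpha({\bf x},t) = m \int {\bf v} \, p_\alpha({\bf x},{\bf v},t) \, d{\bf v} , \quad \rho = \sum_\alpha \rho_\alpha , \quad {\bf w} = \sum_\alpha {\bf w}_\alpha , \quad \partial_t \rho + \nabla_x \cdot {\bf w} = 0 .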
a discussion on the constitutive equations for the partial stress tensors , as well as a comprehensive analysis of the gpk hydrodynamics and its qualitative differences with respect to the classical approach will be developed in a forthcoming article .in this paragraph we briefly discuss the modeling of chemical reaction kinetics within the framework of gpk theory .consider the simplest case of a bimolecular elementary isothermal chemical reaction in , the rate of which , in the mean - field limit , is given by where , are the molar concentrations of the two reacting species , and is the rate coefficient independent of the concentrations but eventually function of temperature .assume that the reacting process evolves in a fluid continuum where the reacting molecules of the two species are subjected to a deterministic drift and to stochastic fluctuations expressed by means of a gpk process .consider a gpk process possessing a finite number of states .let , be the partial ( molar ) concentrations of the two reacting species parametrized with respect to the stochastic state .the overall molar concentrations are just the sum of the partial concentrations with respect to the stochastic index let be a gpk process a admitting in the kac limit an effective diffusivity . taking into account the presence of the deterministic velocity field , the gpk balance equations for the partial concentrations of the two reacting species can be expressed by .observe that the way a chemical reaction can be introduced within the gpk balance equation is not unique .the two reacting contributions , , entering the balance equations for and , respectively correspond to the assumption that the reaction rate at the space - time point depends exclusively on the actual overall concentrations of the two reacting species at .other choices are also possible . in the kac limit , eqs .( [ eq_10_3_5 ] ) converge towards the system of two parabolic equations for the overall concentrations and , in point of fact , the formal structure of continuum gpk processes provides a natural way to account for the collision efficiency and its influence on the reaction rate , in an analogous way the reactive boltzmann equation does .this is by no mean surprising , due to the equivalence between cgpk process in the presence of nfpk kernels and the non - reactive boltzmann equation developed in section [ sec_3 ] .next , we consider briefly this class of models , in the absence of a deterministic velocity field ( for simplifying the notation ) , assuming a linear model where both and do not depend on the partial density waves .the state of the stochastic perturbation is parametrized with respect to the stochastic velocity vector , attaining values in , and the bimolecular reaction ( [ eq_10_3_1 ] ) is considered under isothermal conditions .consequently , the partial concentrations , in this setting depend continuously on the stochastic velocity vector . as regards the chemical kinetic contribution to the evolution of , , two reaction kernels , are introduced , representing the fraction per unit time of colliding molecules of and , respectively , possessing stochastic velocities and which perform the reaction , i.e. 
, that are able to overcome the reaction activation energy. although not written explicitly, these kernels depend on temperature through an arrhenius-kramers factor \exp(-e_0/k_b t), where e_0 is the reaction activation energy, k_b the boltzmann constant, and t the absolute temperature. stoichiometry dictates that , and, moreover, the reaction kernels can be assumed, for simplicity, symmetric with respect to their arguments, namely . taking into account the analysis developed in section [sec_3], the balance equations for , now read \partial_t c_a = - {\bf b} \cdot \nabla_x c_a - \int_{{\mathcal d}} k({\bf b},{\bf b}^\prime) \, \left[ c_a - c_a^\prime \right] d{\bf b}^\prime - c_a \, \int_{{\mathcal d}} r({\bf b},{\bf b}^\prime) \, c_b^\prime \, d{\bf b}^\prime \\ \partial_t c_b = - {\bf b} \cdot \nabla_x c_b - \int_{{\mathcal d}} k({\bf b},{\bf b}^\prime) \, \left[ c_b - c_b^\prime \right] d{\bf b}^\prime - c_b \, \int_{{\mathcal d}} r({\bf b},{\bf b}^\prime) \, c_a^\prime \, d{\bf b}^\prime \label{eq_10_3_8} where , , , , and is the transition kernel of the continuous gpk process. given a physically motivated expression for the reaction kernel , depending on the chemistry of the reactive step, the kac limit of the model, or alternatively the application of homogenization techniques in the long-term regime, provides an expression for the effective reaction coefficient entering the mean-field model ([eq_10_3_2]). the flexibility of gpk models, especially in their continuous formulation, provides a direct connection between momentum transfer and the collisional efficiency of a reactive process. in a similar way, it is possible to derive stochastic models for particle/antiparticle production/annihilation accounting for the conservation laws (parity, energy and momentum) of the process. the main physical issue in this case is not the annihilation contribution, which is substantially analogous to a bimolecular reaction, but the production term, which intrinsically depends on the vacuum fluctuations and involves the stochastic characterization of zero-point energy. the latter issue can be properly formalized in the present theory, embedding cgpk processes within the framework of the second quantization of the electromagnetic field and its stochastic characterization. we have introduced the concept of generalized poisson-kac processes, analyzed their structural properties (part i) and addressed their physical implications in statistical physics, hydrodynamics and transport theory (parts ii and iii).
in its very essence, a gpk process stems from the original intuition of marc kac of considering dichotomous velocity fluctuations possessing finite propagation velocity, and considers the fluctuating contribution as a transition amongst a finite number of stochastic states immersed in a markovian structure accounting for the transitions. the primitive statistical description of this process involves partial probability density functions , the spatio-temporal evolution of which follows a hyperbolic dynamics, corresponding to a planar wave-motion in the presence of recombination. this is the reason why they are also referred to as ``partial probability waves'', and the resulting macroscopic processes are indicated as ``undulatory transport models'', just to mark the wave-like nature of their basic statistical descriptors. the nonlinear extension of the theory, as well as the generalization to a continuum of states, provides the natural setting for obtaining straightforwardly a stochastic derivation of the kinetic boltzmann equation. this result opens up fundamental issues on the underlying physical nature of this equation, alternative to the purely mechanical (liouvillian) picture. in some sense, the stochastic derivation of the collisional boltzmann equation concludes the original kac's program in kinetic theory starting from the 1954 paper based on a markovian toy model for gas dynamics, and gives it new impetus for a thorough exploitation of a fully stochastic formulation of the boltzmann collisional dynamics. the gpk approach towards the boltzmann equation is consistent with the zermelo objection against the fully mechanical (hamiltonian) derivation of the boltzmann equation, which, if postulated, leads intrinsically to finite-time poincaré recurrences and to the lack of a truly irreversible behavior. albeit mitigated by the analysis of the order of magnitude of the recurrence time, zermelo's objection persists, from the strict mathematical and logical point of view, as an open wound in the connection between a mechanical (conservative) view of dynamics and the intrinsically irreversible nature of thermodynamics. all these issues on the foundations of the boltzmann collision equation will hopefully be developed in a subsequent communication. it is rather intuitive that the results obtained on the connections between kinetic theory and nfpk models have powerful implications, not only from a theoretical and thermodynamic point of view, but also in practical applications. the idea of developing a fully stochastic molecular simulator based on nfpk dynamics is not only intriguing, but also feasible and potentially computationally advantageous with respect to the existing methods. further studies will clarify and quantify this possibility. what is also interesting to observe is that the physical concept of collision amongst molecules corresponds, in the continuous gpk setting, to a markovian recombination amongst the partial probability waves, induced by the choice of the velocity as the stochastic vector-valued variable parametrizing the states of the cgpk process. apart from this result, gpk theory provides a simple and tractable class of processes that overcomes the intrinsic problems of wiener-driven stochastic models in describing physical reality (infinite propagation velocity), and makes it possible to derive from them new classes of transport equations in the hydrodynamic (continuum) limit. the main qualitative difference between
wiener-driven and gpk stochastic models resides in the _regularity issue_: the trajectories of gpk processes, for finite values of and , are, with probability , almost everywhere smooth curves of time, possessing fractal dimension and local hölder exponent . from this, finite propagation velocity follows. moreover, the kac-limit property provides the natural connection between gpk theory and the classical stochastic formulation of microdynamics based on brownian motion and wiener-driven langevin equations. this connection is not only related to the kac limit for , keeping the nominal diffusivity fixed, but also arises as an emergent property in the long-time regime. this justifies why brownian features can be observed at time scales much larger than the characteristic recombination scale. the asymptotic properties of gpk processes represent a bridge between gpk theory and the statistical physics based on brownian motion and wiener-driven processes that, as said, can be regarded as an emergent feature for long timescales. * gpk processes, once extended to space-time stochastic perturbations, provide a valuable tool for approaching, in a rigorous but safe way (as far as singularity issues are concerned), spatio-temporal stochastic dynamics and field-theoretical models (i.e., spde) driven by stochastic perturbations. we have outlined several simple linear examples in part ii, and the same approach can be extended to work out nonlinear models such as the burgers equation, the kpz model, -field stochastic quantization, etc. * the finite propagation velocity of gpk processes provides a safeguard in the extension of gpk processes to the relativistic case. * trajectory regularity, characterizing gpk processes, substantially simplifies all the subtleties and technicalities of stochastic (ito, stratonovich, klimontovich, etc.) calculus, as it reduces it to the simplest possible form, namely that of riemann-stieltjes integrals. there is another issue, mentioned throughout this work, that deserves further attention, as it marks a conceptual difference between wiener-driven langevin equations and the corresponding stochastic models driven by gpk processes. consider a langevin equation driven by a wiener forcing on the state variable . the wiener forcing expresses, in a lumped, coarse-grained way, a manifold of small-scale perturbations resulting from the microscopic interactions, internal to the system, and from the interactions with the surroundings.
as a result, the associated forward fokker - planck equation defines the evolution of the probability density function in which there is no further reference on the state of the stochastic perturbation : this makes this class of models strictly markovian .conversely , in the corresponding gpk case , the statistical description of the process still involves information about the stochastic perturbation : for this reason a system of partial probability densities , in the discrete case , , in the cgpk case , is required to describe the statistical evolution of the system .the fact that the statistical description of the process keeps memory of the state of the stochastic perturbation , so that system dynamics and stochastic perturbations can not be decoupled , other than in the kac limit , is the physical origin of the trajectory regularity of gpk processes .consequently , gpk processes are not strictly markovian , which respect to the overall probability density function , but an extended markovian property can be still established with respect to the complete probabilistic description accounting also for the state of the stochastic perturbation . the consequence of this property , as regards the transport equations in the hydrodynamic limit , is evident : a system driven by gpk fluctuations is fully described in the hydrodynamic limit by a family of partial mass and momentum densities , , and similarly for the other thermodynamic quantities , consistently with the retained information on the state of the stochastic perturbation .this , namely the partial - wave approach towards the coarse - graining of the microdynamic equation of motion , will be analyzed in future works . in any case, it represents a novel and promising alternative to the existing higher - moment expansions , that starting from the well - known grad s 13-moment approach , have been developed in the current literature on kinetic theory and statistical mechanics .all the higher - moment expansions , beyond the classical 5-mode approach associated with the collisional invariants , suffer the intrinsic _ vulnus _ , as regards the lack of positivity , that can be easily interpreted within the pawula theorem .in this framework , the use of gpk theory , and of its implications in the definition of the hydrodynamic limit , represents not only a new way for approaching the coarse - graining of microscopic dynamics towards the hydrodynamic equations for a continuum , but also the stochastic background for the development of a rigorous , stochastically consistent , formulation of the extended thermodynamic theories of irreversible processes , initiated with the works by mller and ruggieri , and subsequently elaborated by jou , lebon , casas - vazquez and many others , aimed at generalizing the classical de - groot mazur theory of irreversible processes , by introducing a more general definition of the thermodynamic state variables out of equilibrium in order to include the contribution of the fluxes .giona m , brasiello a and crescitelli s 2016 stochastic foundations of undulatory transport phenomena : generalized poisson - kac processes - part i basic theory , submitted to _ j. phys .giona m , brasiello a and crescitelli s 2016 stochastic foundations of undulatory transport phenomena : generalized poisson - kac processes - part ii irreversibility , norms and entropies , submitted to _ j. phys .zwanzig r 1973 _ j. stat .phys . _ * 9 * 215 van kampen n g 1981 _ j. stat. phys . _ * 25 * 431 mckean h p 1966 _ proc .sci . 
_ * 56 * 1907 frank t d 2010 _ nonlinear fokker - planck equations _( berlin : springer verlag ) balescu r 1975 _ equilibrium and nonequilibrium statistical mechanics _ ( new york : j. wiley & sons ) mischler s and mouhot c 2013 _ invent . math ._ * 193 * 1 michler s 2013 kac s chaos and kac s program , _ arxiv preprint _arxiv:1311.7544 wennberg b and wondmagegne y 2006 _ j. stat ._ * 124 * 859 carlen e , mustafa d and wennberg b 2015 _ j. stat ._ 158 1341 kac m 1956 foundations of kinetic theory , in _ proc .3rd berkeley symp ._ , j. neyman ( ed . ) , univ of california , vol . 3 , 171 giona m , brasiello a and crescitelli s 2016 markovian nature , completeness , regularity and correlation properties of generalized poisson - kac processes , submitted to _ j. stat ._ mller i and ruggeri t 2013 _ rational extended thermodynamics _( berlin : springer verlag ) jou d , casas - vazquez j and lebon g 1996 _ extended irreversible thermodynamics _( berlin : springer verlag ) jou d , casas - vazquez j and lebon g 1999 _ rep .phys . _ * 62 * 1035 arnold v i and khesin b a 1999 _ topological methods in hydrodynamics _( new york : springer science & business media ) chil m and childress s 2012 _ topics in geophysical fluid dynamics : atmospheric dynamics , dynamo theory , and climate dynamics _( new york : springer science & business media ) wong e and zakai m 1965 _ int . j. eng .* 3 * 213 wong e and zakai m 1965 _ ann .* 36 * 1560 shimizu h and yamada t 1972 _ prog .* 47 * 350 zermelo e 1896 _ annalen der physik _ * 57 * 485 zermelo e 1896 _ annalen der physik _ * 59 * 783 villani c 2006 _ j. stat .phys . _ * 124 * 781 parker e n 1979 _ cosmological magnetic fields : their origin and their activity _ ( oxford : clarendon press ) busse f h 1976 _ phys . earth .planet int . _* 12 * 350 proctor m r e and gilbert a d ( eds . ) 1994 _ lectures on solar and planetary dynamos _( cambridge : cambridge university press ) arnold v i and korkina e i 1983 _ moskov .vestnik ser .* 1 * 43 galloway d j and proctor m r e 1992 _ nature _ _ 356 _ 691 de groot s r and mazur p 1984 _ non - equilibrium thermodynamics _( new york : dover publ . ) naumann w 1985 _ physica a _ * 132 * 339 milonni p w 1994 _ the quantum vacuum : an introduction to quantum electrodynamics _ ( boston : academic press ) boltzmann l 1886 annalen der physik * 57 * 773 chandrasekhar s 1943 _ rev .phys . _ * 15 * 85 .giona m 2016 covariance and spinorial statistical description of simple relativistic stochastic kinematics , in preparation giona m 2016 relativistic analysis of stochastic kinematics , in preparation grad h 1949 _ comm .pure appl . math ._ * 2 * 331 pawula r f 1967 _ phys .rev . _ * 162 * 186
|
this third part extends the theory of generalized poisson-kac (gpk) processes to nonlinear stochastic models and to a continuum of states. nonlinearity is treated in two ways: (i) as a dependence of the parameters of the stochastic perturbation (intensity of the stochastic velocity, transition rates) on the state variable, in analogy with nonlinear langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear fokker-planck-kac models). several numerical and physical examples illustrate the theory. combining nonlinearity and a continuum of states, gpk theory provides a stochastic derivation of the nonlinear boltzmann equation, furnishing a positive answer to kac's program in kinetic theory. the transition from stochastic microdynamics to transport theory within the framework of the gpk paradigm is also addressed.
|
the essence of programming is to manipulate data structures through procedures. objects, which group (_encapsulate_) data and the procedures (_methods_) to use them, are at the basis of object-oriented programming. when the decomposition into objects is adequate, object programming is usually considered as leading to more natural and safer programs, as it reduces the distance between the programmer's intuition and the program structure. for example, let us consider a physical system made of atoms. in an object-oriented approach, an atom is quite naturally represented by an object containing the atom state (e.g. position, velocity, and mass) and the procedures to transform the atom state (e.g. computing the next position). some systems to simulate physics are coded with objects, such as lammps, programmed in c++. like more traditional systems programmed in fortran (e.g. dl_poly), their central structure basically consists of arrays storing data, and loops to process the array elements in turn. a general issue with such structures is the difficulty of adding (and removing) data, or objects, during the course of a simulation (_dynamically_). some molecular simulation systems are able to deal with the dynamic creation of chemical bonds (this is for example the case of lammps, in which the reaxff potential is implemented), but, to our knowledge, no simulation system is able to deal with the dynamic creation/destruction of general components (e.g. atoms). the dynamic creation/destruction of components can be useful to model multi-scale systems, in which the scale of sub-systems is allowed to change during execution. let us consider a molecule simulated at the all-atom (aa) scale; when the molecule is isolated from the others, it may be possible to switch its scale to coarse-grained (cg) and thus to simulate the molecule more efficiently. the inverse scale change may be mandatory when molecules become so close that their interactions must be described at the lowest aa level. in both cases, a change of scale can be seen as the _replacement_ of the molecule by a new one. for example, the aa to cg change of scale consists in the creation of a new cg molecule, simultaneous with the destruction of the old aa molecule. in this text, we propose an approach to the implementation of physical simulations in which the dynamic creation or destruction of components can be simply and naturally expressed. the objective is the design of a molecular dynamics (md) system allowing multi-scale modeling. we use a programming approach, called reactive programming (rp), with a generalized notion of object: an object encapsulates not only its data and methods to manipulate them, but also its _behaviour_, which can be used to code its interactions with the other objects.
in this context, a simulation is structured as an assembly of interacting objects whose behaviours are run in a coordinated way , by an execution machine able to dynamically create and run new objects , and to destroy others .the paper is structured as follows : section [ section : rationale ] justifies the use of rp for the implementation of physical simulations .section [ section : reactive - programming ] describes rp in general and more precisely the sugarcubes framework .md is described in section [ section : molecular - dynamics ] .the implementation of the md simulation system is described in section [ section : md - system ] .related work is considered in section [ section : related - work ] , and section [ section : conclusion ] concludes the paper , giving several tracks for future work .when it is not possible , while considering a physical system , to find exact analytical solutions describing its dynamics , the numerical simulation approach becomes mandatory .numerical simulations are based , first , on a discretization of time and second , on a stepwise _ resolution method _ implementing an integration algorithm .the possibility to get analytical solutions is basically limited by complexity issues : complex systems can not usually be analytically solved .this renders numerical simulations an important topics for physics .[ [ parallelism . ] ] parallelism .+ + + + + + + + + + + + at the logical level , it is often the case that the processing of a complex system is facilitated by decomposing it into several sub - systems linked together .the sub - systems can thus be considered as independent components , running and interacting in a coordinated way .recast in the terminology of informatics , one would say that the sub - systems are run _ in parallel _ in an environment where they share the same notion of time . here , we are not speaking of real time but of a _ logical time _, shared by all the sub - systems . in this approach ,the steps of the numerical resolution method used to simulate the system have to be mapped onto the logical time ; we shall return on this subject later .since we are assuming time to be logical , we should also qualify parallelism as being logical : parallelism is not introduced here to accelerate execution ( for example , by using several processors ) but to describe a complex system as a set of parallel entities .the use of real - parallelism ( offered by multiprocessor and multicore machines ) to accelerate simulations is of course a major issue , but we think that it is basically a matter of _ optimisation _ , while the use of logical parallelism is basically a matter of _ expressivity_. [ [ determinism . ] ] determinism .+ + + + + + + + + + + + simulations of physical systems must verify a strong and mandatory constraint : no energy should be either created or destroyed during the simulation of a closed system ( without interaction with the external world ) . in other words , _ total energymust be preserved _ during the simulation process .this is a fundamental constraint : it corresponds to the _ reversibility in time _ of the newton s laws which is at the basis of classical physics .this constraint can be formulated differently : classical physics is _deterministic_. in informatics , this has a deep consequence : as physical simulations should be deterministic , one should give the preference to programming languages in which programs are _deterministic by construction_. 
note however , that parallelism and determinism do not usually go well together ; there are very few programming languages which are able to provide both ; we shall return on this point later .[ [ broadcast - events . ] ] broadcast events .+ + + + + + + + + + + + + + + + + in addition to time reversibility , classical physics rests on a second fundamental assumption : forces ( gravitation , electrostatic and inter - atomic forces ) are instantaneously transmitted .instantaneity in our case should accommodate with the presence of the logical time ; actually , instantaneity means that action / reaction forces ( third newton s law ) should always be exerted during the same instant of the logical time .it is in the nature of classical physical forces to be broadcast everywhere . in informatics terms , forces correspond to information units that are instantaneously broadcast to all parallel components .this vision of forces actually identifies forces with _ instantaneously broadcast events _ ; we shall see that this notion of instantaneously broadcast event exists in the so - called synchronous reactive formalisms , which thus appear to be good candidates to implement physical simulations .[ [ modularity . ] ] modularity .+ + + + + + + + + + + it may be the case that some components of a system have to be removed from the system because their simulation is no more relevant ( imagine for example an object whose distance from the others becomes greater than some fixed threshold , so that its contribution may be considered as negligible ) .destruction of components is usually not a big issue in simulations : in order to remove an object from the global system , it may be for example sufficient to stop considering it during the resolution phase .dually , in some situation , new components may have to be created , for example in chemistry where chemical bonds linking two atoms can appear in the course of a simulation .dynamic creation is more difficult to deal with than destruction ; for example , a new created object must be introduced only at specific steps of the resolution method , in order to avoid inconsistencies . in informatics terms , both dynamic destruction and dynamic creation of parallel components should be possible during the course of simulations .this possibility is often called _ modularity _ : in a modular system , new components can appear or disappear during execution , without need to change the other components .note that the notion of broadcast event fits well with modularity as the introduction of a new component listening or producing an event , or the removal of an already existing one , does not affect the communication with the other parallel components .[ [ hybrid - systems . ] ] hybrid systems .+ + + + + + + + + + + + + + + there exist _ hybrid _ physical systems which mix continuous and discrete aspects .for example , consider a ball linked by a string to a fixed pivot and turning around the pivot ( continuous aspect ) ; one can then consider the possibility for the string to be broken ( discrete aspect ) .numerical simulations of hybrid systems are more complex than those of standard systems : in the previous example , the string component has to be removed from the simulation when the destruction of the string occurs , and the simulation of the ball has to switch from a circle to a straight line . 
in this respect ,a hybrid system can be seen as gathering several related systems ( for example , the system where the ball is linked to the pivot , and the system where it is free ) ; then , the issue becomes to define when and how to switch from one system to another .note that the simulation of hybrid systems is related to modularity : in the previous example , one can consider that the breaking of the string entails on the one hand the destruction of both the string component and the circular - moving ball , and on the other hand the _ simultaneous creation _ of a straight - moving new ball appearing at the position of the old one .[ [ resolution - method . ] ] resolution method .+ + + + + + + + + + + + + + + + + + let us discuss now the relation between logical time and the discretized time of the resolution method .we shall call _ instant _ the basic unit of the logical time ; thus , a simulation goes through a first instant , then a second , and so on , until termination .the only required property of instants is convergence : all the parallel components terminate at each instant .execution of instants does not necessarily take the same amount of real time ; actually , real time becomes irrelevant for the logic of simulations : the basic simulation time is the logical time , not the real time .the numerical resolution method works on a time discretized in _ time - steps _ during which forces integration is performed according to newton s second law .typically , in simulations of atoms , time - steps have a duration of the order of the femto - second .several steps of execution may be needed by the resolution algorithm to perform one time - step integration ; for example , two steps are needed by the velocity - verlet integration scheme to integrate forces during one time - step : positions are computed during the first step , and velocities during the second . actually , a quite natural scheme associates _ two _ instants with each time - step : during the first instant , each component provides its own information ; the global information produced during the first instant is then processed by each component during the second instant .note that such a two - instant scheme maps quite naturally to the velocity - verlet method : each step of the resolution method is performed during one instant .[ [ multi - time - aspects . ] ] multi - time aspects .+ + + + + + + + + + + + + + + + + + + the use of the same time - step during the whole simulation is not mandatory : one calls _ multi - time _ a system in which the time - step of the resolution method is allowed to vary during the simulation .the change of time - step can be _ global _ , meaning that it concerns all the objects present in the simulation .this can be helpful for example to get a more accurate simulation when a certain configuration of objects is reached ( for example , when objects become confined in a certain volume ) .alternatively , the change of time - step can be _ local _, i.e. concerning only certain objects , but not all .this means that different components of the same simulation are simultaneously simulated using different time - steps. 
this could be the case for a system in which _ diffusion _ aspects occur ; in such a system , objects in some regions are separated by large distances and evolve freely , simulated with large time - steps , while in other regions , objects are closely interacting and should thus be simulated using smaller time - steps .we will return later on a situation of this kind .we shall call such systems _ multi - time , multi - step _ systems ( mtms systems , for short ) .a major interest of mtms is that loosely - coupled objects ( with rare interactions ) can be simulated during long time periods . in this text, we consider the reactive programming ( rp ) approach to simulate physical systems .the choice of rp is motivated by the fact that rp genuinely offers logical parallelism , instantaneously broadcast events , and dynamic creation / destruction of parallel components and events .moreover , we choose a totally deterministic instance of rp , called sugarcubes , based on the java programming language : indeed , in sugarcubes , programs are deterministic by construction .to illustrate our approach , we shall consider the implementation of a mtms simulation system of molecular dynamics ( md ) in the context of java , with the java3d library for 3d visualisation .reactive programming ( rp ) offers a simple framework , with a clear and sound semantics , for expressing logical parallelism . in the rp approach , systems are made of parallel components that share the same _instants_. instants thus define a _ logical clock _ , shared by all components .parallel components synchronise at each end of instant , and thus execute at the same pace . during instants, components can communicate using _instantaneously broadcast events _ , which are seen in the same way by all components .there exists several variants of rp , which extend general purpose programming languages ( for example , reactivec which extends c , and reactiveml which extends the ml language ) . among these reactive frameworksis sugarcubes , which extends java . in sugarcubes ,the parallel operator is very specific : it is totally deterministic , which means that , at each instant , a sugarcubes program has a unique output for each possible input .actually , in sugarcubes parallelism is implemented in a sequential way . due to its `` determinism by construction '', we have choosen to use the sugarcubes framework to implement the md system ; we are going to describe sugarcubes in the rest of the section .the two main sugarcubes classes are instruction and machine .instruction is the class of reactive instructions which are defined with reference to instants , and machine is the class of reactive machines which run reactive instructions and define their execution environment .the main instructions of sugarcubes are the following ( their names always start by the prefix sc ) : * sc.nothing does nothing and immediately terminates .* sc.stop does nothing and suspends the execution of the running thread for the current instant ; execution will terminate at the next instant .* sc.seq ( inst1,inst2 ) behaves like inst1 and switches immediately to inst2 as soon as inst1 terminates .* sc.merge ( inst1,inst2 ) executes one instant of instructions inst1 and inst2 and terminates if both inst1 and inst2 terminate .execution always starts by inst1 and switches to inst2 when inst1 either terminates or suspends . 
* sc.loop ( inst ) executes inst cyclically: execution of inst is immediately restarted as soon as it terminates. one supposes that it is not possible for inst to terminate at the same instant it is started (otherwise, one would get an _instantaneous loop_ which would cycle forever during the same instant, thus preventing the reactive machine from detecting the end of the current instant). * sc.action ( jact ) runs the execute method of the java action jact (of type javaaction); this can happen several times, as the action can be in a loop. * sc.generate ( event , value ) generates event, with value as associated value, and immediately terminates. * sc.await ( event ) terminates immediately if event is present (i.e. it has been previously generated during the current instant); otherwise, execution is suspended, waiting either for the generation of event or for the end of the current instant, detected by the reactive machine. * sc.callback ( event , jcall ) executes the java callback jcall (of type javacallback) for each value generated with event during the current instant. in order not to lose possibly generated values, the execution of the instruction lasts during the whole instant and terminates at the next instant. * sc.until ( event , inst ) executes inst and terminates either because inst terminates, or because event is present. the sequence and merge operators are naturally extended to more than two branches; for example sc.seq ( i1,i2,i3 ) is the sequence of the three instructions i1, i2, i3. a reactive machine of the class machine runs a program (of type program) which is an instruction (initially sc.nothing). new instructions added to the machine are put in parallel (merge) with the previous program. additions of new instructions do not occur during the course of an instant, but only at beginnings of instants. basically, a machine cyclically runs its program, detects the end of the current instant, that is, when all branches of merge instructions are either terminated or suspended, and then goes to the next instant. note that the execution of an instruction by a machine during one instant can take several phases: for example, consider the following code, supposing that event e is not already generated:
....
sc.merge (
   sc.await ( e ) ,
   sc.generate ( e , null ) )
....
execution switches to the await instruction (line 2), which is suspended, as e is not present. then, execution switches to the generate instruction (line 3), which produces e and terminates. the executing machine detects that execution has to be continued, because one branch of a merge instruction is suspended, awaiting an event which is present. thus, the await instruction is re-executed, and it now terminates, as e is present. the merge instruction is also now terminated. the execution of a program by a machine is totally deterministic: only one trace of execution is possible for a given program. the execution of sugarcubes programs is actually purely sequential: the parallelism presently offered by sugarcubes is a logical one, not a real one; the issue of real parallelism is considered in sec. [section:conclusion]. numerical simulation at atomic scale predicts system states and properties from a limited number of physical principles, using a numerical resolution method implemented with computers. in molecular dynamics (md), systems are made of organic molecules, metallic atoms, or ions. the goal is to determine the temporal evolution of the geometry and energy of atoms.
at the basis of md is classical (newtonian) physics, with the fundamental equation \vec{f} = m \, \vec{a} , where \vec{f} is the force applied to a particle of mass m and \vec{a} is its acceleration (second derivative of the position with respect to time). a _force-field_ is composed of several components, called _potentials_ (of bonds, valence angles, dihedral angles, van der waals contributions, electrostatic contributions, _etc_.), and is defined by the analytical form of each of these components, and by the parameters characterizing them. the basic components used to model molecules are the following: * atoms, with 6 degrees of freedom (position and velocity); * bonds, which link two atoms belonging to the same molecule; a bond between two atoms tends to keep the distance between them constant. * valence angles, which are the angles formed by two adjacent bonds and in the same molecule; a valence angle tends to keep the angle constant. a valence angle thus involves the positions of three atoms. * torsion angles (also called _dihedral angles_) are defined by four atoms consecutively linked in the same molecule: is linked to , to , and to ; a torsion angle tends to privilege particular angles between the planes and . * van der waals interactions apply between two atoms which either belong to two different molecules, or are not linked by a chain of less than three (or sometimes, four) bonds, if they belong to the same molecule. they are pair potentials. all these potentials depend on the nature of the concerned atoms and are parametrized differently in specific force-fields. molecular models can also consider electrostatic interactions (coulomb's law), which are pair potentials, as van der waals potentials are; their implementation is close to that of van der waals potentials, with a different dependence on distance. intra-molecular forces (bonds, valence angles, torsion angles) as well as inter-molecular forces (van der waals) are conservative: the work done between two points does not depend on the path followed between these two points. thus, forces can be defined as derivatives of scalar fields. from now on, we consider that potentials are scalar fields and we have \vec{f} = - \nabla_{\vec{r}} \, u(\vec{r}) , where \vec{r} denotes the coordinates of the point on which the force applies, and u is the potential from which the force derives. the precise definition of the application of forces according to a specific force-field (namely, the opls force-field) is described in detail in , from which we have taken the overall presentation of md. we now describe the rationale for the choice of rp to implement md. the choice of rp, and more specifically of sugarcubes, is motivated by the following reasons: * md systems are composed of separate, interacting components (atoms and molecules). it seems natural to consider that these components execute in parallel. in standard approaches, there is generally a ``big loop'' which considers components in turn (components are placed in an array). this structuration is rather artificial and does not easily support dynamic changes of the system (for example, additions of new components or removals of old ones, things that one can find in modeling chemical reactions). * in md simulations, time is discrete, and the resolution method which is at the heart of simulations is based on this discrete time. in rp, time is basically discrete, as it is decomposed in instants.
thus , rp makes the discretisation of time which is at the basis of md very simple . *md is based on classical ( newtonian ) physics which is deterministic .the strict determinism of the parallel operator provided by sugarcubes reflects the fundamental determinism of newtonian physics . at implementation level, it simplifies debugging ( a faulty situation can be simply reproduced ) . at the physical level, it is mandatory to make simulations reversible in time . * in classical physics, interactions are instantaneous which can be quite naturaly expressed using the instantaneously broadcast event notion of rp . in conclusion ,the use of rp for md simulations is motivated by its following characteristics : modularity of logical parallelism , intrinsic discretisation of time due to instants , strict determinism of the parallel operator , instantaneity of events used to code interactions .let us now consider the use of rp to implement molecular dynamics .a molecular system consists in a set of molecules , each molecule being made of atoms , bonds , valence angles and torsion angles . in the approach we propose , the molecule components ( atoms , bond , angles ) are _ programs _ that are executed under the supervision of another main program called a _reactive machine_. the reactive machine is in charge of executing the components in a coordinated way , allowing them to communicate through _ events_. events are broadcast to all the components run by the reactive machine , that is , all components always `` see '' an event in the same way : either it is present for all components if it is generated by one of them , or it is absent for all components if it is not generated during the instant .all events are reset to absent by the reactive machine at the beginning of each instant .values can be associated with event generations . in order to process the values generated with an event, a component has to wait during the whole instant , processing the values in turn , as they are generated . the reactive machine proceeds in instants : the first instant is executed , then the second , and so on indefinitely .all components ( atoms , bonds , etc ) in the machine are run at each instant and there is an implicit synchronization ( synchronization barrier ) of all the components at the end of each instant .in this way , one is sure that all component have finished their reaction for the current instant and have processed all the generated events and all their values before the next instant can start . basically , this mode of execution is synchronous parallelism .the steps of the resolution method ( velocity - verlet ) are identified with the instants of the reactive machine .the positions of atoms are computed during one step of the resolution method , and the velocities during the next step .actually , at each instant , atoms generate their position and collect the various forces exerted on them ( by bonds , angles , etc ) .the new positions are computed from previously collected information at even instants and the new velocities are computed at odd instants , following the two - step scheme of the velocity - verlet numerical resolution method .note that the new positions and velocities are computed by the atom itself : we say that they are parts of the atom _behavior_. 
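for reference , the two - step scheme just described is the standard velocity - verlet update ; the following is a sketch in a notation chosen here for illustration ( r , v and a denote position , velocity and acceleration of an atom of mass m , and \delta t the time - step ) , so the symbols may differ slightly from those used later in the text :

\begin{aligned}
\mathbf{v}\left(t+\tfrac{\delta t}{2}\right ) & = \mathbf{v}(t ) + \tfrac{\delta t}{2}\,\mathbf{a}(t ) , \\
\mathbf{r}(t+\delta t ) & = \mathbf{r}(t ) + \delta t\,\mathbf{v}\left(t+\tfrac{\delta t}{2}\right ) , \\
\mathbf{a}(t+\delta t ) & = \mathbf{f}\big(\mathbf{r}(t+\delta t)\big)/m , \\
\mathbf{v}(t+\delta t ) & = \mathbf{v}\left(t+\tfrac{\delta t}{2}\right ) + \tfrac{\delta t}{2}\,\mathbf{a}(t+\delta t ) .
\end{aligned}

in the reactive implementation , the first two lines are performed at even instants and the last two at odd instants , which is exactly the alternation of positions and velocities described above .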
strictly speaking , an atom is a structure that encapsulates data ( in particular , position and velocity ) together with a behavior which is a program intended to be run by the reactive machine in which the atom is added .the good programming practice is that the atom s behavior is the only component that should access the atom s data .this discipline entails the absence of time - dependent errors . as direct access to the atom s datais unwilled , events are the only means for a component to influence an atom . for example , in order to apply a force to an atom , a component generates an event whose value is the force ; the atom should wait for the event and process the generated values ; in this way , the atom is able to process the force applied to it .the constuction of molecules is a program whose execution adds the molecule components into the reactive machine . the main steps to simulate a molecular system are : 1 ) define a reactive machine ; 2 ) run a set of molecules in order to add them in the machine ; 3 ) cyclically run the machine . in the rest of this section, we give a brief overview of the various programs that are used to build a simulation .note that these are small pieces of code , that we hope to be natural and easily readable .this depart from standard md system descriptions , which are usually decomposed in procedures whose chaining of calls is poorly specified . in our approach ,the scheduling of the various sub - programs is made clear and unambiguous .we shall first describes ( generic and specific ) atoms , then intra - molecular components ( bonds and angles ) .we will also consider inter - molecular interactions .then , we will explain how molecules are built from the previous atoms and components .an atom cyclically collects the constraints issues from bonds , valence angles , and dihedrals , then computes one step of the resolution method , and finally visualizes itself .this behavior can be preempted by a kill signal ( generated for example when the molecule to which the atom belongs is destroyed ) .it is coded by the following sugarcubes program : .... sc.until ( killsignal , sc.loop ( sc.seq ( collection ( ) , sc.action ( new resolution ( this ) ) , sc.action ( new paint3d ( this ) ) ) ) ) .... a constraint is a force that is added to the atom .the constraints are received as values of a specific event associated with the atom ( generation of this event is considered in [ subsec : components ] ) .the collection of constraints is performed by a program which is returned by the following function collection : .... program collection ( ) { return sc.callback ( constraintsignal , new collectconstraints ( this ) ) ; } .... the collectconstraints java callback is defined by : .... public class collectconstraints implements javacallback { final atom me ; public void execute ( final reactiveengine _ , final object args ) { vector3d f = ( vector3d)args ; utils.add ( me.force,f ) ; } public collectconstraints ( atom me ) { this.me = me ; } } .... 
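to make the event - based application of forces concrete , here is a minimal sketch of a component fragment that pushes a force onto an atom ; it follows the loop / seq / generate / stop pattern used by the molecule components described below ( computeforce and the field f are illustrative names , not part of the actual system ) :

....
// hypothetical fragment : at each instant , compute a force and send it to the atom
// through its constraint event ; the collectconstraints callback above adds it up .
sc.loop (
    sc.seq (
        sc.action ( new computeforce ( this ) ) ,    // assumed to fill the vector3d field f
        sc.generate ( atom.constraintsignal , f ) ,  // the force is the value of the event
        sc.stop ( )                                  // avoid an instantaneous loop
    ) )
....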
vector3d is the type of 3d vectors .the class utils provides several methods to deal with vectors : vect creates a vector between two atoms ; normalize normalizes a vector ( same direction , but unit length ) ; sum is the vector addition ; perp is the cross - product of vectors ; opposite defines the opposite vector ; finally , extprod multiplies a vector by a scalar .the addition `` in place '' utils.add ( x , y ) is equivalent to x = utils.sum ( x , y ) .we have choosen to define the collection of constraints as a function ( and not to inline its body in the atom behavior ) to allow specific atoms to redefine it ( actually , to extend it ) for their specific purpose ; this is considered in [ subsub : spec - atom ] .action resolution performs the resolution method for the atom ; it is described in [ subsub : resolution ] .paint3d asks for the repainting of the atom ; for the sake of simplicity , we do not consider it here .the resolution method used is the _ velocity - verlet _method .let * r * be the position ( depending of the time ) of an atom , * v * its velocity , and * a * its acceleration .the _ velocity - verlet _ method is defined by the following equations , where is a time interval : implementation proceeds in two steps : 1 .compute the velocity at half of the time - step , from previous position and acceleration , by : + + use the result to compute the position at full time - step by : 2 .get acceleration from forces applied to the atom , and compute velocity at full time - step using the velocity at half time - step by : in order to allow dynamic introduction of new molecules in the system , they should only be introduced at instants corresponding to the same step of the resolution method .note that , otherwise , the processing of lj forces between atoms belonging to two distinct molecules could be asymetric , which could introduce fake energy in the system .one choses to introduce molecules , and thus atoms , only at even instants .the _ velocity - verlet _ resolution is coded by the following class resolution : .... public class resolution implements javaaction { final atom atom ; boolean started = false ; final vector3d acceleration = new vector3d ( ) ; public resolution ( atom atom ) { this.atom = atom ; } public void execute ( final reactiveengine _ ) { double dt = atom.molecule.context.timestep ; boolean eveninstant = ( 0 = = atom.workspace.instant % 2 ) ; if ( ! started & & !eveninstant ) return ; else started = true ; if ( dt != 0 ) { if ( eveninstant ) step1 ( dt ) ; else step2 ( dt ) ; } atom.resetforce ( ) ; } void step1 ( double dt ) { utils.add ( atom.velocity,utils.extprod ( 0.5*dt , acceleration ) ) ; utils.add ( atom.position,utils.extprod ( dt , atom.velocity ) ) ; } void step2 ( double dt ) { utils.extprod ( acceleration,1/atom.mass , atom.force ) ; utils.add ( atom.velocity,utils.extprod ( 0.5*dt , acceleration ) ) ; } } .... the control of the instant at which atom resolution is started is done at lines 12 - 14 .equation [ verlet : velocity1 ] is coded at line 22 .[ verlet : position ] is then coded at line 23 ( it uses the previous atom position ) .the acceleration of the atom is computed from the force exerted on it ( second newton s law ) at line 32 .then , eq . [ verlet : velocity2 ] is computed at line 28 .note that , for all atoms , the forces computed during an odd instant are determined from the positions computed during the previous even instant .we now consider specific atoms , e.g. 
carbon atoms , for which we have to deal with lj interactions .the collection function is extended for this purpose .a specific event is defined for each kind of atom , on which atoms signal their existence . in this way, an atom can collect all the signaling events and compute the forces induced by the lj interactions with the other atoms .the collection function of a carbon atom is for example defined by : .... program collection ( ) { return sc.seq ( sc.generate ( csignal , this ) , sc.merge ( super.collection ( ) , collectlj ( csignal , new ljpotential ( ljc_c ) ) , collectlj ( hsignal , new ljpotential ( ljc_h ) ) , collectlj ( osignal , new ljpotential ( ljc_o ) ) ) ) ; } .... note that this definition actually extends the previous collection method of standard atom ; this method continues to be called ( super.collection ( ) ) but is now put in parallel with the specific treatments of lj interactions .the collection of the interactions corresponding to a specific kind of atoms is coded by : .... program collectlj ( identifier signal , potential potential ) { return sc.callback ( signal , new collectinteractions ( potential , this ) ) ; } .... the collectinteractions callback applies the computeforce method of the potential parameter to all the atoms ( except itself ) which signal their presence through the parameter signal , and adds the obtained force to the previously collected forces .we now consider the way intra - molecular forces are produced and applied to atoms .the application of forces to atoms from a potential is defined in . here , we shall only consider bonds which are the simplest components .the treatment of the others components ( valence and torsion angles ) is very similar .htb ] a _ harmonic bond potential _ is a scalar field which defines the potential energy of two atoms placed at distance as : where is the strength of the bond and is the equilibrium distance ( the distance at which the force between the two atoms is null ) .we thus have : bonds are coded by the class harmonicbond which has the following behavior : .... sc.loop ( sc.seq ( sc.action ( new controllength ( ) ) , sc.generate ( first.constraintsignal,fa ) , sc.generate ( second.constraintsignal,fb ) , sc.action ( new paint3d ( this ) ) , sc.stop ( ) ) ) .... the controllength action is called at each instant , to determine the force to be applied to the two atoms linked by the bond .the application of forces is realized through the constraintsignal of the two atoms .the applied forces are the values generated with these events .note the presence of the stop statement to avoid an instantaneous loop ( which would produce a warning message at each instant ) .the controllength action sets the force field of class harmonicbond and is defined by : .... public class controllength implements javaaction { public void execute ( final reactiveengine _ ) { double dist = utils.distance ( a , b ) ; double diff = dist - length ; energy = strength * diff * diff ; dudr = 2.0 * strength * diff ; vector3d v12 = utils.vect ( a , b ) ; v12.normalize ( ) ; utils.extprod ( fa , dudr , v12 ) ; utils.extprod ( fb ,- dudr , v12 ) ; } } .... eq . 
[ bond : potential ] is coded at line 7 , and eq .[ bond : force ] at line 8 .the force to be applied to the first atom is computed at line 11 ( the force to be applied to the second is the opposite ) .we are considering molecules made of carbon and hydrogen atoms ( linear alkane ) , as shown on fig .[ figure : carbonchain ] .the two extremal carbon atoms have three hydrogen atoms attached to them , while the others have two .the number of carbon atoms is a parameter . [ !htb ] these molecules are coded by the class carbonchain . the following method builds a carbon chain with cnum carbon atoms : .... public void build ( ) { buildbackbone ( ) ; addtop ( ) ; for ( int k = 1 ; k < cnum-1 ; k++ ) addh2 ( k ) ; addbottom ( ) ; createbonds ( ) ; createangles ( ) ; createdihedrals ( ) ; } .... the backbone of carbon atoms is built by the call to buildbackbone .methods addtop and addbottom add 3 hydrogen atoms to the extremities of the molecule , and addh2 adds 2 hydrogens to each carbon , except the extremities .the molecule components are created by the 3 methods createbonds , createangles , and createdihedrals .the crucial point is that created molecules have an energy which is minimal .minimality is obtained by placing atoms at positions compatibles with the potentials of the molecule components .let us consider how this is done for the two hydrogens attached to each carbon , except the extremities .one first defines the ( equilibrium ) length lch of bonds between carbon and hydrogen atoms , and the ( equilibrium ) valence angle ahch between two hydrogens and one carbon atoms . .... double lch = bondc_h[1 ] ; double ahch = angleh_c_h[1 ] ; double cos = lch * math.cos ( ahch /2 ) ; double sin = lch * math.sin ( ahch /2 ) ; .... the addh2 method is defined by : .... void addh2 ( int k ) { atom a = backbone[k-1 ] ; atom b = backbone[k ] ; atom c = backbone[k+1 ] ; vector3d ba = utils.vect ( b , a ) ; vector3d bc = utils.vect ( b , c ) ; vector3d p = utils.normalize ( utils.sum ( ba , bc ) ) ; vector3d n = utils.normalize ( utils.perp ( ba , bc ) ) ; vector3d u = utils.extprod ( -cos , p ) ; vector3d v = utils.extprod ( -sin , n ) ; vector3d w = utils.sum ( u , v ) ; vector3d q = utils.sum ( u , utils.opposite ( v ) ) ; atom h1 = new h ( this , utils.sum ( b.position,w),b.velocity ) ; atom h2 = new h ( this , utils.sum ( b.position,q),b.velocity ) ; others [ k ] = new atom [ 2 ] ; others [ k][0 ] = h1 ; others [ k][1 ] = h2 ; } .... atoms a , b , c are three successive carbon atoms , and b is the carbon on which two hydrogens have to be attached . two hydrogens atoms h1 and h2 are created and placed at their correct equilibrium positions . the two hydrogens are made accessible by the others array of b. by this construction , the two planes h1bh2 and abc are orthogonal , the angle h1bh2 is equals to ahch , and the distances h1b and h2b are both equal to lch . we now consider the creation of bonds . for the sake of simplicity , we do not consider the other components , which are processed in a similar manner .bonds are created by the following method : .... void createbonds ( ) { for ( int k = 0 ; k < cnum - 1 ; k++ ) { new harmonicbond ( this , backbone[k],backbone[k+1],bondc_c ) ; } for ( int k = 0 ; k < cnum ; k++ ) { atom c = backbone [ k ] ; for ( int l = 0 ; l < others [ k].length ; l++ ) { atom a = others[k][l ] ; if ( a instanceof h ) new harmonicbond ( this , c , a , bondc_h ) ; else if ( a instanceof o ) new harmonicbond ( this , c , a , bondc_o ) ; } } } .... 
lines 12 - 13 consider the case of oxygen atoms , to build acid molecules , which is not considerered here .the molecule shown on fig .[ figure : carbonchain ] is made of 20 atoms , 19 bonds , 36 valence angles , and 45 dihedral angles .reactive machines are basically provided by the class simulation which extends the class machine .an application can simply be defined as an extension of simulation , as in : .... public class minimalapp extends simulation { int cnum = 6 ; double timestep = 1e-3 ; void molecule ( double x , double y , double z ) { molecule mol = new carbonchain ( this , cnum , x , y , z,0,0,0 ) ; mol.context.timestep = timestep ; mol.build ( ) ; mol.registerin ( this ) ; } public minimalapp ( ) { createuniverse ( ) ; double dist = 0.4 ; molecule ( -dist,0.5,0 ) ; molecule ( dist,0.5,0 ) ; } public static void main ( string [ ] args ) { standalone ( new minimalapp ( ) ) ; } } .... the number of carbon atoms cnum is set to 6 and the time - step is set to the femto - second ( lines 3 and 4 ; the basic time unit of the system is the pico - second ) . a function which creates a molecule is defined lines 5 - 11 .the molecule is built and registered in the simulation ( which is denoted by this ) ; the registration of the molecule entails the registration of all its components .the time - step of the created molecule is also set by the function .the constructor of the class is defined in lines 12 - 18 .first , the createuniverse method provided by java3d is called to initialise the graphics , then two molecules are created .the definition of the main java method terminates the definition of the class minimalapp .the intial state of the simulation is shown on left of fig .[ figure : simul1 ] and the result after 50 ns ( instants ) is shown on the right .the evolution of the energy up to 200 ns ( the internal energy unit is ) is shown on fig .[ figure : stability ] to illustrate the stability of the resolution ( actually , stability has been tested up to one micro - second , that is instants ) . the mean value is with standard deviation .the energy is negative as result of the attraction due to van der waals forces . [ !htb ] [ ! htb ]the domain of physical simulations is huge and we shall thus only consider the use of reactive programming for implementing them , and the implementations of md systems . the application of reactive programming to newtonian physics has been initiated by alexander samarin in where several 2d `` applets '' are proposed to illustrate the approach . cellular automata ( ca ) have been used in several contexts of physics .the implementation of ca using a reactive programming formalism is described in . in described a system that mimicks several aspects of quantum mechanics ( namely , self - interference , superposition of states , and entanglement ) .the system basically relies on a cellular automaton plunged into a reactive based simulation whose instants define the global time .actually , this can not be strictly speaking considered as a physical simulation but more as a kind of `` proof of concept '' .a large number of md simulation systems exist ( for example and , which are both open - source software ) .they are implemented in fortran or c / c++ . at the implementation level , the focus is put on real - parallelism and the use of multi - processor and/or multi - core architectures . 
on the contrary , we have chosen to use the java language , and to put the focus more on expressivity than on efficiency , by using the logical parallelism of rp . we have adopted an open - source approach and integrated the 3d aspects directly in the system , by using java3d . we have shown that rp can be considered as a valuable tool for the implementation of simulations in classical physics . we have illustrated our approach by the description of an md system coded in rp . we plan to extend this md system in several directions : * introduction of several multi - scale , multi - time - step aspects , thus building a true mtms system . note that the dynamic creation / destruction possibilities offered by rp will be central for the implementation of several notions ( chemical reactions and reconstruction techniques , for example ) . * use of real - parallelism . a first study has led to the definition of a new version of sugarcubes ( called sugarcubesv5 ) in which gpu - based approaches become possible . the use of multi - processor machines should of course also be of great interest .
we consider the reactive programming ( rp ) approach to simulate physical systems . the choice of rp is motivated by the fact that rp genuinely offers logical parallelism , instantaneously broadcast events , and dynamic creation / destruction of parallel components and events . to illustrate our approach , we consider the implementation of a system of molecular dynamics , in the context of java with the java3d library for 3d visualisation . keywords : concurrency ; parallelism ; reactive programming ; physics ; molecular dynamics .
the use of the laplace transform in the schrödinger equation goes back to erwin schrödinger himself , in his treatment of the hydrogen atom ( see also ) . more recently , the bound states in a morse potential have also been obtained by means of the laplace transform technique . the idea underlying the laplace transform method for solving a differential equation is its conversion into a transformed equation that can be solved more simply . one must then carry out the inversion of the laplace transform to obtain the original function of the problem , a task that can be arduous and even unfeasible . the schrödinger equation with a potential consisting of a sum of two dirac delta functions , henceforth called the double delta potential , has been used to model the exchange forces between the two nuclei in the molecular hydrogen ion as well as in the description of the transfer of a valence nucleon during a nuclear collision . as a matter of fact , the stationary states of a particle in a double delta potential occupy the pages of many textbooks - . the possible bound states are found by locating the complex poles of the scattering amplitude or by means of a direct solution of the schrödinger equation based on the discontinuity of the first derivative of the eigenfunction , together with the continuity of the eigenfunction and its good asymptotic behavior . in this work an alternative approach to the search for bound states of the double delta potential , based on the laplace transform , is presented . with this procedure the time - independent schrödinger equation is transmuted into a first - order algebraic equation for the laplace transform of the eigenfunction . the process of inverting the laplace transform is friendly , and the solution of the bound - state problem does not require any knowledge about the discontinuity of the first derivative of the eigenfunction . the laplace - transform approach to the double delta potential , besides extending the applicability of the laplace method to quantum mechanics , provides a new bridge between the material that students typically learn in a mathematical physics course and an interesting physical problem . the laplace transform of a function of exponential order , i.e. and , converges if . the laplace transform is a linear operation , and the same holds for the inverse transform . the shifting property , where is the heaviside step function , follows directly from the definition of the laplace transform . it also follows from ( [ l1 ] ) that . using the definitions , the time - independent schrödinger equation for a particle of mass subject to a symmetric double delta potential \label{pot} can be written in the form \phi \left ( x\right ) + k^{2}\phi \left ( x\right ) = 0 , \label{eq2} where the prime ( ) denotes the derivative with respect to , is a real constant and . multiplying this equation by and integrating with respect to from to , one obtains : \label{par} where is the laplace transform of . since and are bounded at infinity , we are guaranteed the existence of as well as the vanishing of the last term of ( [ par ] ) . it follows that we have an algebraic equation for , whose solution is . the reconstruction of the eigenfunction for , carried out by inverting the laplace transform , can be readily obtained using ( [ l4])-([l3 ] ) : . obviously is not square integrable if . however , with the use of the identities one can verify that if with ( ) , and that . in this way we can write the eigenfunction for bound states , defined on the positive semiaxis , in the form \label{funco} . notwithstanding the singularity of the potential at , the eigenfunction is a continuous function . if this were not so , the schrödinger equation would involve derivatives of the dirac delta function . the continuity of at implies that . this last relation combined with ( [ qua1 ] ) results in . since the potential is even under the exchange of by ( the dirac delta function is invariant under spatial inversion ) , the extension of the eigenfunction ( [ funco ] ) to the whole axis can be expressed as a function of definite parity by imposing appropriate boundary conditions on and at the origin . because of the continuity of the eigenfunction and of its derivative at ( for ) , these conditions can be prescribed in two distinct ways : the even function obeys the homogeneous neumann condition , while the odd function obeys the homogeneous dirichlet condition . in this way equation ( [ phipil ] ) becomes an equation for the variable . therefore , for the even case we have the quantization condition . on the other hand , for the odd case we have ( where denotes the sign function ) , and the quantization condition now takes the form . given that the function is bounded between the values and , whereas does not fall within these limits when , we can infer that there is no possibility of a bound - state solution if ( repulsive potential ) . for an attractive potential ( ) , the nature of the spectrum resulting from the solutions of the transcendental equations ( [ quap ] ) and ( [ quai ] ) can be visualized in figure [ fig1 ] , which contains sketches of the right - hand and left - hand members of ( [ quap ] ) and ( [ quai ] ) . the abscissas of the intersections of and furnish the desired solutions . one can infer from figure [ fig1 ] that there is always one and only one solution in the case of a symmetric eigenfunction , but a solution in the case of an antisymmetric eigenfunction exists only when . this happens because osculates at . be that as it may , the ground state corresponds to an even eigenfunction . readers can verify that the methodology presented here can easily be extended to a potential consisting of a sum of an arbitrary number of dirac delta functions arranged symmetrically with respect to the origin . however , the case of a dirac delta potential located at the origin requires a modification of the definition of the laplace transform so as to include the origin in the domain of integration . in fact , such a transform has been used by some authors - to incorporate the conditions on at . however , its use in the case of a dirac delta potential located at the origin demands knowledge of the discontinuity of the first derivative of the eigenfunction .
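for the reader 's convenience , the standard results that the discussion above refers to can be summarized as follows ; this is a sketch in conventions chosen here ( the well strength is written \lambda , the half - separation a , and g \equiv 2m\lambda/\hbar^{2 } ) , which may differ from the notation of the original equations :

\begin{aligned}
& V ( x ) = -\lambda\left [ \delta ( x - a ) + \delta ( x + a ) \right ] , \qquad E = -\frac{\hbar^{2}k^{2}}{2 m } , \\
& \text{even states : } \quad \tanh ( ka ) = \frac{g}{k } - 1 , \qquad \text{odd states : } \quad \coth ( ka ) = \frac{g}{k } - 1 .
\end{aligned}

in these conventions the even condition always has exactly one root , while the odd condition has a root only when g a > 1 , in agreement with the discussion of figure [ fig1 ] .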
the problem of bound states in a double delta potential is revisited by means of the laplace transform method . quite differently from direct methods , no knowledge about the jump discontinuity of the first derivative of the eigenfunction is required to determine the solution . keywords : double delta , bound state , laplace transform .
testing hypotheses on the covariance matrix of the disturbances in a regression model is an important problem in econometrics and statistics , a prime example being testing the hypothesis of uncorrelatedness of the disturbances .two particularly important cases are ( i ) testing for autocorrelation in time series regressions and ( ii ) testing for spatial autocorrelation in spatial models ; for an overview see and . for testing autocorrelation in time series regressionsthe most popular test is probably the durbin - watson test .while low power of this test against highly correlated alternatives in some instances had been noted earlier by and , seems to have been the first to show that the limiting power of the durbin - watson test as autocorrelation goes to one can actually be zero .this phenomenon has become known as the _ zero - power trap_. the work by has been followed up and extended in the context of testing against autoregressive disturbances of order one in , , and ; see also and .loosely speaking , these results show that the power of the durbin - watson test ( and of a class of related tests ) typically converges to either one or zero ( depending on whether a certain observable quantity is below or above a threshold ) as the strength of autocorrelation increases , provided that there is no intercept in the regression ( in the sense that the vector of ones is not in the span of the regressor matrix ) ; in case an intercept is in the regression , the limit is typically neither zero nor one .some of these results were extended in to the case where the durbin - watson test is used , but the disturbances are fractionally integrated . in the context of spatial regression modelskramer2005 showed that the cliff - ord test can similarly be affected by the zero - power trap . set out to build a general theory for power properties of tests of a hypothesis on the covariance matrix of the disturbances in a linear regression , that would also uncover the mechanism responsible for the phenomena observed in the before - cited literature . while the intuition behind the general results in is often correct , the results themselves and/or their proofs are not .for example , the main result ( theorem 1 in ) , on which much of that paper rests , has some serious flaws : parts of the theorem are incorrect , and the proofs of the correct parts are substantially in error .in particular , the proof in is based on a `` concentration '' effect , which , however , is simply not present in the setting of the proof of theorem 1 in , as the relevant distributions `` stretch out '' rather than `` concentrate '' .this has already been observed in , where a way to circumvent the problems was suggested .mynbaev s approach , which is based on the `` stretch - out effect '' , is somewhat cumbersome in that it requires the development of tools dealing with the `` stretch - out effect '' ; furthermore , the treatment in is given only for a subclass of the tests considered in and under more restrictive distributional assumptions than in . in the present paperwe now build a theory as envisioned in at an even more general level .in particular , we allow for general invariant tests including randomized ones , we employ weaker conditions on the underlying covariance model as well as on the distributions of the disturbances ( e.g. 
, we even allow for distributions that are not absolutely continuous ) .one aspect of our theory is to show how invariance of the tests considered can be used to convert martellosio s intuition about the `` concentration '' effect into a precise mathematical argument .furthermore , advantages of this approach over the approach in are that ( i ) standard weak convergence arguments can be used ( avoiding the need for new tools to handle the `` stretch - out '' effect ) , ( ii ) more general classes of tests can be treated , and ( iii ) much weaker distributional assumptions are required . the general theory built in this paperis then applied to tests for spatial autocorrelation , which , in particular , leads to correct versions of the results in that pertain to spatial models .a further contribution of the present paper is a characterization of the situation where no invariant test can distinguish the null hypothesis of no correlation from the alternative .this characterization helps to explain , and provides a unifying framework for , phenomena observed in , , , , and .the paper is organized as follows : after laying out the framework in section [ framework ] , the general theory is developed in section [ main ] .the main results are theorems [ mt ] , [ bt ] , and [ bt2 ] . theorem[ mt ] , specialized to nonrandomized tests , shows that under appropriate assumptions the power of an invariant test converges to or as the `` boundary '' of the alternative is approached .the limit is or depending on whether a certain observable vector ( the `` concentration direction '' of the underlying covariance model ) belongs to the complement of the closure or to the interior of the rejection region of the test .this result constitutes a generalization of the correct parts of theorem 1 in ( the proofs of which in are incorrect ) .theorems [ bt ] and [ bt2 ] deal with the case where the concentration direction belongs to the boundary of the rejection region , a case excluded from theorem [ mt ] , thus providing correct versions of the incorrect part of theorem 1 in .the general results obtained in theorems [ mt ] , [ bt ] , and [ bt2 ] are then specialized in section [ t_b ] to the important class of tests based on test statistics that are ratios of quadratic forms .the relationship between test size and the zero - power trap is discussed in section [ alpha_star ] , before indistinguishability of the null and alternative hypothesis by invariant tests is characterized in section [ ip ] .extensions of the general theory are discussed in section [ gen ] ; in particular , we discuss ways of relaxing the distributional assumptions .section [ spatial ] is devoted to applying the general theory to testing for spatial correlation , while section [ ar1c ] contains an application to testing for autocorrelation in time series regression models . whereas the problems with theorem 1 in are discussed in section [ main ] as well as in appendix [ newapp ] , problems with a number of other results in are dealt with in appendix [ a2 ] .proofs can be found in appendices [ app_proofs ] and [ app_proofs2 ] .some auxiliary results are collected in appendix [ auxil ] .as in , we consider the problem of testing a hypothesis on the covariance matrix of the disturbance vector in a linear regression model . given parameters , , and , where is some prespecified positive real number , the model is is a non - stochastic matrix of rank with and .[ in case we identify , the space of real matrices , with and with . 
]the disturbance vector is assumed to be an random vector with mean zero and covariance matrix , where is a _ known _ function from to the set of symmetric and positive definite matrices .without loss of generality ( w.l.o.g . ) is assumed to be the identity matrix .[ the case can be immediately reduced to the case considered here by use of a transformation like .] we assume furthermore that , given , , and , the distribution of is completely specified ( but see remark [ rem_gen_1.5 ] in section [ gen ] for a relaxation of this assumption ) . note that this does not imply in general that the distribution of is independent of , , and ( although this will often be the case in important examples ) .in contrast to we do not impose any further assumptions on the distribution of at this stage ( see remark [ modelm10 ] below for a discussion of the additional assumptions in ) .all additional distributional assumptions needed later will be stated explicitly in the theorems . under the preceding assumptions , model ( [ linmod ] )induces a _ parametric _ family of distributions the sample space where stands for the distribution of under the given parameters , , and , and where denotes the borel -field on . the expectation operator with respect to ( w.r.t . ) shall be denoted by .if is a borel - measurable mapping from to , we shall denote by the pushforward measure of under , which is defined on . as usual , a borel - set will be said to be a -null set if it is a null set relative to every element of .[ modelm10 ] _ ( comments on assumptions in ) _ ( i ) in , p.154 , additional assumptions on the distribution of are imposed : for example , it is assumed that possesses a density which is positive everywhere on , is larger at than anywhere else , and satisfies a continuity property ( the meaning of which is not completely transparent ) .these assumptions are in general stronger than what is needed ; for example , as we shall see , some of our results even hold for discretely distributed errors .\(ii ) in it is furthermore implicitly assumed that for fixed , the distribution of ( or , equivalently , the distribution of ) does not depend on and .this becomes apparent on p. 156 , where it is claimed that the testing problem under consideration is invariant w.r.t .the group ( defined below ) in the sense of .in fact , mart10 appears to even assume implicitly that the distribution of is independent of _ all _ the parameters , , and ; cf . ,e.g. , the first line in the proof of theorem 1 on p. 182 in .we consider the problem of testing against .more precisely , the null hypothesis and the alternative hypothesis are given by the implicit understanding that always .we note that typically one would impose an additional ( identifiability ) condition such as , e.g. , for every and every in order to ensure that and are disjoint , and hence that the test problem is meaningful. holds for some , and some , there may still be additional identifiying information present in the distributions that goes beyond the information contained in first and second moments . 
] the results on the power behavior as in the present paper are valid without any such explicit identifiability condition , but note that one of the basic assumptions ( assumption [ asc ] ) underlying most of the results automatically implies that for every holds at least for in a neighborhood of .a ( randomized ) test is a borel - measurable function from the sample space to ] must hold .recall from remark range_for_kappa that in case is not the largest eigenvalue of , we know that ( in fact , neither nor its complement are -null sets if additionally holds , whereas is the complement of a non - empty -null set if ) . hence proposition [ prop_bound ] shows that then belongs to the boundary of . [ since holds for some if is not the largest eigenvalue of , this can alternatively be deduced from remark [ rem_bt2](iii ) . ] in case is the largest eigenvalue of , then is empty .the last case shows that , although theorem [ bt2 ] is geared to the case where belongs to the boundary of the rejection region , its assumptions do not rule out other cases . furthermore , the case where is an eigenvector of with eigenvalue satisfying shows that theorem [ bt2 ] also applies to cases where , although belongs to the boundary of the rejection region , the limiting rejection probabilities are not necessarily in . [ rem_accu]_(comments on the set of accumulation points ) _( i ) if one can choose in the second part of the preceding theorem then reduces to the singleton and the statement in part 2 simplifies accordingly .a similar remark applies to the first part of the theorem in case and/or can be chosen .\(ii ) it is not difficult to see that the accumulation points as given in ( [ even ] ) and ( [ odd ] ) depend continuously on and .[ this follows from the portmanteau theorem observing that as well as depend continuously on and , respectively , and that both expressions are nonzero almost surely as shown in the proof of theorem [ bt2 ] . ]since as well as are compact , the question of whether or not the set of accumulation points is bounded away from ( or , respectively ) then just reduces to the question as to whether every accumulation point is larger than ( smaller than , respectively ). the latter question can often easily be answered by examining the explicit expressions provided by ( [ even ] ) and ( [ odd ] ) . for an example see the remark immediately below .\(iii ) suppose in the second part of the theorem .observe that then with by remark [ rem_bt2](ii ) .hence , in case and are not collinear , the accumulation point given by ( [ odd2 ] ) is in the open interval .if and are collinear , then the accumulation point is either or .we now illustrate the results obtained so far by applying them to tests based on the statistic defined in ( [ t_quadratic ] ) .we note that , under regularity conditions ( including appropriate distributional assumptions ) and excluding degenerate cases , point - optimal invariant tests and locally best invariant tests are of this form with and , respectively , with denoting the derivative at ( ensured to exist under the aforementioned regularity conditions ) , see , e.g. , .- invariant tests .as they are also -invariant , they are a fortiori also point - optimal ( locally best ) tests in the class of -invariant tests . 
]recall that under the assumptions in the vector given by assumption [ asc ] corresponds to the eigenvector in mt1 , possibly up to a sign change .for that reason we impose assumption [ asc ] in all of the three corollaries that follow , although this assumption would not be needed for the second one of the corollaries ( but note that then would be determined by assumption [ ascii ] only ) . furthermore , recall from remark [ range_for_kappa ] that occurs if and only if ( the interval being non - empty if and only if ) .we shall in the following corollaries hence always assume that is in that range and thus shall exclude the trivial cases where or from the formulation of the corollaries .the first corollary is based on theorem [ mt ] .recall that the conditions in this corollary are weaker than the conditions used in mt1 ( cf .remark rmt ) and that sufficient conditions for the high - level assumption [ asd ] have been given in proposition [ aspexii ] ( under which the rejection probabilities actually do neither depend on nor ) .[ lem_illust_1]suppose assumptions [ asc ] and [ asd ] are satisfied .assume that with .then we have : 1 . ( i.e. , ) implies for every and .entails in view of ( [ t_quadratic ] ) and . ] 2 . and ( i.e. , ) implies for every and .it is worth pointing out here that the second case , i.e. , the zero - power trap , can occur even for point - optimal invariant or locally best invariant tests as has been documented in the literature cited in the introduction .the next two corollaries now deal with the case where belongs to the boundary of the rejection region .they are based on theorems [ bt ] and [ bt2 ] , respectively . for simplicity of presentationwe concentrate only on the case of elliptically symmetric families .we remind the reader that in the two subsequent corollaries the rejection probabilities actually neither depend on nor , i.e. , holds .[ lem_illust_2]suppose assumptions [ asc ] and [ ascii ] are satisfied with the same vector .is the same in both assumptions , although this does not impose a restriction here .this is so because of remark [ rascii ] and since must hold in this corollary : suppose would hold. then would follow in view of and the assumption .but this would be in conflict with . ] furthermore , assume that is an elliptically symmetric family ( i.e. , assumption [ asdr ] holds with a spherically distributed ) and .assume that with .suppose holds .then exists and equals where is a multivariate gaussian random vector with mean zero and covariance matrix .furthermore , the limit satisfies , whereas it equals in case .in case the rejection region is the complement of a -null set . as discussed in remark [ rbt](iv ) , we then even have for every , , and ( although we do not require to possess a density ) . ]the next result covers the case where .recall from proposition [ prop_bound ] and remark [ range_for_kappa ] that this is equivalent to and with .note that ] for where here .additionally we assume that the distribution of is a fixed distribution independent of , , and .is a random vector whose distribution is independent of , , and , cf . , p. 155 . 
as discussed in remark [ modelm10](ii ) ,it is also implicitly assumed in that the distribution of is independent of , , and .note that the latter random vector is connected to via multiplication by an orthogonal matrix , say .if is symmetric , holds and hence both implicit assumptions are equivalent .however , for nonsymmetric , these two implicit assumptions will typically be compatible only if the distribution of is spherically symmetric . ] _ the above are the maintained assumptions for the sem considered in this section . _ the parametric family of probability measures induced by ( linmod ) and ( [ sar ] ) under the maintained assumptions will be denoted by .if is an ( elementwise ) nonnegative and irreducible matrix with zero elements on the main diagonal , a frequent assumption for spatial weights matrices , then the above assumptions on are satisfied by the perron - frobenius theorem and is then the perron - frobenius root of ( see , e.g. , , theorem 8.4.4 , p. 508 ) . in this caseone can always choose to be entrywise positive .the next lemma shows identifiability of the parameters in the model , identifiability of being trivial .an immediate consequence is that the two subsets of corresponding to the null hypothesis and alternative hypothesis are disjoint. and [ sarl ] actually hold without the additional assumption on the distribution of made above . ][ isar1 ] if holds for and ( ) then and .we next verify that the spatial error model satisfies assumptions [ asc ] , [ asdr ] , and [ ascii ] , and that it satisfies assumption [ asd ] under a mild condition on the distribution of .the first claim in lemma [ sarl ] also appears in , lemma 3.3 .[ sarl ] satisfies assumption [ asc ] with as well as assumption [ ascii ] with , , , and .[ sarl_2 ] satisfies assumption [ asdr ] with and a random vector distributed like .furthermore , if the distribution of is absolutely continuous w.r.t . , or , more generally , if and the distribution of is absolutely continuous w.r.t .the uniform distribution on the unit sphere , then satisfies assumption [ asd ] . given the preceding two lemmata the main results of section [ main ] , i.e. ,theorems [ mt ] , [ bt ] , and [ bt2 ] , can be immediately applied to obtain results for the spatial error model . rather than spelling out these general results, we provide the following two corollaries for the purpose of illustration and thus do not strive for the weakest conditions .these corollaries provide , in particular , correct versions of the claims in corollary 1 in .recall that by the assumed -invariance the rejection probabilities in the subsequent results do in fact neither depend on nor , cf .remark [ invariance ] .[ cor1new ] given the maintained assumptions for the sem suppose furthermore that either ( i ) the distribution of possesses a -density that is continuous -almost everywhere and that is positive on an open neighborhood of the origin except possibly for a -null set , or ( ii ) the distribution of is spherically symmetric with no atom at the origin .then for every -invariant test the following statements hold : 1 .if is continuous at then for every , , we have for , .2 . suppose satisfies for every ( which is certainly the case if ) .then for every , , we have for , .the limit is strictly between and provided neither -almost everywhere nor -almost everywhere holds .[ the matrix is defined in lemma [ sarl ] . ]if is the indicator function of a critical region , we have for every , , and as , : * implies . 
* implies .* implies .the limiting probability is strictly between and provided neither nor its complement are -null sets .4 . if is the indicator function of the critical region given by ( [ quadratic ] ) with satisfying and with , then we have for every , , and as , : * implies .entails in view of ( [ t_quadratic ] ) and . ]* and implies .* implies .the limiting probability is strictly between and provided , while it is for .if ( ) -almost everywhere in part 2 or in the last claim of part 3 of the preceding corollary , then ( or ) holds for all , , and , and hence the same holds a fortiori for the accumulation points , see remark [ rbt](iv ) .parts 3 and 4 of the preceding corollary are silent on the case ( recall that holds provided ) .the next corollary provides such a result for the important critical regions under an elliptical symmetry assumption on and under the assumption of a symmetric weights matrix .more general results without the symmetry assumption on , without the elliptical symmetry assumption , and for more general classes of tests can of course be obtained from theorem [ bt2 ] .[ cor2new ] given the maintained assumptions for the sem suppose furthermore that the distribution of is spherically symmetric with no atom at the origin and that is symmetric .let the critical region be given by ( [ quadratic ] ) .assume ( i.e. , and with hold ) . 1 .suppose is an eigenvector of with eigenvalue .then and , , and for every , , where is a multivariate gaussian random vector with mean zero and covariance matrix .the limit in ( [ li ] ) is strictly between and if , whereas it equals in case .2 . suppose is not an eigenvector of .then for , , and for every , .[ kraemer]_(some comments on ) _ ( i ) kramer2005 considers `` test statistics '' of the form for general matrices and . however , this ratio will then in general not be observable and thus will not be a test statistic .fortunately , the problem disappears in the leading cases where and are such that .the same problem also appears in and .\(ii ) the proof of the last claim in theorem 1 of is in error , as contrary to the claim in the quantity need not be strictly positive .this has already be noted by mart12 , footnote 5 .\(iii ) theorem 2 in is not a theorem in the mathematical sense , as it is not made precise what it means that the limiting power `` is in general strictly between and '' .as discussed earlier , point - optimal invariant and locally best invariant tests are in general not immune to the zero - power trap .the next result , which is a correct version of proposition 1 in , now provides a necessary and sufficient condition for the cliff - ord test ( i.e. , ) and a point - optimal invariant test ( i.e. , ) in a pure sar - model ( i.e. , ) to have limiting power equal to for every choice of the critical value ( excluding trivial cases ) . for a discussion of the problems with proposition 1 in see appendix [ prop1 ] . in the subsequent propositionwe always have as a consequence of the assumptions .we also note that the condition in this proposition precisely corresponds to the condition that the test has size strictly between zero and one , cf .remark [ rem_d3new ] .furthermore , observe that while the statement that the limiting power ( as ) equals for every is in general clearly stronger than the statement that , proposition [ d3new ] shows that these statements are in fact equivalent in the context of the following result . 
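for orientation , the cliff - ord statistic appearing in the next proposition is of the ratio - of - quadratic - forms type discussed in section [ t_b ] and is computed from the ols residuals ; a common form is sketched below ( the symmetrization of the weights matrix and the omission of normalization constants are choices made here for illustration only ) :

\hat{u } = \big ( I_{n } - X ( X'X)^{-1}X ' \big ) y , \qquad
I_{W } = \frac{\hat{u}'\,\tfrac{1}{2 } ( W + W ' ) \,\hat{u}}{\hat{u}'\hat{u } } .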
finally , recall that in view of invariance and the maintained assumptions of this section the rejection probabilities do neither depend on nor .[ prop1new ] given the maintained assumptions for the sem , suppose that the distribution of is absolutely continuous w.r.t . with a density that is positive on an open neighborhood of the origin except possibly for a -null set .furthermore , assume that .let ( i ) or ( ii ) for some .consider the rejection region given by ( quadratic ) with .then for every and we have in both cases ( i ) and ( ii ) : for every as , , if and only if .in particular , if is ( elementwise ) nonnegative and irreducible , then , for both choices of , is equivalent to being an eigenvector of .the next proposition is a correct version of lemma e.4 in ; see appendix [ prop1 ] for a discussion of the shortcomings of that lemma .it provides conditions under which the cliff - ord test and point - optimal invariant tests in a sem with exogenous variables are not subject to the zero - power trap and even have limiting power equal to .[ e4new ] given the maintained assumptions for the sem , suppose that the distribution of is absolutely continuous w.r.t . with a density that is positive on an open neighborhood of the origin except possibly for a -null set .suppose further that , that is independent of , and that .let ( i ) and suppose that , or ( ii ) for some .consider the rejection region given by ( quadratic ) .then for every and we have in both cases ( i ) and ( ii ) : for every as , . [ re4 ] ( i ) the condition that is independent of is easily seen to be satisfied , e.g. , if is symmetric and if ( and thus , in particular , if ) .\(ii ) if in the preceding proposition is symmetric and holds , then the condition in case is automatically satisfied .this can be seen as follows : since we can represent as for some with . on the one hand ,the largest eigenvalue of , as the maximum of over all normalized vectors , is therefore not less than . on the other hand , noting that , the maximum of over all normalized vectors is not larger than the maximum of over all normalized vectors , which shows that the largest eigenvalue of is equal to . because as the largest eigenvalue of has algebraic multiplicity by the assumptions in this section and since is symmetric , we see that the algebraic multiplicity of as an eigenvalue of must also be .but then follows since has been assumed in the proposition .\(iii ) if or if , but holds for , then the test statistic degenerates to a constant ( and the proposition trivially holds as is then empty ) .let be as in section [ framework ] , let be as in section [ sem ] , and consider the spatial lag model ( slm ) of the form , , and , and where is a mean zero random vector with covariance matrix . as in section [ sem ] , we assume that the distribution of is a fixed distribution independent of , , and . _ the above are the maintained assumptions for the slm considered in this section ._ because the slm and the sem have the same covariance structure , a simple consequence of lemma [ isar1 ] is that also the parameters of the slm are identifiable . for we can rewrite the above equation as , in case the spatial lag model of order onecoincides with the sar(1 ) model . for , however , the slm does _ not _ fit into the general framework of section [ main ] of the present paper .in particular , while the problem of testing versus is still invariant under the group , it is typically no longer invariant under the larger group . 
nevertheless we can establish the following result which is similar in spirit to theorem [ mt ] . in the following result, denotes the distribution of given by ( [ sl2 ] ) under the parameters , , and and denotes the corresponding expectation operator .[ theorem_splm]given the maintained assumptions for the slm , assume furthermore that the distribution of does not put positive mass on a proper affine subspace of .let be a -invariant test .if is continuous at then for every and we have for , .in particular , if is the indicator function of a critical region we have for every , , and as , : * implies . * implies .the above result provides a correct version of the first and third claim in proposition 2 in , the proofs of which in suffer from the same problems as the proofs of the corresponding parts of mt1 .the second claim in proposition 2 in is incorrect for the same reasons as is the second part of mt1 .while theorems [ bt ] and [ bt2 ] provide correct versions of the second claim of mt1 , these results can not directly be used in the context of the slm as this model does not fit into the framework of section [ main ] as noted above .we do not investigate this issue any further here . for every ) invariance w.r.t . can again become an appropriate assumption on a test statistic and a version of theorem [ bt ] can then be produced .we abstain from pursuing this any further . ]we close our discussion of spatial regression models by applying the results on indistinguishability developed in section [ ip ] to these models .it turns out that a number of results in ( namely all parts of proposition 3 , 4 , and 5 that are based on degeneracy of the test statistic ) as well as the first part of the theorem in are consequences of an identification problem in the distribution of the maximal invariant statistic ( more precisely , an identification problem in the `` reduced '' experiment ) .theorem [ i d ] and corollary [ idd ] thus provide a simple and systematic way to recognize when this identification problem occurs .consider the sem with the maintained assumptions of section [ sem ] and additionally assume _ for this paragraph only _ that the distribution of the error is spherically symmetric .as shown in section [ ip ] , the condition for the identification problem in the reduced experiment to occur , entailing a constant power function for any -invariant ( even for any -invariant ) test , is then that is a multiple of for every . as can be seen from lemma auxid in appendix [ app_proofs ] , a sufficient condition for thisis that is contained in an eigenspace of for every , a condition that appears in proposition 3 of , which is a statement about point - optimal invariant and locally best invariant tests .thus the corresponding part of this proposition is an immediate consequence of corollary [ idd ] ; moreover , and in contrast to this proposition in mart10 , it now follows that this result holds more generally for _ any _ -invariant ( even any -invariant ) test and that the gaussianity assumption in this proposition can be weakened to elliptical symmetry . in a similar way , propositions 4 and 5 in make use of the conditions that is symmetric and is contained in an eigenspace of . in the subsequent lemma we show that the condition that is contained in an eigenspace of is already sufficient for to be a multiple of for every . 
thus the subsequent lemma combined with corollary [ idd ] establishes , in particular , the respective parts of propositions 4 and 5 in . the preceding comments are of some importance as there are several problems with propositions 3 , 4 , and 5 in , which are discussed in appendix [ p3 ] . [ semid ] let be a weights matrix as in section [ sem ] and let be an matrix ( ) such that is satisfied for some eigenvalue of . then for every . in the following example we show that the first half of the theorem in martellosio ( 2011 ) is a special case of remark [ idd2 ] following corollary [ idd ] combined with the preceding lemma . ( i ) consider the sem with the maintained assumptions of section [ sem ] . suppose that is an ( ) equal weights matrix , i.e. , is constant for and zero else , and that contains the intercept . without loss of generality we assume for . clearly , is symmetric and has the eigenvalues and . the eigenspace corresponding to is spanned by the eigenvector and the other eigenspace consists of all vectors orthogonal to . since every element of is orthogonal to , we have , by lemma [ semid ] together with remark [ idd2 ] , that the power function of every -invariant test must be constant . ( ii ) consider next the slm with the maintained assumptions of section [ splm ] with the same weights matrix and the same design matrix as in ( i ) . observe that can be written as , a matrix which obviously maps into as the intercept has been assumed to be an element of . consequently , also maps into for every . because is nonsingular for in that range , it follows that this mapping is onto and furthermore that also maps into in a bijective way . as a consequence , the mean of , which equals , is an element , say , of for every and . [ for an equal weights matrix maps into itself ; however , does not make sense as it is based on an incorrect expression for , which is incorrectly given as . ] let be any -invariant test . then by -invariance we have coincides with the power function in a sem as in ( i ) above and thus is independent of , , and , showing that the power function of any -invariant test in the slm considered here is constant . in this section we briefly comment on the case where the error vector in ( [ linmod ] ) has covariance matrix for with the -th element of given by ( case i ) or ( case ii ) . clearly , case i corresponds to testing against positive autocorrelation , while case ii corresponds to testing against negative autocorrelation . more precisely , in both cases we assume that is distributed as , where has mean zero , has covariance matrix , and has a fixed distribution that is spherically symmetric ( and hence does not depend on any parameters ) ; in particular , assumption [ asdr ] is maintained . furthermore , assume that . _ we shall refer to these assumptions as the maintained assumptions of this section . _ this framework clearly covers the case where the vector is a segment of a gaussian stationary autoregressive process of order .
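the spectral facts about the equal weights matrix used in part ( i ) of the example are easy to verify numerically . the following sketch is our own illustration ; the particular normalization of the matrix ( off - diagonal entries equal to 1/(n-1) ) is an assumption on our part .

import numpy as np

n = 5
W = (np.ones((n, n)) - np.eye(n)) / (n - 1)     # equal weights matrix with rows summing to one
print(np.round(np.linalg.eigvalsh(W), 6))       # -1/(n-1) with multiplicity n-1, and 1

iota = np.ones(n)                                # the intercept column
print(np.allclose(W @ iota, iota))               # iota spans the eigenspace for the eigenvalue 1

x = np.array([1.0, -1.0, 0.0, 0.0, 0.0])         # an arbitrary vector orthogonal to iota
print(np.allclose(W @ x, -x / (n - 1)))          # every such vector is an eigenvector for -1/(n-1)

this matches the statement above that all regressors orthogonal to the intercept fall into a single eigenspace , which is what produces the constant power function via the identification problem .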
in case i it is now readily verified that assumption [ asc ] holds with , while in case ii this assumption is satisfied with . the validity of assumption [ asd ] then follows from proposition [ aspexii ] . furthermore , assumption [ ascii ] ( more precisely , the equivalent condition given in lemma [ ar1 ] ) has been shown to be satisfied in case i as well as in case ii in lemma g.1 of , where the form of the matrix ( denoted by in that reference ) is also given ; this lemma also establishes condition ( [ offdiag ] ) in view of the fact that obviously for ( in case i as well as in case ii ) . we thus immediately get the following result as a special case of the results in section [ main ] : [ ts ] suppose the maintained assumptions hold . let denote in case i while it denotes in case ii . 1 . then every -invariant test satisfies the conclusions 1.-4 . of corollary [ cor1new ] subject to replacing by , by , and where now represents a square - root of the matrix given in lemma g.1 of . 2 . let the critical region be given by ( quadratic ) . assume ( i.e. , and with hold ) . then : ( i ) suppose is an eigenvector of with eigenvalue . then and , , and for every , , where is a multivariate gaussian random vector with mean zero and covariance matrix . the limit in ( [ li_1 ] ) is strictly between and if , whereas it equals in case . ( ii ) suppose is not an eigenvector of . then for , , and for every , . the proof of the corollary is similar to the proofs of corollaries [ cor1new ] and [ cor2new ] and consists of a straightforward application of theorems [ mt ] , [ bt ] , and corollary [ lem_illust_3 ] , noting that condition ( [ offdiag ] ) has been verified in lemma g.1 of . at the expense of arriving at a more complicated result , some of the maintained assumptions like the spherical symmetry assumption could be weakened , while nevertheless allowing the application of the results in section [ main ] . in the literature often the alternative parameterization for the covariance matrix of is used , which just amounts to parametrizing as . in view of remark [ invariance ] and -invariance of the tests considered , such an alternative reparameterization has no effect on the results in this section at all . even after specializing to the gaussian case , the preceding corollary provides a substantial generalization of a number of results in the literature in that ( i ) it allows for general -invariant tests rather than discussing some specific tests , and ( ii ) it provides explicit expressions for the limiting power also in the case where the limit is neither zero nor one : appears to have been the first to notice that the zero - power trap can arise for the durbin - watson test in that he showed that the limiting power ( as the autocorrelation tends to ) of the durbin - watson test can be zero when one considers a linear regression model without an intercept and with the errors following a gaussian autoregressive process of order one . more precisely , he established that in this model the limiting power is zero ( is one ) if in our notation the vector is outside the closure ( is inside the interior ) of the rejection region of the durbin - watson test .
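the rank - one limits that underlie cases i and ii can likewise be inspected numerically . the sketch below is our own illustration ; it builds the autoregressive covariance matrix with entries rho^|i-j| and checks that , after normalization by its largest eigenvalue , it approaches the outer product of the normalized constant vector as rho tends to 1 and of the normalized alternating - sign vector as rho tends to -1 .

import numpy as np

def ar1_cov(rho, n):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])    # (i, j) entry equals rho^|i-j|

n = 6
iota = np.ones(n) / np.sqrt(n)                            # candidate limit direction in case i
alt = np.array([(-1.0) ** i for i in range(n)]) / np.sqrt(n)  # candidate limit direction in case ii

for rho in [0.9, 0.99, 0.999]:
    S1, S2 = ar1_cov(rho, n), ar1_cov(-rho, n)
    d1 = np.linalg.norm(S1 / np.linalg.eigvalsh(S1).max() - np.outer(iota, iota))
    d2 = np.linalg.norm(S2 / np.linalg.eigvalsh(S2).max() - np.outer(alt, alt))
    print(rho, round(d1, 4), round(d2, 4))

both distances shrink towards zero ; whether these normalized vectors coincide exactly with the quantities appearing in the corollary depends on notational conventions that we do not reproduce here .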
based on numerical results , he also noted that the zero - power trap does not seem to arise in models that contain an intercept .subsequently , showed that indeed in models with an intercept the limiting power of the durbin - watson test ( except in degenerate cases ) is always strictly between zero and one .the results in and just mentioned are extended in from the durbin - watson test to tests that can be expressed as ratios of quadratic forms , see also .(i ) . ][ we note that and kramerzeisel1990 additionally also consider the case where the autocorrelation tends to . ]all these results can be easily read off from part 1 of our corollary [ ts ] .the analysis in , , and always excludes a particular case , which is treated in for the durbin - watson test .this result is again easily seen to be a special case of part 2 of our corollary [ ts ] .furthermore , shows that for any sample size and number of regressors a design matrix exists such that zero - power trap arises . for a systematic investigation of the set of regressorsfor which the zero - power trap occurs see .as already mentioned in section [ main ] , the first and third claim in mt1 are correct , but the proof of these statements as given in is not ( cf . also ) .to explain the mistake , we assume for simplicity that is gaussian .with this additional assumption the model satisfies all the requirements imposed in , page 154 ( cf .remark [ modelm10 ] above ) .the proof of mt1 in is given for arbitrary and .set for simplicity . in the proof of mt1it is argued that the density of tends , as , to a degenerate `` density '' which is supported on a set that simplifies to the eigenspace of corresponding to its smallest eigenvalue in the case considered here .however , for ,the density of is we have in view of the assumption .furthermore , ( even uniformly on compact subsets ) .therefore the density converges to zero everywhere ( and even uniformly on compact subsets ) .in particular , it does not tend to a degenerate `` density '' supported on the eigenspace of corresponding to its smallest eigenvalue in any suitable way .note that does also not converge weakly as as the sequence for any is obviously not tight .this shows that the proof in is incorrect .furthermore , the concentration effect discussed after the theorem in simply does not occur in the way as claimed .in fact , the direct opposite happens : the distributions stretch out , i.e. , all of the mass `` escapes to infinity '' .we next turn to the second claim in mt1 and show by two simple counterexamples that this claim is not correct . or , a case ruled out in the main body of . ]the first example below is based on the following simple observation : suppose , the testing problem satisfies all assumptions of mt1 , and we can find an invariant rejection region of size , , with .the ( correct ) first claim of mt1 then implies that the limiting power of is .now define and observe that is again invariant and that and have the same rejection probabilities as they differ only by a -null set and the family is dominated by under the assumptions in .now holds , but the limiting power of is obviously .a concrete counterexample is as follows : [ ex1 ] assume that the elements of the family are gaussian , i.e. , satisfies assumption [ asdr ] with a standard normally distributed vector ( and without loss of generality we may set ) . for simplicitywe consider the case without regressors ( i.e. 
, we assume and thus holds by our conventions ) .let every where is normalized .clearly , is symmetric and positive definite for and holds .observe that , and thus , which has rank .the family hence clearly satisfies all the assumptions for mt1 imposed in , cf .remark [ modelm10 ] in section [ framework ] above .now fix an arbitrary and choose a rejection region that is ( i ) invariant w.r.t . , ( ii ) satisfies ( and thus for every by -invariance ) , and ( iii ) . [ for example , choose equal to a spherical cap on the unit sphere centered at such that has measure under the uniform distribution on , and set . ] from remark [ rmt](i ) we obtain that the limiting power of is . [ the assumptions of theorem [ mt ] are obviously satisfied in view of lemma [ convcm ] and proposition [ aspexii ] . ]we now define a new rejection region .clearly , is also -invariant , and and have the same rejection probabilities since is an -null set ( as we have assumed ) and the elements of are absolutely continuous w.r.t . . however , now holds , showing that the second claim in mt1 is incorrect . a similar example , starting with the rejection region , where is as before , and then passing to provides an example where holds , but the limiting power is zero . the argument underlying this counterexample works more generally for any covariance model that satisfies the assumptions of theorem [ mt ] , and thus , in particular , for spatial models .while the rejection region constructed in the preceding example certainly provides a counterexample to the second claim in mt1 , one could argue that it is somewhat artificial since can be modified by a -null set into the rejection region which does not have on its boundary .one could therefore ask if there is a more genuine counterexample to the second claim of mt1 in the sense that the rejection region in such a counterexample can not be modified by a -null set in such a way that the modified region does not have on its boundary .this is indeed the case as shown by the subsequent example .[ ex2 ] consider the same model as in the previous example , except that we now assume and is given by where with a strictly monotone and continuous function on satisfying and .again is symmetric and positive definite for and holds . observe that holds and thus where .obviously , has rank .again the family satisfies all the assumptions for mt1 imposed in .consider the rejection region which is -invariant .the rejection probability under the null is always equal to .furthermore , holds ( and obviously there is no modification by a -null set such that ) .we next show that converges to for under a suitable choice of the function : by -invariance , denotes the gaussian measure on with mean zero and variance covariance matrix .now for fixed , , we have that because of strict monotonicity of .furthermore, converges to weakly , and because is now obvious that if converges to sufficiently slowly , we can also achieve that ( [ help ] ) converges to as .furthermore , we also conclude that the invariant rejection region , which also has rejection probability under the null , provides an example where holds , but the limiting power is zero . similar counterexamples to the second claim in mt1 can also be constructed when regressors are present ( except if ).the case is somewhat trivial as we now explain : if , every -invariant test is -almost everywhere constant .[ to see this observe that is a -null set and that every element of is of the form for a fixed vector and hence holds whenever , i.e. 
, whenever . additionally , note that is constant on . ] consequently , has a constant power function if ( i ) the family of probability measures in ( [ family ] ) is absolutely continuous w.r.t . , or if ( ii ) this family is an elliptically symmetric family ( to see this in case use the argument given in remark [ rbt](iv ) ; in case combine this argument with remark [ rem_gen_1](vi ) ) . in particular , if is non - randomized , it is then a trivial test in that its size and power are either both zero or one , provided ( i ) holds or ( ii ) holds with . ] in this section we comment on problems in some results in that have not been discussed so far . we also discuss if and how these problems can be fixed . here we discuss problems with lemmata d.2 and d.3 in martellosio ( 2010 ) , which are phrased in a spatial error model context . correct versions of these lemmata , which furthermore are not restricted to spatial regression models , have been given in section [ alpha_star ] above . both lemmata in concern the quantity , which is defined on p. 165 of as follows : `` for an exact invariant test of against in a sar(1) model , is the infimum of the set of values of ] , where denotes some multivariate distribution with mean and variance matrix . when an invariant critical region for testing against is in form ( 9 ) [ i.e. , is of the form for some univariate test statistic ] , and is such that is not contained in its boundary , . '' the statement of this lemma as well as its proof are problematic for the following reasons : 1 . the lemma makes a statement about , which is a quantity that depends not only on one specific critical region , but on a _ family _ of critical regions corresponding to a _ family _ of critical values against which the test statistic is compared . the critical region usually depends on and so does its boundary ( cf . proposition [ prop_bound ] ) . therefore , the assumption `` ... is not contained in its [ the invariant critical region s ] boundary ...
'' has little meaning in this context as it is not clear to which one of the many rejection regions the statement refers . [ alternatively , if one interprets the statement of the lemma as requiring not to be contained in the boundary of _ every _ rejection region in the family considered , this leads to a condition that typically will never be satisfied . ] 2 . the proof of the lemma is based on corollary 1 in , the proof of which is incorrect as it is based on the incorrect theorem 1 of . the proof implicitly uses a continuity assumption on the cumulative distribution function of the test statistic under the null at the point , which is not satisfied in general . next we turn to lemma d.3 in , which reads : `` consider a test that , in the context of a spatial error model with symmetric , rejects for small values of a statistic , where is an known symmetric matrix that does not depend on , and is as defined in section 2.2 . provided that , if and only if , and if and only if . '' here refers to the size of the test , is given by for some fixed , and is not explicitly defined , but presumably denotes a rejection region corresponding to the test statistic . [ although the test statistic is not defined whenever , this does not pose a severe problem here since considers only absolutely continuous distributions and since he assumes ; cf . remark [ effect_on_boundary ] . note furthermore that the factor is irrelevant here . ] furthermore , ( ) denotes the eigenspace corresponding to the smallest ( largest ) eigenvalue of , and in stands for . the statement of the lemma and its content are inappropriate for the following reasons : 1 . the proof of this lemma is based on lemma d.2 of , which is invalid as discussed above . 2 .
again , as in the statement of lemma d.2 of , the author assumes that ` ... ... ' , which is not meaningful , as the boundary typically depends on the critical value . 3 . the above lemma in assumes to be symmetric ( although this is actually not used in the proof ) . nevertheless , it is later applied to nonsymmetric weights matrices in the proof of proposition 1 in . as a point of interest we note that naively applying lemma d.3 in martellosio ( 2010 ) to the case where is a multiple of the identity matrix leads to the contradictory statement . however , in case is a multiple of , the test statistic degenerates , and thus the size of the test is or , a case that is ruled out in from the very beginning . proposition 1 in considers the pure sar(1) model , i.e. , is assumed . this proposition reads as follows : `` consider testing against in a pure sar(1) model . the limiting power of the cliff - ord test [ cf . ( [ cot ] ) below ] or of a test ( 8 ) [ cf . ( [ lrt ] ) below ] is irrespective of [ the size of the test ] if and only if is an eigenvector of . '' we note that , while not explicit in the above statement , it is understood in that is assumed . similarly , the case is not ruled out explicitly in the statement of the proposition , but it seems to be implicitly understood in that holds ( note that in case the test statistics degenerate and therefore the associated tests trivially have size equal to or , depending on the choice of the critical value ) . the test defined in equation ( 8 ) of rejects for small values of , where is specified by the user . the argument in the proof of the proposition in for this class of tests is incorrect for the following reasons : 1 . the proof is based on lemma d.3 in , which is incorrect as discussed in appendix [ a1 ] . 2 . even if lemma d.3 in were correct and could be used , this lemma would only deliver the result , which does _ not _ imply , without a further argument , that the limiting power is equal to one for every size . by definition of , only implies that the limiting power is _ nonzero _ for every size . for the case of the cliff - ord test , i.e.
, the test rejecting for small values of , argues that this can be reduced to the previously considered case , the proof of which is flawed as just shown . apart from this , the reduction argument , which we now quote , has its own problems : `` ... by lemma d.3 with [ which equals ] , in order to prove that the limiting power of test ( 8 ) [ cf . ( [ lrt ] ) above ] is for any [ the size of the test ] , we need to show that is necessary and sufficient for . clearly , if this holds for any , it holds for too , establishing also the part of the proposition regarding the cliff - ord test . ... '' the problem here is that it is less than clear what the precise mathematical `` approximation '' argument is . if we interpret it as deriving limiting power equal to for the cliff - ord test from the corresponding result for tests of the form ( 8 ) and the fact that the cliff - ord test emerges as a limit of these tests for , then this involves an interchange of two limiting operations , namely and , for which no justification is provided . alternatively , one could try to interpret the `` approximation '' argument as an argument that tries to derive from for every ; of course , such an argument would need some justification which , however , is not provided . we note that this argument could perhaps be saved by using the arguments we provide in the proof of proposition [ e4new ] , but the proof of our correct version of proposition 1 in , i.e. , proposition [ prop1new ] in section [ sem ] , is more direct and does not need such reasoning . furthermore , note that the proof of proposition [ prop1new ] is based on our proposition [ d3new ] , which is a correct version of lemma d.3 in and which delivers not only the conclusion , but the stronger conclusion that the limiting power is indeed equal to for every size in . we now turn to a discussion of lemma e.4 , which is again a statement about the cliff - ord test and tests of the form ( 8 ) in , but now in the context of the sem ( i.e.
, is possible ) .the statement and the proof of the lemma suffer from the following shortcomings ( again lemma e.4 implicitly assumes that holds ) : 1 .the proof of the lemma is based on lemma d.3 in , which is incorrect ( cf .the discussion in appendix [ a1 ] ) .the proof uses non - rigorous arguments such as arguments involving a ` limiting matrix ' with an infinite eigenvalue .additionally , continuity of the dependence of eigenspaces on the underlying matrix is used without providing the necessary justification .3 . for the case of the cliff - ord test the same unjustified reduction argument as in the proof of proposition 1 of used , cf . the preceding discussion . for a correct version of lemma e.4 of proposition e4new in section [ sem ] above . as a point of interestwe furthermore note that cases where the test statistics become degenerate ( e.g. , the case ) are not ruled out explicitly in lemma e.4 in ; in these cases ( and not ) holds .the proof of the part of proposition 3 of regarding point - optimal invariant tests seems to be correct except for the case where is contained in one of the eigenspaces of . in this casethe test statistic of the form ( 8) in is degenerate ( see section [ idsp ] above ) and does not give the point - optimal invariant test ( except in the trivial case where the size is or , a case always excluded in ) . however , this problem is easily fixed by observing that the point - optimal invariant test in this case is given by the randomized test , which is trivially unbiased .two minor issues in the proof are as follows : ( i ) lemma e.3 can only be applied as long as for every . fortunately , the complement of this event is a null - set allowing the argument to go through .( ii ) the expression ` stochastically larger ' in the paragraph following ( e.4 ) should read ` stochastically smaller ' .we also note that the assumption of gaussianity can easily be relaxed to elliptical symmetry in view of -invariance of the tests considered .more importantly , the proof of the part of proposition 3 of concerning locally best invariant tests is highly deficient for at least two reasons : first , it is claimed that locally best invariant tests are of the form ( 7 ) in with .while this is correct under regularity conditions ( including a differentiability assumption on ) , such conditions are , however , missing in proposition 3 of . also , the case where is contained in one of the eigenspaces of has to be treated separately , as then the locally best invariant test is given by the randomized test .second , the proof uses once more an unjustified approximation argument in an attempt to reduce the case of locally best invariant tests to the case of point - optimal invariant tests .it is not clear what the precise nature of the approximation argument is .furthermore , even if the approximation argument could be somehow repaired to deliver unbiasedness of locally best invariant tests , it is less than clear that strict unbiasedness could be obtained this way as strict inequalities are not preserved by limiting operations. we next turn to the part of proposition 4 of regarding point - optimal invariant tests . holds .] as in the case of proposition 3 discussed above , the case where is contained in one of the eigenspaces of has to be treated separately , and gaussianity can be relaxed to elliptical symmetry .we note that the clause ` if and only if ' in the last but one line of p. 185 of mart10 should read ` if ' . 
we also note that the verification of the first displayed inequality on p. 186 of could be shortened ( using lemma e.3 ( more precisely , the more general result referred to in the proof of this lemma ) with , , and to conclude that the first display on p. 186 holds almost surely , and furthermore that it holds almost surely with equality if and only if all or all are equal , which is equivalent to all for being equal ) . again , the proof of the part of proposition 4 of concerning locally best invariant tests is deficient as it is based on the same unjustified approximation argument mentioned before .we next turn to proposition 5 of . in the last of the three cases considered in this proposition ,both test statistics are degenerate and hence the power functions are trivially constant equal to or ( a case ruled out in ) .more importantly , the proof of proposition 5 is severely flawed for several reasons , of which we only discuss a few : first , the proof makes use of corollary 1 of , the proof of which is based on the incorrect theorem 1 in ; it also makes use of lemma e.4 and proposition 4 of which are incorrect as discussed before .second , even if these results used in the proof were correct as they stand , additional problems would arise : lemma e.4 only delivers , and not the stronger conclusion that the limiting power equals , as would be required in the proof .furthermore , proposition 4 has gaussianity of the errors as a hypothesis , while such an assumption is missing in proposition 5 .we conclude by mentioning that a correct version of the part of proposition 5 of concerning tests of the form ( 8) in can probably be obtained by substituting our corollary [ cor1new ] and proposition [ e4new ] for corollary 1 and lemma e.4 of in the proof , but we have not checked the details . for the cliff - ord test this does not seem to work in the same way as the corresponding case of proposition 4 of is lacking a proof as discussed before .* proof of lemma [ convcm ] : * let be a sequence in converging to and let a spectral decomposition of , with ( ) forming an orthonormal basis of eigenvectors of and for denoting the corresponding eigenvalues ordered from smallest to largest and counted with their multiplicities . because is rank - deficient by assumption , we must have , or equivalently . because the kernel of has dimension one and because of positive definiteness of we can infer the existence of some such that must hold for every and . as a consequence , the sum from to in the previous display ,after being premultiplied by , converges to zero for .it remains to show that .let be an arbitrary subsequence of . by norm - boundedness of the sequence there exists another subsequence along which converges to some normalized vector , say .clearly left hand side in the previous display now converges to while the right hand side converges to zero .therefore is an element of the ( one - dimensional ) kernel of .since is normalized , we must have .this proves the claim as the subsequence was arbitrary . [ convpm ]let be a sequence of random -vectors such that and and let .if as for some , then the sequence is tight and the support of every weak accumulation point of the sequence of distributions of is a subset of .if , in addition , every weak accumulation point of the distributions of has no mass at the origin and if is normalized , then the distribution of converges weakly to . *proof : * let be an arbitrary positive real number . 
since the sequence is convergent to , it is bounded from above by , say .inequality gives every , which implies tightness . to prove the claim about the support of weak accumulation points note that and that the support of is certainly a subset of , which is a closed set .it thus suffices to show that converges to zero in probability .but this is again a consequence of markov s inequality : for every we have , we obtain and hence the upper bound in ( markov ) converges to zero as . to prove the final assertion let be an arbitrary subsequence and a subsequence thereof such that converges weakly to , say . by what has already been established , we may assume that almost surely holds . because is continuous at for every andbecause by the assumptions , we can apply the continuous mapping theorem to conclude that , we have that is almost surely equal to . butthis is almost surely equal to by definition of .this completes the proof because was an arbitrary subsequence . * proof of proposition [ aspexii ] : * 1 .let be a sequence converging to .assumption [ asdr ] implies that coincides with , which is precisely the distribution of . by assumption [ asc ]we have . by continuity of the symmetric nonnegative definite square root we obtain, converges weakly to .hence , the only accumulation point , say , of is the distribution of .the claim now follows because by assumption \2 .let be as before and observe that again coincides with , which , however , now equals the distribution of . since is a square root of , there must exist an orthogonal matrix such that . rewrite as .fix an arbitrary subsequence of . along a suitable subsubsequence matrix converges to an orthogonal matrix , say .therefore converges to .hence , the only accumulation point , say , of along the subsequence is the distribution of .but clearly .now this is equal to in case the distribution of is dominated by since the set is obviously a -null set . since was arbitrary , the proof of the first claim is complete . to prove the second claim observe that , which equals zero since the distribution of is dominated by by assumption and since is a -null set ( cf . remark [ e1](i ) ) . * proof of theorem [ mt ] : * let be a sequence in converging to .invariance of the test w.r.t . implies the last but one equality holds because of remark [ mi](ii ) .the covariance matrix of , say , a centered random variable with distribution , is given by which converges to by assumption [ asc ] .note that is necessarily normalized .by assumption [ asd ] every weak accumulation point of satisfies ( note that is in fact tight by lemma [ convpm ] ) .thus we can apply lemma [ convpm ] to conclude that as .since is bounded and is continuous at , the claim then follows from a version of the portmanteau theorem , cf .theorem 30.12 in . * proof of proposition [ prop_bound ] : * 1 . because we can find and . by -invariancewe have that and for every and for every . letting to zero we see that belongs to the closure of as well as of its complement . thus holds for every .suppose is an element of the boundary of the rejection region . if there is nothing to prove .hence assume .if would hold , then by the continuity assumption would be either in the interior or the exterior ( i.e. , the complement of the closure ) of the rejection region .because is continuous on , part 2 of the proposition establishes that the l.h.s . 
of ( [ char_bd ] )is contained in the r.h.s .because of part 1 , it suffices to show that every satisfying belongs to .obviously , .it remains to show that can be approximated by a sequence of elements belonging to : for set where is such that .such an exists , because by assumption .furthermore , must hold , since otherwise would follow , which in turn would entail for all , i.e. , , contradicting the assumptions .set and note that and hold .now a sequence that converges to zero for and satisfies for all if and for all if .then converges to and holds for large enough .furthermore , we have . butthis means that holds for large . * proof of lemma [ ar1 ] : * suppose assumption [ ascii ] holds. then clearly .set .furthermore , the above relation clearly implies and hence . because if and only if , and because , it follows that must be one - dimensional . hence must hold . since maps into in view of ( [ eq_79 ] ), it follows that is injective on . to prove the converse , note that given by ( [ scaled_limit_2 ] ) is by construction a bijection from to itself and is symmetric and nonnegative definite .thus its symmetric nonnegative definite square root exists and is a bijective map from to itself . furthermore ,the symmetric nonnegative square root of can be written in the form for a suitable choice of an orthogonal matrix . by continuity of the symmetric nonnegative square rootwe obtain remains to set and . * proof of theorem [ bt ] : * a.1 . by -invariance of andassumption [ asdr ] the power function does neither depend on nor ( cf .remark [ invariance ] ) , and thus it suffices to consider the case and . by assumption [ asdr ] we furthermore have is an orthogonal matrix .observe that holds for every and for every : this is trivial for and follows for from we have made use of -invariance of as well as of ( [ inv ] ) . observing that as well as belong to , using relation ( [ identity ] ) as well as -invariance of leads to is shorthand for .since the image of is and is injective when restricted to it follows that is bijective as a map from to . [ to see this suppose that . because this implies as well as .the first equality now implies .bijectivity of on then implies .] by assumption [ ascii ] the matrix converges to for and thus is bijective as a map from to whenever is sufficiently close to , say .if now is an accumulation point of , we can find a sequence that converges to such that converges to . by passing to a suitable subsequence, we may also assume that converges to an orthogonal matrix , say .we may furthermore assume that holds and thus is nonsingular . by the transformation formula for densities the -density of the random vector given by of , , and because is continuous -almost everywhere , this expression converges for -almost every to , which is the density of the random vector .scheff s lemma thus implies that the distribution of converges in total variation norm to , the distribution of .it now follows in view of ( [ expect_1 ] ) and ( [ expect_2 ] ) that because of ( [ identity ] ) , implying .this shows that must hold .conversely , given we can find a sequence such that converges to the given . repeatingthe argument given above then shows that , for the given , arises as an accumulation point of for .the claim follows immediately from the already established part 1 .recall that . is the pushforward measure of ( restricted to ) under the map .now is nothing else than the product of the measure on with density and the surface measure on with the constant given by ( cf . ) . 
in view of fubini s theorem( observe all functions involved are nonnegative ) and invariance of we then obtain is not equal to zero -almost everywhere , then so is because is nonsingular .now scale invariance of translates into scale invariance of , and hence restricted to is not equal to zero -almost everywhere , cf .remark [ e1](i ) in appendix [ auxil ] .since the inner integral in the preceding display is positive -almost everywhere by the assumption on , we conclude that must be positive .the claim that is proved by applying the above to . hence, if is neither -almost everywhere equal to zero nor -almost everywhere equal to one , we have established that is strictly between and .next observe that is a compact set .it thus suffices to establish that the map is continuous on .but this follows from ( [ dens ] ) , -almost sure continuity of , and scheff s lemma .b. by the assumptions on the random vector is spherically symmetric with , and hence is almost surely equal to where is a random variable satisfying and where is independent of and is uniformly distributed on the unit sphere ( cf .lemma 1 in ) .possibly after enlarging the underlying probability space we can find a random variable which is independent of and which is distributed as the square root of a chi - square with degrees of freedom . by -invariance of have has a standard multivariate gaussian distribution . again using -invariance of similarly obtain denotes the distribution of .this shows that we may act as if were gaussian .consequently , the results in a.1-a.3 apply . furthermore , under elliptical symmetry holds for every orthogonal matrix .hence , there exists only one accumulation point which is given by .[ alternatively , under elliptical symmetry we may choose w.l.o.g . to be any square root of , and thus equal to , and then apply part a.2 . ] that holds under the additional assumption on follows from part a.3 . [ off - diag ] suppose assumptions [ asc ] and [ ascii ] hold with the same vector .then bounded for and the set of all accumulation points of for is given by same statements hold if is replaced by or .* proof : * rewrite as where , is orthogonal , and .now and converge to and , respectively , by assumptions [ asc ] and [ ascii ] .since is clearly bounded , boundedness of follows .the claim concerning the set of accumulation points also now follows immediately .the proofs for and are completely analogous . * proof of theorem [ bt2 ] : * 1 . using invariance w.r.t . , equation ( [ ataylor ] ) , and homogeneity of we obtain for every every .let be an accumulation point of for .then we can find a sequence with along which the rejection probability converges to .( possibly after passing to a suitable subsequence ) we may also assume that along this sequence the orthogonal matrices and converge to orthogonal matrices and , respectively . using and invariance w.r.t . we obtain that is nonzero with probability because , is nonsingular , and possesses a density .hence , combining the previous display and equation ( taylor ) with and and then multiplying by , where is shorthand for , we obtain that almost surely .next observe that by assumption [ asc ] , continuity of the symmetric nonnegative definite square root , and we have the convergence holds for every realization of .note that holds almost surely .relation ( conv_1 ) together with assumption [ ascii ] then implies that the first term on the r.h.s. 
of ( [ rep ] ) converges almost surely to since is clearly continuous .we next show that the second term on the r.h.s . of ( [ rep ] )converges to zero almost surely : let denote the argument of in ( [ rep ] ) . fix a realization of such that .then is well - defined for large enough , and it converges to zero because of ( [ conv_1 ] ) and ( [ conv_2 ] ) .since holds as a consequence of ( [ ataylor ] ) , we only need to consider subsequences along which . for notational conveniencewe denote such subsequences again by . because of the assumptions on it suffices to show that is bounded .now we have made use of ( [ conv_1 ] ) and assumption [ ascii ] .we have thus established that surely .note that the range of is , and that is bijective as a map from to itself .hence , the random variable takes its values in and possesses a density on this subspace ( w.r.t . dimensional lebesgue measure on this subspace ) .since restricted to can be expressed as a multivariate polynomial ( in variables ) and does not vanish identically on , it vanishes at most on a subset of that has -dimensional lebesgue measure zero .it follows that , and hence the limit in ( [ conv_3 ] ) , is nonzero almost surely .observe that and are positive . by an application of the portmanteau theorem we can thus conclude from ( [ conv_3 ] ) that for limit in the preceding display obviously reduces to ( [ even ] ) and ( [ odd ] ) , respectively , and clearly implies .this together then proves that every accumulation point has the claimed form . to prove the converse ,observe first that for every we can find an such that holds ( exploiting compactness of the set of orthogonal matrices ) .now , let be given .then we can find a sequence with such that and converge to and , respectively .repeating the preceding arguments , then shows that is the limit of .the final claim is now obvious .\2 . if is an elliptically symmetric family we can w.l.o.g .set , implying that reduces to .furthermore , as is then spherically symmetric and satisfies , it is almost surely equal to where must satisfy and where is independent of and is uniformly distributed on the unit sphere in .let be a random variable which is independent of and which is distributed as the square root of a chi - square with degrees of freedom ( this may require enlarging the underlying probability space ) and define which clearly is a multivariate gaussian random vector with mean zero and covariance matrix . define in the same way as , but with replacing in assumption [ asdr ] .observe that the rejection probabilities of the test considered are the same whether they are calculated under the experiment or because of -invariance of the test statistic . applyingthe already established part 1 in the context of the experiment thus shows that the accumulation points of the rejection probabilities calculated under as well as under equal for even and equal for odd . in view of homogeneity of and the fact that as well as are almost surely positive , these probabilities do not change their value if we replace by .this proves ( [ even2 ] ) and ( [ odd2 ] ) . to prove the last but one claim observe that , and are independent .hence the accumulation point can be written as reduces to , because then obviously ( note that ) and because ( which is proved by arguments similar to the ones given below ( [ conv_3 ] ) ) . the final claim follows because by the assumed symmetry , the last equality following from the definition of . 
\3 .lemma [ off - diag ] shows that under the additional assumption we have for every , and hence .the claim then follows from part 2 . [ t]suppose is a test statistic that satisfies the conditions imposed on in theorem [ bt2 ] for some normalized vector . then : 1 . holds for every .in particular , vanishes on all of .if holds for every with , then there exists a neighborhood of in such that holds for every in that neighborhood . *proof : * 1 .write as with .then for every sufficiently small real we have , and hence exploiting -invariance of we obtain ( [ ataylor ] ) to both sides of the above equation , using homogeneity of , and dividing by we arrive at observe that is zero for , and converges to zero for for .a similar statement holds for as well . since , we obtain which proves the first claim .the second claim is then an immediate consequence since by homogeneity .suppose the claim were false .we could then find a sequence with .rewrite as with .clearly , would have to hold , implying for all sufficiently large .using -invariance we obtain for all large .in particular , we conclude that would have to hold for all large . applying ( [ ataylor ] ) to the r.h.s . of the preceding equation we thus obtain for all large homogeneity of then have for all large that is an element of the compact set on which is continuous and negative .hence , the r.h.s . of the preceding displayis eventually bounded from above by zero , a contradiction . inspection of the proof of part 1 of the preceding lemma shows that this proof in fact does not make use of the property that does not vanish on all of . *proof of corollary [ lem_illust_1 ] : * 1 .clearly implies in view of the definition of . in view of the assumption on ,the rejection region satisfies .consequently implies , cf .proposition [ prop_bound ] . but clearly holds , implying that .the result then follows immediately from theorem mt combined with the observation that is continuous at if and only if .since by assumption , we conclude similarly as above that implies . but clearly holds , implying that . as before, the result then follows from theorem [ mt ] . * proof of corollary [ lem_illust_2 ] : * observe that ( [ inv ] ) is satisfied for since is -invariant and by assumption .hence , all assumptions of part b of theorem [ bt ] are satisfied and thus the existence and the form of the limit follows . if the test is neither -almost everywhere equal to zero nor -almost everywhere equal to one , whereas is -almost everywhere equal to one if as discussed in remark [ range_for_kappa ] .part b of theorem [ bt ] and remark [ rbt](iv ) then deliver the remaining claims . * proof of corollary [ lem_illust_3 ] : * all assumptions for part 2 of theorem [ bt2 ] ( including the elliptic symmetry assumption ) except for ( [ ataylor ] ) are obviously satisfied .we first consider the situation of part 1 of the corollary : that follows immediately from and the definition of .furthermore , it was shown in example [ ex_quad ] that ( [ ataylor ] ) holds with and given by ( [ d_2 ] ) , and that satisfies all conditions required in theorem [ bt2 ] . applying the second part of theorem [ bt2 ] with immediately gives ( [ even_special ] ) .furthermore , observe that is nonsingular ( cf .the proof of theorem bt ) . by the general assumptions we have .if now holds , we see that the matrix in ( [ matrix ] ) is not equal to the zero matrix and is indefinite .consequently , the r.h.s . of ( [ even_special ] ) is strictly between zero and one . 
in case the matrix in ( [ matrix ] ) is again not equal to the zero matrix , but is now nonnegative definite , which shows that the r.h.s . of ( [ even_special ] )equals .next consider the situation of part 2 of the corollary : as shown in example [ ex_quad ] , now condition ( [ ataylor ] ) holds with and given by ( [ d_1 ] ) , and satisfies all conditions required in theorem bt2 . applying the second part of theorem [ bt2 ] now with then immediately gives ( [ odd_special ] ) .the claim regarding ( [ odd_special ] ) falling into then follows immediately from remark [ rem_accu](iii ) , while the final claim follows from this in conjunction with remark [ rem_accu](ii ) .the claim in parenthesis follows from the second part of theorem [ bt2 ] and the following observation : note that implies that are orthogonal .furthermore , since the matrix in parentheses in the definition of does not vanish on all of ( see example [ ex_quad ] ) . since also , we conclude that and are not collinear .finally , part 3 of the corollary follows immediately from part 3 of theorem [ bt2 ] observing that as shown by example [ ex_quad ] . * proof of lemma [ d2new ] : * let be a real number such that and .then and hold , implying that .theorem [ mt ] and remark [ rmt ] then entail . if but the same conclusion can be drawn since for .therefore , we have for every .next , let be a real number such that and hold .this implies and , and hence .theorem [ mt ] and remark [ rmt ] now give for those values of .monotonicity of w.r.t . shows that this relation must hold for all . from ( [ astar ] ) and the just established results we obtain is precisely one minus the cumulative distribution function of , and hence is continuous at by assumption .since it is clearly also decreasing in , we may conclude that note that the claim in parenthesis is an immediate consequence of the second part of proposition [ prop_bound ] . [ dt ] suppose that is a probability measure on which is absolutely continuous w.r.t .let be given by ( [ t_quadratic ] ) . 1 .then the support of is contained in ] .* proof : * 1 .observe that the image of under the map is ] , implying that the support of is contained in the same interval .[ we note for later use that the range of actually coincides with all of ]it suffices to show that is equal to zero for every .note that since is absolutely continuous w.r.t . and holds .consequently , we have for every show that it suffices to show that .the set under consideration is obviously an algebraic set .hence , it is a -null set if we can show that the quadratic form in the definition of this set does not vanish everywhere .suppose the contrary , i.e. , for every would hold . because is surjective , for every would have to hold .since is symmetric , this would imply , contradicting .this establishes .\2 . if this is trivial .hence assume .let be an element in the interior of ] .let be such that .such an exists , because the range of is all of ] .thus is equivalent to the cumulative distribution function of under being equal to one when evaluated at .lemma [ dt ] implies that this is in turn equivalent to ( since is clearly impossible ) .but is clearly equivalent to .this proves the first claim of part 1 .next observe that for every the assumptions on together with part 2 of lemma [ dt ] imply .the second claim then follows from lemma [ d2new ] .for the claim in parenthesis see remark [ range_for_kappa ] .\2 . 
by the same reasoning as in the proof of part 1we see that is then equivalent to .since by assumption , this is in turn equivalent to .this proves the first claim of part 2 .the second claim follows directly from lemma [ d2new ] because holds for in the specified range in view of lemma dt and the assumptions on .the remaining claims follow from remark [ range_for_kappa ] .the first claim is obvious in light of parts 1 and 2 , and the remaining claims follows from lemma [ dt ] and lemma [ d2new ] . * proof of proposition [ d4new ] : * the test is obviously invariant w.r.t . , and the additional invariance condition ( [ inv ] ) in theorem [ bt ] is satisfied because of .if , the rejection region as well as its complement have positive -measure , whereas is or the complement of a -null set in case , see remark [ range_for_kappa ] .the second claim then follows from theorem bt , part a.3 , and remark [ rbt](i ) in case , and is obvious otherwise .now , the just established claim implies that is not larger than . since the set is a -null set ( as is assumed ) and since is absolutely continuous by the assumptions of the lemma , then follows .the claim in parentheses is trivial . [ auxid ]let be a symmetric positive definite matrix and let .then , the following statements are equivalent : \(i ) for some matrix satisfying and , \(ii ) for any matrix satisfying and , \(iii ) , \(iv ) there exists a matrix such that and holds . * proof : * that ( i ) , ( ii ) and ( iii ) are equivalent is obvious from the relations and .that ( iv ) implies ( iii ) is obvious . to see that ( iii ) implies ( iv ) , note that is symmetric and idempotent , thus in other words , and are both square roots of the same matrix , which implies existence of an orthogonal matrix , say , such that then completes the proof . * proof of theorem [ i d ] : * 1 .clearly , as is positive definite , we must have with . by lemma [ auxid ], there exists an matrix such that and .since is a square root of there exists an orthogonal matrix such that .now observe that immediately gives the last equality in ( [ eq0ip ] ) . now ,if , then we can use the equation in the previous display to obtain , then also in view of ( [ eq1ip ] ) . hence ,also in this case we obtain and .this proves part 1 .\2 . observe that under the assumption on the distribution of under is the distribution of ( upon choosing ) which coincides with the distribution of by the implied spherical symmetry of the distribution of .but clearly , the distribution of coincides with the distribution of by spherical symmetry and since implies that is an orthogonal matrix . in turn , the distribution of coincides with the distribution of under since , in particular , satisfies assumption [ asdr ] .this proves that . that can be proved in the same way observing that .[ alternatively , it follows immediately from -invariance and the fact that the distribution of does not depend on . 
] the proofs for the corresponding statements regarding the distributions of and are analogous .since every other invariant statistic can be represented as a function of , , and , respectively , the second claim of part 2 follows .the third claim is now obvious .* proof of lemma [ isar1 ] : * suppose and set .this implies if inspection of the diagonal elements in ( [ repre ] ) shows that all diagonal elements of must be zero , which is only possible if since can not be the zero matrix .but then we arrive at and .now suppose would hold .then inspection of the diagonal elements in ( [ repre ] ) shows that the diagonal elements of are all identical equal to , say , and must satisfy , which can equivalently be written as , multiplying ( [ repre ] ) by from the left and by from the right and noting that holds , gives after a rearrangement from the last equation ( note that ) , and substituting into the last but one equation gives the function is obviously strictly increasing on since holds .this gives and consequently also would hold , a contradiction . * proof of lemma [ sarl ] : * clearly , and its kernel equals the kernel of which obviously contains and which is one - dimensional by the assumptions on .therefore the kernel equals , which together with lemma [ convcm ] proves the first claim . to prove the second claim we need to show that in the formulation of the lemma is well - defined , is injective when restricted to , and satisfies with .observe that for every we can find a such that holds .noting that is the spectral radius of by our assumptions on , we can conclude that for ( where denotes an arbitrary matrix norm ) , cf . , corollary 5.6.14 .but then it follows that can be written as the norm - convergent series for every .thus we obtain be an orthonormal basis of and define the matrix .then is an orthogonal matrix . set and observe that takes the form later use we note that is not an eigenvalue of since the eigenvalues of and coincide , since the eigenvalues of are made up of and the eigenvalues of , and because has algebraic multiplicity by assumption .now clearly for , which implies .consequently, that the infinite sum in the second line of ( [ proj_w_2 ] ) is norm - convergent because of ( [ neumann ] ) , and thus necessarily equals the inverse matrix in the last line of ( [ proj_w_2 ] ) . because is not an eigenvalue of in view of ( [ proj_w_j ] ) with ,the matrix is invertible , showing that is well - defined .furthermore , from ( [ proj_w_2 ] ) we see that ( [ claim_1 ] ) indeed holds . finally , is injective on since coincides with on this subspace . * proof of lemma [ sarl_2 ] : * the first claim is an obvious consequence of the maintained assumptions for the sem .the second claim follows from proposition [ aspexii ] together with the already established first claim , since assumption [ asc ] holds for the sem as shown in lemma [ sarl ] . * proof of corollary [ cor1new ] : * parts 1 - 3 follow from combining lemmata [ sarl ] , [ sarl_2 ] , theorem [ mt ] , remark [ rmt ] , theorem [ bt ] , and remark [ rbt](i ) , noting that here .part 4 is then a simple consequence of part 3 in view of proposition prop_bound , remark [ range_for_kappa ] , and remark [ rbt](iv ) ; cf .also the proof of corollary [ lem_illust_1 ] . 
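The norm-convergent series invoked in this proof is the familiar Neumann expansion: when the autoregressive parameter is smaller in modulus than the reciprocal of the spectral radius of the weights matrix, the inverse of (identity minus parameter times weights matrix) equals the sum of the powers of the scaled weights matrix. The sketch below only illustrates that fact numerically; the ring-shaped weights matrix, the value 0.4 of the parameter, and all variable names are our own assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative weights matrix W: a ring of n sites, each site giving weight 1/2
# to its two neighbours (row-normalised, so the spectral radius is 1).
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

rho = 0.4
spectral_radius = max(abs(np.linalg.eigvals(W)))
assert abs(rho) * spectral_radius < 1        # the convergence condition used in the proof

# Direct inverse of (I - rho W) versus the truncated Neumann series sum_k rho^k W^k.
direct = np.linalg.inv(np.eye(n) - rho * W)
series, term = np.zeros((n, n)), np.eye(n)
for _ in range(200):
    series += term                           # accumulate rho^k W^k
    term = rho * (term @ W)

print(np.max(np.abs(direct - series)))       # ~ machine precision
```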
* proof of corollary [ cor2new ] : * the first part follows immediately from part 1 of corollary [ lem_illust_3 ] .the second part follows from part 3 of the same corollary if we can verify that the additional condition assumed there is satisfied .first observe that is an eigenvector of to the eigenvalue and that is nonsingular for . because is symmetric , is then also an eigenvector of with eigenvalue .next observe that .but then we have for * proof of proposition [ prop1new ] : * lemmata [ sarl ] and sarl_2 show that assumptions [ asc ] and [ asd ] are satisfied . in view of the assumptions of the proposition , is clearly absolutely continuous w.r.t . with a density that is positive on an open neighborhood of the origin except possibly for a -null set , and is trivially satisfied since . obviously , is not a multiple of the identity matrix . [if it were , inspection of the diagonal elements shows that would have to be the zero matrix .however , this is also impossible since . ]also , can not be a multiple of the identity matrix in view of lemma [ isar1 ] .hence , in both cases we have . proposition [ d3new ] andthe observation that the rejection probabilities are monotonically decreasing in now establishes the first claim of the proposition .it remains to show that is equivalent to being an eigenvector of under the additional assumptions on .we may assume that is entrywise positive .we argue here similarly as in the proof of proposition 1 in .consider first the case where . if then which it follows that is an eigenvector of .conversely , if is an eigenvector of then is easily seen to be also an eigenvector of and hence also of .now , is an entrywise positive matrix by a result in , p. 69 .consequently , the eigenspace corresponding to its largest eigenvalue is one - dimensional and is spanned by a unique normalized and entrywise positive eigenvector , say .since is symmetric and is an entrywise positive eigenvector of , it must correspond to the largest eigenvalue of ( because otherwise it would have to be orthogonal to , which is impossible as and are both entrywise nonnegative ) .hence , . next consider the case .as before , implies hat is an eigenvector of .conversely , being an eigenvector of implies that is an eigenvector of . since is symmetric , entrywise nonnegative , and irreducible ( since is so ) the same argument as in the first case can be applied .* proof of proposition [ e4new ] : * as in the proof of proposition [ prop1new ] it follows that assumptions [ asc ] and [ asd ] are satisfied and that is absolutely continuous w.r.t . with a density that is positive on an open neighborhood of the origin except possibly for a -null set . by assumption holds .consider first case ( ii ) : observe that the eigenspaces of and are identical . by assumption [ asc ] we have for ,, limiting matrix being a matrix of rank exactly equal to since by the assumption .hence , its largest eigenvalue is positive and has algebraic multiplicity , while all other eigenvalues are zero .it follows from , p. 726 , lemma 2.1 , that then the eigenspace corresponding to the largest eigenvalue of ( and thus the eigenspace corresponding to the largest eigenvalue of ) converges to the eigenspace of the limiting matrix corresponding to its largest eigenvalue ( in the sense that the corresponding projection matrices onto these spaces converge ) .the latter space is obviously given by . 
because the eigenspaces of corresponding to the largest eigenvalue are independent of by assumption , it follows that these eigenspaces all coincide with .consequently , also holds for .in particular , follows , as has been assumed .the result now follows from the first part of proposition [ d3new ] .next consider case ( i ) : by assumption holds .hence we may apply the first part of proposition [ d3new ] and it remains to show that belongs to .now , observe that for , .because is an eigenvector of corresponding to its largest eigenvalue , say , as was shown above , it is also an eigenvector of corresponding to its largest eigenvalue , namely . because for , it follows that is an eigenvector of corresponding to the limit of , which necessarily then needs to coincide with the largest eigenvalue of . * proof of theorem [ theorem_splm ] : * observe that the covariance matrix of under is given by .now , for we have ^{1/2}u(\rho ) , \]]for a suitable orthogonal matrix . from lemmasarl we know that converges to as , .continuity and uniqueness of the symmetric square root hence gives ^{1/2}\rightarrow \left ( f_{\max } f_{\max } ^{\prime } \right ) ^{1/2}=f_{\max } f_{\max } ^{\prime } .\]]now , let , be an arbitrary sequence . then we can always find a subsequence such that along this subsequence converges to an orthogonal matrix .consequently , converges to . under random vector clearly has the same distribution as is a fixed random vector distributed according to the distribution of , which is independent of the parameters by assumption . observing that the random variable is almost surely nonzero by the assumption on the distribution of , the expression in the preceding display is now seen to converge in distribution as to is a random variable with values in .it then follows from the continuous mapping theorem that converges in distribution under to . in other words, converges weakly to pointmass .now observe that the r.h.s . of the preceding displayconverges to because converges weakly to pointmass and because is bounded and is continuous at , cf .theorem 30.12 in .a standard subsequence argument then shows that the limit of for , is as claimed .the second claim is an immediate consequence of the first one . * proof of lemma [ semid ] : * from ( [ eig ] ) we obtain . for thus obtain after transposition establishes the first claim .an immediate consequence of the first claim is establishes the second claim in view of lemma [ auxid ] .[ proj_1]let be a random -vector with a density , say , w.r.t .then is well - defined with probability and has a density , say , w.r.t .the uniform probability measure on .the density satisfies -almost everywhere , where .furthermore , if is positive on an open neighborhood of the origin except possibly for a -null set ( which is , in particular , the case if is positive -almost everywhere ) , then is positive -almost everywhere .* proof : * let be a borel set in and let be given by . then is the pushforward measure of ( restricted to ) under the map . but is nothing else than the product of the measure on with lebesgue density and the surface measure on where is given in the lemma ( cf . ) . 
in view of tonelli s theorem( observe all functions involved are nonnegative ) and since clearly holds for , we obtain establishes the claims except for the last one .we next prove the final claim .first , observe that for every borel set in we have if and only if .[ this is seen as follows : specializing what has been proved so far to the case where follows a standard gaussian distribution , shows that in this case is uniformly distributed on . hence , .but then the equivalence of the gaussian measure with establishes that if and only if .] let now satisfy .clearly , where is an open neighborhood of the origin on which is positive -almost everywhere .but then we must have , because follows as a consequence of as just shown above and because can be written as a countable union of the sets with . by the assumption on can now conclude that holds .hence , we have established that holds whenever is satisfied . \(ii ) let be a random -vector such that .assume that has a density w.r.t . ( which is , in particular , the case if is spherically symmetric ) .let be a -invariant borel set in with . then holds . to see this use -invariance and the fact that has no atom at the origin to obtain , where that is a borel subset of satisfying .hence holds .but then by what was shown in ( i ) . since possesses a density w.r.t . by assumption , we conclude that , and thus also must hold .\(iii ) let be as in ( ii ) and let be a -invariant borel set in with .then for every , , and every nonsingular matrix we have in view of ( ii ) since is a -invariant -null set .[ proj_2]let be a random -vector satisfying .then is well - defined with probability .assume further that the distribution of has a density , say , w.r.t .suppose is a random variable taking values in that is independent of and that has a density , say , w.r.t .define on the event and assign arbitrary values to on the event in a measurable way .then , the following holds : 1 . and for , .2 . possesses a density w.r.t .lebesgue measure which is given by has been given in lemma [ proj_1 ] .if is -almost everywhere continuous and is -almost everywhere continuous , then is -almost everywhere continuous .if is -almost everywhere positive and is -almost everywhere positive , then is -almost everywhere positive .if is constant -almost everywhere [ which is , in particular , the case if is spherically symmetric ] and if is distributed as the square root of a -distributed random variable with degrees of freedom , then is gaussian with mean zero and covariance matrix .* proof : * part 1 is obvious . to prove part 2 we denote the distribution of by and the distribution of by . because and are independent , the joint distribution of and on , equipped with the product -field , is given by the product measure .therefore , the distribution of is the push - forward measure of under the mapping .hence for every we have , using tonelli s theorem and the fact that and have densities and , respectively , that for the function is given by is clearly a non - negative and borel - measurable function , we can apply theorem 5.2.2 in stroock ( 1999 ) to see that establishes the second part of the lemma . to prove the third part denote by , and the discontinuity points of , , and , respectively , which are measurable . using part 2 of the lemma we see that , , and imply . therefore , negating the statement , we see that must hold which implies again theorem 5.2.2 in stroock ( 1999 ) we see that holds by assumption . 
similarly , we obtain the inner integral is zero as a consequence of the assumption that .together with equation ( [ dg * ] ) the last two displays establish . to prove part 4 denote by , , and the zero sets of , , and , respectively , which are obviously measurable .replacing , , and with , , and , respectively , in the argument used above then establishes part 4 . to prove the last part, we observe that being constant - almost everywhere implies that is uniformly distributed on . since is independent of , which is distributed as the square root of a with degrees of freedom , it is now obvious that is gaussian with mean zero and covariance matrix . as long as we are only concerned with distributional properties of we can assume w.l.o.g . that the probability space supporting is rich enough to allow independent random variables that have the required properties . in particular, we can then always choose such that the density is simultaneously -almost everywhere continuous and -almost everywhere positive ( e.g. , by choosing to follow a -distribution ) .
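The closing observation, that choosing the radial variable to follow a chi-distribution yields a Gaussian vector (part 5 of lemma [proj_2]), is easy to check by simulation. The following sketch is illustrative only: the dimension n = 3, the sample size, and the standard trick of normalising Gaussian draws to obtain a uniform direction on the sphere are our own choices, not details from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, draws = 3, 200_000

# Direction part: uniformly distributed on the unit sphere
# (normalised standard Gaussian vectors are uniform on the sphere).
g = rng.standard_normal((draws, n))
u = g / np.linalg.norm(g, axis=1, keepdims=True)

# Radial part: the square root of a chi-squared variable with n degrees of
# freedom, drawn independently of the direction.
r = np.sqrt(rng.chisquare(df=n, size=draws))

x = r[:, None] * u    # the product of radius and direction, as in the lemma

# If the claim holds, x should be Gaussian with mean zero and covariance
# (approximately the identity in this simplified setting):
print(np.round(x.mean(axis=0), 3))
print(np.round(np.cov(x, rowvar=False), 3))
```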
|
The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, earlier work (for the Durbin-Watson test and for the Cliff-Ord test) has shown that the power can be very low, and in fact can converge to zero, under certain circumstances. Motivated by these results, a subsequent paper set out to build a general theory that would explain these findings. Unfortunately, that paper does not achieve this goal, as a substantial portion of its results and proofs suffer from serious flaws. The present paper now builds a theory as envisioned there in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests. AMS Mathematics Subject Classification 2010: 62F03, 62G10, 62H11, 62H15, 62J05. Keywords: power function, invariant test, autocorrelation, spatial correlation, zero-power trap, indistinguishability, Durbin-Watson test, Cliff-Ord test.
|
the design of computational methods for the numerical approximation of the stokes system of equations modelling creeping incompressible flow is by and large well understood in the case where the underlying problem is well - posed .indeed , provided suitable boundary conditions are set , the system of equations are known to satisfy the hypotheses of the lax - milgram lemma and brezzi s theorem ensuring well - posedness of velocities and pressure .these theoretical results then underpin much of the theory for the design of stable and accurate finite element methods for the stokes system . in many cases of interest in applications , however , the necessary data for the theoretical results to hold are not known ; this is the case for instance in data assimilation in atmospheric sciences or oceanography . instead of knowing the solution on the boundary, data in the form of measured values of velocities may be known in some other set .it is then not obvious how best to apply the theory developed for the well - posed case .a classical approach is to rewrite the system as an optimisation problem and add some regularization , making the problem well - posed on the continuous level and then approximate the well - posed problem using known techniques . for examples of methods using this framework see and . in this paperwe advocate a different approach in the spirit of .the idea is to formulate the optimization problem on the continuous level , but without any regularization .we then discretize the ill - posed continuous problem and instead regularize the discrete solution .this leads to a method in the spirit of stabilized finite element methods where the properties of the different stabilizing operators are well studied .an important feature of this approach is that it eliminates the need for a perturbation analysis on the continuous level taking into account the tikhonov regularization and perturbations in data , that the discretization error then has to match . in our casewe are only interested in the discretization error and the perturbations in data .this allows us to derive error estimates that are optimal in the case of unperturbed data in a similar fashion as for the well - posed case .we exemplify the theory in a model case for data assimilation where data is given in some subset of the computational domain instead of the boundary , and we obtain error estimates using a conditional stability result in the form of a three ball inequality due to lin , uhlmann , and wang . a particular feature of the method formulated for the integration of data in the bulk ( and not on the boundary ), is that the dual adjoint problem does not require any regularization on the discrete level .indeed , the adjoint equation is inf sup stable , similarly to the case of elliptic problems on non - divergence form discussed in .the rest of the paper can be outlined as follows .first , in section [ sec : stokes ] , we introduce the stokes problem that we are interested in and propose the continuous minimization problem. then , in section [ fem ] , we present the non - conforming finite element method and prove some preliminary results . in section [ theory ]we prove the fundamental stability and convergence results of the formulation .finally we show the performance of the approach on some numerical examples .let be a polygonal ( polyhedral ) domain in , or .we are interested in computing solutions to the stokes system \nabla \cdot u & = & \mathfrak{g } \quad \mbox { in } \omega . 
\end{array}\ ] ] typically these equations are then equipped with suitable boundary conditions and are known to be well - posed using the lax - milgram lemma for the velocities and brezzi s theorem for the pressures .it is also known that the following continuous dependence estimate holds , here given under the assumption of homogeneous dirichlet conditions on the boundary . where we used the notation and for with .observe that for any solution to the equations and in any closed ball there holds ^d \times h^1(b_r).\ ] ] provided ^d ] . observe that this is not a strong assumption for the particular problem we will study below , since the domain here is somewhat arbitrary and not necessarily determined by a physical geometry . indeed the only situation in which this assumption can fail is when the boundary of coincides with a physical boundary with a corner .herein the main focus will be on methods that allow for the accurate approximation of the solution under the much weaker stability estimates that remain valid in the case of ill - posed problems where fails .a situation of particular interest is the case where the boundary data is known only on a portion of and nothing is known of the boundary conditions on the remaining part .this lack of boundary information makes the problem ill - posed and we assume that some other data is known such as : * the normal stress in some part of the boundary and , we will refer to this problem as the _ cauchy problem _ below . * the measured value of in some subdomain .we will refer to this problem as the _ data assimilation problem _ below .in the first case it is known that if a solution exists , then implies , in by unique continuation , however , no quantitiative estimates appear to exist in the literature for the pure cauchy problem ; see for results using additional measurements on the boundary . in the second case stability may be proven in the form of a three balls inequality and associated local stability estimates , see . for completeness of the analysis we focus on the second case for the errorestimates below .in particular we consider the case where no data are known on the boundary , i.e. . in the data assimilation casethe following theorem from provides us with a conditional stability estimate . assuming an optimal conditional stability estimate for the cauchy problem in the spirit of , it is straightforward to extend the anaysis to this case following .[ thm:3sphere](conditional stability for the stokes problem ) there exists a positive number such that if and , then if for ^{d+1} ] is a perturbation of the exact data resulting from measurement error or interpolation of pointwise measurements inside .observe that the considered configuration is also closely related to a pure boundary control problem , where we look for data on the boundary such that in the subset .we will first cast the problem , with the notation and with , on weak form . 
for the derivation of the weak formulationwe introduce the spaces ^d \} ] for velocities and and , where the zero subscript in the second case as usual indicates that the functions have zero integral over .we may the multiply the first equation of by and first integrate over and then apply green s formula to obtain similarly we may multiply the second equation by and integrate over to get introducing the forms and we may formally write the problem as : find such that and observe that this problem is ill - posed .in particular observe that we are not allowed to test with because of the homogeneous dirichlet conditions set on the functions in .to regularize the problem we cast it on the form of a minimization problem , first writing : = a(u , w ) + b(p , w ) - b(y , u)\ ] ] and then introducing the lagrangian : = \frac12 m(u - \tilde u_m , u - \tilde u_m ) + a[(u , p),(z , x)]-l(z),\ ] ] where is a bilinear form that depends on what data we wish to integrate .for the data assimilation problem that is our main concern we simply have where is a free parameter .we will also use the notation the optimality system of the associated constrained minimization problem takes the form & = l(w ) \\a[(v , q),(z , x ) ] + m(u , v ) & = m(\tilde u_m , v ) .\label{eq : min2}\end{aligned}\ ] ] this problem is ill - posed in general , but in the data assimilation case we know that if a solution exists and then this solution must satisfy the conditional stability of theorem [ thm:3sphere ] .a consequence of this is that if the system admits a solution for the exact data , then this solution is unique . to show this assume that there are two solutions and that solve , then solves the homogenous stokes equation and has and the uniqueness is a consequence of unique continuation based on theorem [ thm:3sphere ] . belowwe will assume that there exists a unique solution ^d \times h^1(\omega) ] and for faces on the boundary , , we let \vert_f : = v \vert_f ] and ^d ] defined by the ( component wise ) relation for every and with denoting the -measure of .it is conventient to introduce the broken scalar product with the associated norms the following inverse and trace inequalities are well known h_{\kappa}\|\nabla v_h\|_{\kappa } + h_\kappa^{\frac12 } \|v_h\|_{\partial \kappa } \leq c_i \|v_h\|_\kappa , \quad \forall v_h \in x_h .\end{array}\ ] ] using the inequalities of and standard approximation results from it is straightforward to show the following approximation results of the interpolant \|h^{-\frac12 } ( u - r_h u ) \|_{\mathcal{f } } + \|h^{\frac12 } \nabla ( u - r_h u ) \cdot n_f \|_{\mathcal{f } } & \leq & c h^{t-1 } |u|_{h^t(\omega ) } \end{array}\ ] ] where . it will also be useful to bound the -norm of the interpolant by its values on the element faces . to this endwe prove a technical lemma .[ lem : invtrace ] for any function there holds it follows by norm equivalence of discrete spaces on the reference element and a scaling argument ( under the assumption of shape regularity ) that for all the claim follows by summing over the elements of and recalling that . for the analysis below we also need a quasi - interpolation operator that maps piecewise linear , nonconforming functions into the space of piecewise linear conforming functions .let ^d \cup v_h \mapsto v \cap v_h ] we may write the sum over the faces of the mesh , replacing by . 
the conclusion follows by taking absolute values on both sides and moving the absolute values under the integral sign resulting in the desired inequality .[ weak_cons_full ] let denote the solution to - with .then there holds | ~\mbox{d}s\end{gathered}\ ] ] for all ,[v_h , q_h ] ) \in \mathcal{v}\times \mathcal{w} ] , and for all there holds by integration by parts we have , using the orthogonality property on the faces of , and by definition [ lem : stab_press ] let then there holds \|_{\mathcal{f}_i } + \|h^{\frac12 } [ p_h]\|_{\mathcal{f}_i } \lesssim \|h^{\frac12 } n_f\,\cdot\ , [ \nabla u_h - \mathcal{i } p_h]\|_{\mathcal{f}_i } + \|\nabla \cdot u_h\|_\omega + \|h^{-\frac12 } [ u_h]\|_{\mathcal{f}_i}.\ ] ] let , denote the components of and define the tangential projection of the gradient matrix on the face by where denotes outer product . considering one face we have \|_{f}^2 = \|h^{\frac12 } n_f\,\cdot\ , [ \nabla u_h ] \|_{f}^2 + \|h^{\frac12}[p_h]\|_{f}^2 - 2 \int_f h_f n_f\,\cdot\ , [ \nabla u_h ] \cdot ( n_f\,\cdot\ , [ \mathcal{i } p_h ] ) \mbox{d}s.\ ] ] the integrand of the last term of the right hand side may be written \cdot ( n_f\,\cdot\ , [ \mathcal{i } p_h ] ) = [ p_h ] \sum_{i=1}^d \sum_{j=1}^d n_{f , i } n_{f , j } [ \partial_{x_j } u_{i}].\ ] ] by applying the following identity where denotes the trace of ,we may write \left(\sum_{i=1}^d \sum_{j=1}^d n_{f , i } n_{f , j } [ \partial_{x_j } u_i]\right ) = [ p_h]\left([\nabla \cdot u_h]-[\text{\rm{tr(}}t \nabla u_h)]\right).\ ] ] observe that since the tangential component of the gradient of the conforming approximation does not jump we have = [ \text{\rm{tr(}}t ( \nabla u_h - \nabla i_\text{cf } u_h)].\ ] ] collecting these identities we obtain \cdot n_f\,\cdot\ , [ \mathcal{i } p_h ] \mbox{d}s = \int_f h_f [ p_h]\bigl ( [ \nabla \cdot u_h]-[\text{\rm{tr(}}t \nabla ( u_h - i_\text{cf } u_h))]\bigr)\mbox{d}s \\\leq \|h^{\frac12 } [ p_h]\|_{f } c_i ( \|\nabla ( u_h - i_\text{cf } u_h)\|_{\delta_f } + \|\nabla \cdot u_h\|_{\delta_f}),\end{gathered}\ ] ] where denotes the union of the elements that have as common face . consequently \cdot n_f\,\cdot\ , [ \mathcal{i } p_h ] \mbox{d}s \leq \frac12 \|h^{\frac12 } [ p_h]\|_{f}^2 + c \|h^{-\frac12 } [ u_h]\|_{\mathcal{f}_{\delta_f}}^2 + c \|\nabla \cdot u_h\|^2_{\delta_f}.\ ] ] summing over we see that \|^2_{\mathcal{f}_i } + \frac12 \|h^{\frac12 } [ p_h]\|^2_{\mathcal{f}_i } \lesssim \|h^{\frac12 } n_f\,\cdot\ , [ \nabla u_h - \mathcal{i } p_h]\|^2_{\mathcal{f}_i } + \|\nabla \cdot u_h\|^2_\omega + c \|h^{-\frac12 } [ u_h]\|^2_{\mathcal{f}_i}\ ] ] which proves the claim .[ disc_poinc](discrete poincar inequality ) for all there holds \|_{\mathcal{f}_i } + \|h^{-\frac12 } [ u_h]\|_{\mathcal{f}_i } + \|u_h\|_{\omega}\ ] ] and \|_{\mathcal{f}_i}.\ ] ] for the first inequality use the poincar inequality for nonconforming finite elements and a triangle inequality then observe that for constant , implies that and therefore ( * ? ? 
?* lemma b.63 ) using componentwise twice we then have \|_{\mathcal{f}_i}+ \|u_h\|_{\omega}.\ ] ] finally each component of is decomposed on the normal and tangential component on each face and we observe that using an elementwise trace inequality , \|_{\mathcal{f}_i } = \|h^{\frac12 } ( \mathcal{i } - n_f \otimes n_f ) [ \nabla ( u_h - i_\text{cf } u_h)]\|_{\mathcal{f}_i } \\\lesssim \|\nabla ( u_h - i_\text{cf } u_h)\|_h \lesssim \|h^{-\frac12 } [ u_h]\|_{\mathcal{f}_i}.\end{gathered}\ ] ] similarly for the proof of the second inequality observe that since ( redefining to act on a scalar variable , and once again by ( * ? ? ? * lemma b.63 ) ) there holds it then follows using an inverse inequality that \|_{\mathcal{f}_i}\ ] ] and the proof is complete .we will now focus on the formulation with .an immediate consequence of this choice is that any solution to the system must satisfy the issue of stability of the discrete formulation is crucial since we have no coercivity or inf sup stability of the continuous formulation to rely on . indeedhere the regularization plays an important part , since it defines a semi - norm on the discrete space .we introduce a mesh - dependent norm for the primal variable \|_{\mathcal{f}_i}+ \|h^{\frac12 } [ q_h]\|_{\mathcal{f}_i } + \gamma_m^{\frac12}\|v_h\|_{\omega } + \|h^{-\frac12 } [ v_h]\|_{\mathcal{f}_i},\ ] ] we will also use the following triple norm with control of both the dual pressure variabel and the dual velocities . since dirichlet boundary conditions are set weakly on , can be shown to be a norm on using lemmata [ lem : stab_press][disc_poinc ] .we now prove a fundamental stability result for the discretization .[ thm : infsup ] let , in . there exists a positive constant , that is independent of , but not of , or the local mesh geometry , such that for all there holds }{{|\mspace{-1mu}|\mspace{-1mu}|}(x_h , y_h ) { |\mspace{-1mu}|\mspace{-1mu}|}}\ ] ] first we observe that by testing with and we have \|^2_{\mathcal{f}_i}+\gamma_m \|u_h\|^2_\omega = \mathcal{g}[(u_h , z_h),(u_h ,-z_h ) ] .\ ] ] then observe that by integrating by parts in the bilinear form and using the zero mean value property of the approximation space we have \cdot\{w_h\ } ~\mbox{d}s.\ ] ] define the function such that for every face \vert_f.\ ] ] this is possible in the nonconforming finite element space since the degrees of freedom may be identified with the average value of the finite element function on an element face .using lemma [ lem : invtrace ] we have \|_f^2.\ ] ] testing with and we get \|_{\mathcal{f}_i}^2 = \mathcal{g}[(u_h , z_h),(0,y_h ) ] .\ ] ] by testing with , where and ^d ] be the solution of and the solution to , with and . then there holds first denote the discrete error .then by theorem [ thm : infsup ] }{{|\mspace{-1mu}|\mspace{-1mu}|}(x_h , y_h ) { |\mspace{-1mu}|\mspace{-1mu}|}}.\ ] ] then applying lemma [ weak_cons_full ] and [ lem : rhh1proj ] we have \leq \inf_{(\nu_h,\eta_h ) \in v_h \times w_h}\sum_{f \in \mathcal{f } } \int_{f } ~\mbox{d}s \\ -b_h(y_h , r_h u - u ) + s_{j,-1}(r_h u , v_h ) + \gamma_m(r_h u - u - \delta u , v_h)_{\omega}.\end{gathered}\ ] ] first note that \|_{\mathcal{f}_i } \leq c h \|u\|_{h^2(\omega ) } { |\mspace{-1mu}|\mspace{-1mu}|}(x_h,0){|\mspace{-1mu}|\mspace{-1mu}|}.\ ] ] finally , using a cauchy - schwarz inequality and a poincar inequality for for the perturbation we have collecting the above estimates ends the proof .[ thm : asymptotic ] assume that ^d ] . 
by constructionthe divergence of the -conforming part satisfies \|_{\mathcal{f}_i}+ \underbrace{\|\nabla \cdot u_h\|_h}_{=0}+ h \|u\|_{h^2(\omega)}\lesssim h \|u\|_{h^2(\omega)}\ ] ] and hence for . it remains to show that the weak limit is a weak solution of stokes equation . to this endconsider , with , \|_{\mathcal{f}_i } + h \|f\|_\omega ) \|w\|_{h^1(\omega)}\\ \lesssim h \|w\|_{h^1(\omega)}\end{gathered}\ ] ] we conclude by taking the limit .[ thm : error_est ] assume that ^d\times h^1(\omega) ] , where is defined by .we recall that \|_{\mathcal{f}_i } \leq c h \|u\|_{h^2(\omega)}\ ] ] so we only need to bound . also introduce .it follows that is a solution to the stokes equation on weak form with a particular right hand side .indeed we have for all ^d \times q ] . by the well - posedness of the problem we know that we know from equation , the fact that and proposition [ prop : stab_conv ] that \|_{\mathcal{f}_i } \lesssim h ] on every compact . we may then apply theorem [ thm:3sphere ] to and obtain these results may now be combined in the following way to prove the theorem .first by the triangle inequality , writing , by and there holds for the first term and using the discrete interpolation and proposition [ prop : stab_conv ] \|_{\mathcal{f}_i } \lesssim h + \|\delta u\|_\omega.\ ] ] for the last term , using , we have by the definition of and since by assumption here we applied proposition [ prop : stab_conv ] , , discrete interpolation , and applied to .finally by the triangle inequality , the a priori assumption , and the first claim of theorem [ thm : asymptotic ] we have the claim follows by collecting the bounds on the terms and applying the assumption on the perturbations in data versus the mesh - size .it is straightforward to prove the proposition [ prop : stab_conv ] and the theorems [ thm : asymptotic ] and [ thm : error_est ] also for and and thereby extending the analysis to include the method .we leave the details for the reader .one may also introduce perturbations in the right hand side .provided these perturbations are in ^d$ ] the same results holds .details on the necessary modifications can be found in .our numerical example is set in the unit square with zero right hand side and data given in the disc the flow is nonsymmetric with the exact solution given by we consider the formulation - , with for the parameters we chose , and , .first we perform the computation with unperturbed data .the results are presented in the left graphic of figure [ fig : exa2 ] .we report the velocity error both in the global -norm ( open square markers ) , the local -norm in the subdomain where ( filled square markers ) and in the residual quantities of ( circle markers , filled , open ) , \|_{\mathcal{f}_i}.\ ] ] the global pressure is plotted with triangle markers .the error plots for this case are given in figure [ fig : exa2 ] .we observe the convergence of the residual quantities .the global velocity and pressure -errors appears to have approximately convergence .the local error matches the result of theorem [ thm : error_est ] .indeed the dotted line is shows the behavior of the quantity illustrating the different components of the local error used in the proof of the theorem .we see that this quantity ( with a properly chosen constant ) gives a good fit with the local error .the same computation was repeated with a relative random perturbation of data .the results for this case is reported in the right plot of figure [ fig : exa2 ] . 
As predicted by the theory, the results are stable under perturbation of data as long as the discretization error is larger than the random perturbation (up to a constant). When the perturbations dominate, the errors in all quantities appear to stagnate.

[Figure: error against mesh-size; left, unperturbed data; right, with relative noise. Reference lines are the same in both plots and of the indicated orders; dashed lines with different constants, dash-dot and dotted.]
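For readers reproducing the convergence study, the slopes behind the reference lines can be estimated from successive refinements by assuming e ~ C h^p between consecutive mesh sizes. The sketch below is purely illustrative: the (h, error) pairs are made-up placeholders standing in for the measured errors, and only the formula for the observed order p is the point.

```python
import math

# Hypothetical (mesh size, error) pairs from successive uniform refinements;
# replace with the measured errors from the experiment.
h = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
err = [2.1e-2, 9.8e-3, 4.6e-3, 2.2e-3]

# Observed order between consecutive levels: p = log(e_k / e_{k+1}) / log(h_k / h_{k+1})
for k in range(len(h) - 1):
    p = math.log(err[k] / err[k + 1]) / math.log(h[k] / h[k + 1])
    print(f"h = {h[k + 1]:.5f}: observed order ~ {p:.2f}")
```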
|
We consider a stabilized nonconforming finite element method for data assimilation in incompressible flow subject to the Stokes equations. The method uses a primal-dual structure that allows for the inclusion of nonstandard data. Error estimates are obtained that are optimal with respect to the conditional stability of the ill-posed data assimilation problem.
|
recent experimental developments in the fields of genomics , e.g. whole genome dna sequencing or proteomics , are opening possibilities for systems level studies in biology .in particular , the notion that biological functions may rely on a large number of interconnected variables ( for example genes ) working in concert has stimulated general theoretical interest about properties of biological networks .studies of the statistical properties of large ( typically thousands of nodes ) biological networks have identified a number of functional building block , termed network motifs , that occur more frequently than random .these findings support the idea that some systems are designed around a modular architecture , in which autonomous modules are wired together to generate versatile biological functions . while structural ( or topological ) properties are key for network characterization , functional properties are ultimately encoded in dynamical , or time - dependent changes in the state variables of the nodes .the sizes of systems that can be modeled dynamically are typically much smaller ( 10 - 100 nodes ) .one common modeling approach , for example for the yeast cell - cycle , is to simulate the nonlinear system of chemical rate equations describing the putative biochemical processes .modeling approaches have been applied to a number of systems , including the cell - cycle , the lambda - phage switch in e. coli .although these models provide a detailed description , this approach suffers from the caveat that most parameters are currently not accessible experimentally .in addition , the number of parameters is typically about five per reaction , resulting in a prohibitively large parameter space .this last point makes it difficult to grasp the full solution space of the model .recent approaches based on sampling the parameter space in optimal regions have been developed . at the opposite end of model complexity ,dynamical rules based on boolean state variables have been useful for studying more global dynamical properties of topological classes of networks .in addition , boolean models have been successfully applied to the yeast cell - cycle and the body patterning in drosophila embryos . in this study, we develop a systematic approach to describe how the dynamical landscape of small ( less than about 50 nodes ) boolean networks is affected by perturbations in the network connectivity .in particular , we consider the basin entropy , a quantity that considers the size distribution of the basins of attraction .we complement entropy with a measure of distance between the stable fixed points of a perturbed network and those in the unperturbed network .this combination gives a low - dimensional and compact representation of the patterns induced by a large number of perturbations .we illustrate our methods using the yeast cell - cycle network introduced in , and discuss examples of structural perturbations producing a range of modified basins of attraction .following a network of nodes can be represented by a adjacency matrix , in which an activating link between node and node is represented by and an inhibiting link by . the possibility of self - inhibitory ( or activating links ) is not excluded . in the boolean approximation ,each node has two possible states , so that the global state of all nodes can be represented by a vector , with when the node is _ on _ and if the node is _off_. 
The full phase space containing the $2^N$ states is denoted by $\Omega$. A simple dynamical rule that characterizes the temporal evolution of the state variable can be defined following earlier work; it is closely related to update rules applied in perceptron models. If the network is in the state $\mathbf{S}(t)$ at time $t$, the state at the next time-step is given by

\[ s_i(t+1) = \begin{cases} 1 & \text{if } \sum_j a_{ij}\,s_j(t) > 0, \\ s_i(t) & \text{if } \sum_j a_{ij}\,s_j(t) = 0, \\ 0 & \text{if } \sum_j a_{ij}\,s_j(t) < 0. \end{cases} \]

For a given network, we apply this rule to every possible initial condition in $\Omega$. This defines orbits (trajectories) that must end in a limit cycle (periodic attractor), since we are dealing with a dynamical system on a finite space. A fixed point is a cycle of length one. Accordingly, $\Omega$ can be decomposed into a disjoint union of basins of attraction $\Omega_k$ of sizes $|\Omega_k|$: $\Omega = \bigcup_{k=1}^{K} \Omega_k$. In a biological network, the attractors correspond to functional endpoints, and it is important that the states in the attractors are consistent with observed data. For example, by far the largest endpoint in the cell-cycle network of Li et al. (see appendix) corresponds to the stationary G1 phase in the cycle. Other systems are more switch-like, for instance in signal transduction, where a cell might change its state from growth to differentiation according to an external trigger. To characterize these attractors, we introduce the following definitions:

* We compute the _number of attractors_ $K$: an attractor is a limit cycle or a fixed point. An attractor has a basin of attraction, which is the set of all initial conditions whose orbit converges to it.

* The _basin entropy_ is defined as follows: let $p_k$ be the probability that an initial state belongs to basin $\Omega_k$. Then the entropy reads $H := -\sum_{k=1}^{K} p_k \ln(p_k)$. It is maximum ($H = N\ln 2$) if each state is its own basin of size one, and minimum ($H = 0$) when there is one single basin. $H$ is a natural measure for characterizing basin structures. Because it takes into account the relative basin sizes, it is quite insensitive to the appearance of small and biologically irrelevant basins.

* The _perturbation size_ $\Delta$ measures the distance between attractors of a perturbed and a reference network: for every initial condition, the Hamming distance between the fixed points is computed, and the average over all initial conditions is taken. More precisely, if $\mathbf{S}^{*}_{G}(\mathbf{S}_0)$ is the fixed point of the trajectory starting at $\mathbf{S}_0$ and generated by the network $G$, then $\Delta := 2^{-N} \sum_{\mathbf{S}_0 \in \Omega} d\!\left(\mathbf{S}^{*}_{G}(\mathbf{S}_0), \mathbf{S}^{*}_{G_0}(\mathbf{S}_0)\right)$, where $d$ is the per-node Hamming distance between two Boolean states, namely $d(\mathbf{S}, \mathbf{S}') = \frac{1}{N} \sum_{i=1}^{N} |s_i - s'_i|$. The value $\Delta$ has the following interpretation: it is the average probability (taken over all nodes) that, for a random initial condition, the final state of a node differs. In this study, the reference network $G_0$ will be the cell-cycle network of Li et al., which has one very large basin of attraction and several smaller ones.

Our goal is to assess how network dynamics is affected by several types of perturbations. We consider two classes: one which randomizes the adjacency matrix while keeping a number of topological characteristics of the original network invariant; the second class mimics biological perturbations, as would occur for example through mutations in the interaction partners that constitute the network links. The two classes are defined as follows:

* _Shuffle_ (class I): all activating and inhibiting arrows are cut in half and re-wired randomly. This ensures that the connectivity at each node is conserved.
As compared to the Li et al. study, we generate random networks that are more constrained, since the connectivity at each node is forced to remain unchanged after randomization. Such perturbations are applied in studies of network motifs.

* _Remove_ (class II): the arrows are simply suppressed. We extend this class of perturbations beyond single link removal.

We study the yeast cell-cycle network of Li et al. (the _yeast cell-cycle network_ or _YCC_), in which a Boolean model reproducing the different phases of the cycle is constructed (see appendix). This model has a main fixed point attracting most of the initial conditions. Biologically this state corresponds to the G1 stationary phase of the cell-cycle, as reflected by the activities of the respective nodes. Using computer simulations, the authors further showed that the cell-cycle dynamics had certain robustness properties when challenged with perturbations. In particular, it was shown that in a majority of cases, removal of one link or addition of a link at random did not change the size of the largest basin of attraction much. Finally, the studied network had unusual trajectory channeling properties when compared to random networks with an equal number of nodes and links. Here we extend the characterization of this model by introducing a combination of measures to characterize the structure of the basins of attraction as they are modified by structural perturbations. In particular, we investigate the consequences of combined mutations and show that they can lead to cancellation effects. This type of perturbation allows us to study the dynamical characteristics of a biological network in comparison with random networks belonging to a topological class.

Figure 1 shows the number of attractors $K$ and the entropy $H$ of the YCC and randomly shuffled (class I) versions thereof. The location of the reference network in the $(K, H)$ plane relative to the scatter of the perturbed networks allows us to assess how typical a network behaves with respect to a class. Accordingly, the YCC is atypical, as seen by its marginal location in the lower left corner. Indeed, this network has lower entropy and fewer basins than most networks, consistent with earlier reports. The previous discussion shows how the entropy characterizes the system of attractors. However, $H$ contains only information about the relative weights of the attractors, irrespective of their biological relevance. For example, a perturbation can decrease the entropy while shifting the fixed point away from that in the unperturbed, biologically relevant state. For this reason we introduced a second quantity, the perturbation size $\Delta$ defined above, a probabilistic measure of the change in the fixed point after perturbation.
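For networks of the sizes considered here, the quantities defined above (the synchronous threshold update, the attractors, the basin entropy $H$, and the perturbation size $\Delta$) can be obtained by exhaustively enumerating all $2^N$ initial conditions. The sketch below is a minimal illustration on a toy 3-node adjacency matrix: the wiring, the helper names, and the use of a single representative state per attractor are simplifying assumptions of ours and do not reproduce the yeast cell-cycle model itself.

```python
import math
from itertools import product

def step(state, A):
    """One synchronous update of the threshold rule."""
    new = []
    for i, row in enumerate(A):
        total = sum(a * s for a, s in zip(row, state))
        new.append(1 if total > 0 else (0 if total < 0 else state[i]))
    return tuple(new)

def attractor_of(state, A):
    """Iterate the dynamics until a state repeats; return the attractor states."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state, A)
    first = seen[state]
    return frozenset(s for s, idx in seen.items() if idx >= first)

def basins(A):
    """Basin size of every attractor, by exhaustive enumeration of initial states."""
    sizes = {}
    for init in product((0, 1), repeat=len(A)):
        att = attractor_of(init, A)
        sizes[att] = sizes.get(att, 0) + 1
    return sizes

def basin_entropy(sizes):
    total = sum(sizes.values())
    return -sum(s / total * math.log(s / total) for s in sizes.values())

def perturbation_size(A_ref, A_pert):
    """Average per-node Hamming distance between the final states of two networks.

    For brevity a single representative state of each attractor is used here."""
    n = len(A_ref)
    dist = 0.0
    for init in product((0, 1), repeat=n):
        a = min(attractor_of(init, A_ref))
        b = min(attractor_of(init, A_pert))
        dist += sum(x != y for x, y in zip(a, b)) / n
    return dist / 2 ** n

# Toy 3-node network (rows: target node i, columns: source node j);
# +1 = activating arrow, -1 = inhibiting arrow. Not the yeast wiring.
A = [[0, 1, -1],
     [1, 0, 0],
     [0, -1, 0]]
A_removed = [row[:] for row in A]
A_removed[0][1] = 0                  # class II perturbation: remove one arrow

sizes = basins(A)
print("number of attractors:", len(sizes))
print("basin entropy H:", round(basin_entropy(sizes), 3))
print("perturbation size Delta:", round(perturbation_size(A, A_removed), 3))
```

For an 11-node network such as the YCC, the same exhaustive enumeration covers only 2048 initial conditions, so this brute-force approach remains cheap.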
[Figure 2 (fig_rmv): number of attractors and entropy after class II perturbations (removed arrows). Colors represent the number of removed arrows: black for one removed arrow, red for 2, green for 3, turquoise for 4 and yellow for more than 4. a: same figure as for the class I perturbation; the range of possible values is indicated by the dashed gray lines, and the open blue circles represent the reference network. b: distribution of $\Delta$. c: $\Delta$ vs. $H$ plot; the dashed gray line represents the entropy of the reference network.]

We first repeat figure 1 for class II perturbations, which shows that networks with few perturbations cluster around the wild-type model (figure 2a), while the spread for networks with four perturbations resembles the shuffled models (figure 1). Turning to the measure $\Delta$, we find that its distribution (figure 2b) is bimodal, showing two distinct populations of perturbations (small and large $\Delta$). In the second case, the perturbed model does not reproduce the biologically correct cell-cycle progression. But if $\Delta$ is small, then the system of attractors of the perturbed network is still consistent with the biology, and the entropy allows us to discriminate between networks with a larger or smaller main basin of attraction. For this reason, the entropy and $\Delta$ are complementary for describing the dynamical landscape (figure 2c). The two different modes in the $\Delta$-histogram are clearly reflected in this 2d representation. Noticeably, the $\Delta$ values span a broad range for any number of removed arrows; on the other hand, higher entropies are more frequent for a larger number of removed arrows. Qualitatively, the spread of points in the plane conveys a measure of _robustness_. Accordingly, the $\Delta$ measure appears more fragile than the entropy, especially when few arrows are removed. We now interpret the different locations in the plane:

1. If $\Delta$ is large, the model no longer has attractor states that coincide with the gene activities of the different cell-cycle phases. Such perturbations are especially interesting if the number of removed arrows is small (dark colors). Such links are then essential for the model, as their removal disrupts the cell-cycle very efficiently.

2.
If $\Delta$ is small and the entropy increases, the probability that the dynamics ends in the reference attractor decreases, demonstrating that the removed arrows contributed to the channeling properties of the system.

3. If $\Delta$ is small and the entropy decreases, the main attractor of the perturbed network has a stronger attraction property. Some of these networks could be considered as alternative cell-cycle models.

We illustrate these three regimes by examples: in the first example (table II), the dynamics has a large main basin of attraction like in the unperturbed model (table I). However, the fixed point is significantly different from wild-type, as the system is blocked in a state of the M phase and cannot properly finish the cell-cycle (see the appendix for a recapitulation of the wild-type mode).

[Table I: basins of attraction with their respective probabilities for the original YCC network. The largest basin ends at the G1 stationary state. Entropy and number of attractors are also given.]

Wild-type cell-cycle sequence (one row per time step t; 1 = node on, 0 = node off; the asterisk marks the stationary state):

t   cln3  mbf  sbf  cln1,2  cdh1  swi5  c20,14  clb5,6  sic1  clb1,2  mcm1  phase
1   1     0    0    0       1     0     0       0       1     0       0     start
2   0     1    1    0       1     0     0       0       1     0       0     g1
3   0     1    1    1       1     0     0       0       1     0       0     g1
4   0     1    1    1       0     0     0       0       0     0       0     g1
5   0     1    1    1       0     0     0       1       0     0       0     s
6   0     1    1    1       0     0     0       1       0     1       1     g2
7   0     0    0    1       0     0     1       1       0     1       1     m
8   0     0    0    0       0     1     1       0       0     1       1     m
9   0     0    0    0       0     1     1       0       1     1       1     m
10  0     0    0    0       0     1     1       0       1     0       1     m
11  0     0    0    0       1     1     1       0       1     0       0     m
12  0     0    0    0       1     1     0       0       1     0       0     g1
13  0     0    0    0       1     0     0       0       1     0       0     g1 *

hartwell, l. h., hopfield, j. j., leibler, s., and murray, a. w. (1999). from molecular to modular cell biology, nature 402, c47-52.
alm, e., and arkin, a. p. (2003). biological networks, curr opin struct biol 13, 193-202.
oltvai, z. n., and barabasi, a. l. (2002). systems biology. life s complexity pyramid, science 298, 763-4.
alon, u. (2003). biological networks: the tinkerer as an engineer, science 301, 1866-7.
barabasi, a. l., and oltvai, z. n. (2004). network biology: understanding the cell s functional organization, nat rev genet 5, 101-13.
ihmels, j., friedlander, g., bergmann, s., sarig, o., ziv, y., and barkai, n. (2002). revealing modular organization in the yeast transcriptional network, nat genet 31, 370-7.
arkin, a., ross, j., and mcadams, h. h. (1998). stochastic kinetic analysis of developmental pathway bifurcation in phage lambda-infected escherichia coli cells, genetics 149, 1633-48.
novak, b., csikasz-nagy, a., gyorffy, b., chen, k., and tyson, j. j. (1998). mathematical model of the fission yeast cell cycle with checkpoint controls at the g1/s, g2/m and metaphase/anaphase transitions, biophys chem 72, 185-200.
cross, f. r., archambault, v., miller, m., and klovstad, m. (2002). testing a mathematical model of the yeast cell cycle, mol biol cell 13, 52-70.
barkai, n., and leibler, s. (2000). circadian clocks limited by noise, nature 403, 267-8.
vilar, j. m., kueh, h. y., barkai, n., and leibler, s. (2002).
mechanisms of noise - resistance in genetic oscillators , proc natl acad sci u s a 99 , 5988 - 92 .epub 2002 apr 23 .leloup , j. c. , and goldbeter , a. ( 2003 ) . toward a detailed computational model for the mammalian circadian clock ,proc natl acad sci u s a 100 , 7051 - 6 .epub 2003 may 29 .von dassow , g. , meir , e. , munro , e. m. , and odell , g. m. ( 2000 ) .the segment polarity network is a robust developmental module , nature 406 , 188 - 92 .mello , b. a. , and tu , y. ( 2003 ) .quantitative modeling of sensitivity in bacterial chemotaxis : the role of coupling among different chemoreceptor species , proc natl acad sci u s a 100 , 8223 - 8 .epub 2003 jun 25 .mello , b. a. , and tu , y. ( 2003 ) .perfect and near - perfect adaptation in a model of bacterial chemotaxis , biophys j 84 , 2943 - 56 .hoffmann , a. , levchenko , a. , scott , m. l. , and baltimore , d. ( 2002 ) .the ikappab - nf - kappab signaling module : temporal control and selective gene activation , science 298 , 1241 - 5 .brown , k. s. , and sethna , j. p. ( 2003 ) .statistical mechanical approaches to models with many poorly known parameters , phys rev e stat nonlin soft matter phys 68 , 021904 .epub 2003 aug 12 .aldana , a. , coppersmith , s. , kadanoff , l.p .( 2002 ) , boolean dynamics with random couplings ( http://arxiv.org/abs/nlin/0204062 ) .kauffman , s. , peterson , c. , samuelsson , b. , and troein , c. ( 2003 ) .random boolean network models and the yeast transcriptional network , proc natl acad sci u s a 100 , 14796 - 9 .epub 2003 dec 1 .li , f. , long , t. , lu , y. , ouyang , q. , and tang , c. ( 2004 ) .the yeast cell - cycle network is robustly designed , proc natl acad sci u s a 101 , 4781 - 6 .epub 2004 mar 22 .albert , r. , and othmer , h. g. ( 2003 ) .the topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in drosophila melanogaster , j theor biol 223 , 1 - 18 .chaves , m. , albert , r. , and sontag , e. d. ( 2005 ) .robustness and fragility of boolean models for genetic regulatory networks , j theor biol 235 , 431 - 449 .vazquez , a. , dobrin , r. , sergi , d. , eckmann , j .-p . , oltvai , z. n. , and barabasi , a .-the topological relationship between the large - scale attributes and local interaction patterns of complex networks , proc natl acad sci u s a 101 , 17940 - 5 .
|
We study the dynamics of gene activities in relatively small biological networks (up to a few tens of nodes), e.g. the activities of cell-cycle proteins during mitotic cell-cycle progression. Using the framework of deterministic discrete dynamical models, we characterize the dynamical modifications in response to structural perturbations of the network connectivities. In particular, we focus on how perturbations affect the set of fixed points and the sizes of the basins of attraction. Our approach uses two analytical measures: the basin entropy, and the perturbation size, a quantity that reflects the distance between the set of fixed points of the perturbed network and that of the unperturbed network. Applying our approach to the yeast cell-cycle network introduced by Li et al. provides a low-dimensional and informative fingerprint of network behavior under large classes of perturbations. We identify interactions that are crucial for proper network function, and also pinpoint functionally redundant network connections. Selected perturbations exemplify the breadth of dynamical responses in this cell-cycle model.
|
the study of reversible photodegradation of dye - doped polymers began in the early 2000 s with the discovery of reversible photodegradation as measured with amplified spontaneous emission ( ase ) from disperse orange 11 ( do11 ) doped into ( poly)methyl - methacrylate ( pmma) . since then , reversible photodegradation has been found in air force 455 ( af455) , rhodamine b and pyrromethene , as well as several anthraquinone derivatives .several methods have been used to probe the effect including ase , absorption spectroscopy , fluorescence , digital imaging microscopy , and two - photon fluorescence . in an effort to expand our ability to measure and understand reversible photodegradation ,we have developed a white light interferometric microscope ( wlim ) , which combines the absorption spectrometer s frequency resolution with digital imaging s spatial resolution . along with measuring a sample s absorption spectrum , the wlim measures the change in the real part of the index of refraction due to photodegradation .a wlim utilizes a michelson interferometer and a ccd detector to obtain spatial and spectral resolution .the method of using a michelson interferometer with a photodiode detector to obtain spectral information in the visible spectrum is well known . using a point detector ,michelson interferometers have been used to measure the complex index of refraction in glass , gases , and liquids .further , it has been used with some modifications in astronomy research and the study of surfaces .the idea of using a ccd as a detector for interferometry was demonstrated previously by pisani _ et .; they used a fabry - perot interferometer , which measured only the imaginary part of the index of refraction .our wlim is designed to spatially and spectrally resolve the photodamage induced change in the complex index of refraction of dye - doped polymer thin films .the wlim ( see figure [ fig : setup ] ) consists of a thorlabs solid state light source ( hpls-30 - 03 ) , a michelson interferometer , and an edmunds optics monochrome ccd ( eo 0813 m ) .the michelson interferometer is made up of a uvfs uncoated non - polarizing cubic beam splitter and two uncoated uvfs mirrors , one of which is mounted on a thorlabs piezo stage ( nf5dp20s ) with a piezo translation range of m .the piezo stage is controlled by a closed feedback loop controller from thorlabs ( bpz001 ) , allowing for nanometer translation precision .photodegradation of samples is induced using an arkr cw laser focused with a cylindrical lens , with two polarizers providing intensity control .control and data acquisition by computer uses the labview 2011 full development system , and the data is analyzed using custom procedures in igor pro .a michelson interferometer easily produces interference fringes when using a monochromatic laser with a coherence length of several meters .the process is far more complicated when utilizing white light , which typically has a coherence length of the order of 40 - 60 m .the system is aligned first by optimizing the bulls - eye pattern from a hene laser beam that is collinear with the collimated white light .then , a differential micrometer is used to set the two arms of the interferometer within 1 mm of each other , at which point we scan the micrometer in intervals of approximately 30 m in order to find the zero path length difference , where a series of dark and light fringes form . 
the mirror alignment is subsequently adjusted so that the dark and light fringes change into a series of colorful fringes .after several iterations , a centered white light interference pattern forms .the ideal pattern is a bullseye shape with colorful fringes .since the optics have a minimum flatness of due to the beam splitter , the pattern produced is oval in shape .the wlim produces an intensity as a function of path length difference , , for each pixel of the camera , which can then be converted into the spectral intensity , , using a fourier transform , where is the wavenumber in vacuum .the result of the fourier transform is the interference intensity , which can be written as a function of the electric field in each arm of the interferometer as given an incident electric field amplitude , , the electric field in each arm can be written as where denotes the arm , is the spectral response of the optics in arm , and is the phase due to arm .assuming and are real quantities and substituting equation [ eqn : efield ] into equation [ eqn : iint ] we find the interference intensity to be where and denotes the complex conjugate . for the empty interferometer the zero path length difference phase , , in each arm can be written where is the balanced round trip arm length , and is a phase introduced by the optics and deviations from the plane - wave approximation . combining the phases , we find that for the empty interferometer the phase difference , , between the arms is given the high optical density of dye - doped polymer samples and their relatively large index of refraction ( ) , we must place nearly identical samples in each arm to maintain fringe contrast .`` nearly identical '' means the samples compositions are identical such that their complex index of refraction is the same , but their thickness and roughness may be different .the zero path length difference phase of each arm is and where is the sample thickness , is the glass substrate thickness , is the real wavenumber of the glass where we assume the imaginary portion is negligible , is the complex wavenumber of the dye - doped polymer , and is a phase factor introduced due to the samples not being perfectly flat and aligned ; comes from the empty interferometer phase .combining phases and separating into the real , , and imaginary , , parts we find , + 2(a_2-a_1)\left[k_g(k_0)-k_0\right ] \nonumber \\+ \psi_1(k_0)-\psi_2(k_0)+\phi_2(k_0)-\phi_1(k_0 ) , \label{eqn : rephase}\end{aligned}\ ] ] and using the definitions of the real and imaginary parts of , and where is the absorbance per unit length , and is the real part of the index of refraction , we rewrite equations [ eqn : rephase ] and [ eqn : imphase ] as + 2(a_2-a_1)k_0\left[n_g(k_0)-1\right ] \nonumber \\ + \psi_1(k_0)-\psi_2(k_0)+\phi_2(k_0)-\phi_1(k_0),\label{eqn : rephase2}\end{aligned}\ ] ] and while the assumption that only the sample thickness varies is a good approximation for fresh samples , this assumption is weakened when one sample is damaged . 
letting and the undamaged and damaged index of refraction , respectively , and denote the undamaged and damaged absorbance per unit length , respectively , and letting only the sample in arm 1 to be damaged , we can rewrite equations [ eqn : rephase2 ] and [ eqn : imphase2 ] as \nonumber \\+\psi_1(k_0)-\psi_2(k_0)+\phi_2(k_0)-\phi_1(k_0 ) , \label{eqn : phasedeg}\end{aligned}\ ] ] and the difference between the undamaged and damaged phases is , \label{eqn : dphase}\end{aligned}\ ] ] and .\end{aligned}\ ] ] the samples used in this study are spin coated thin films of 1,4-diamine-9,10-anthraquinone(1,4-daaq ) doped into pmma .samples are prepared as follows ; pmma and 1,4-daaq purchased from sigma - aldrich are dissolved into a solution of 33% -butyrolacetone and 67% propylene glycol methyl ether acetate . maintaining a ratio of 15% solids to 85% solvents ,we add the dye and polymer such that the concentration of dye in the polymer is 2.5g / l .the solution is then stirred using a magnetic stirrer for 24 hours to ensure the dye and polymer fully dissolve .afterwards the solution is filtered through a 0.2 m acrodisc filter to remove any remaining solids .the glass substrates are prepared for spin coating by submerging plain glass slides in acetone to remove any residues from manufacturing , then placing the cleaned slides in deionized water and finally drying and storing them in a lint - free container to minimize contamination .once the dye - polymer solution is prepared , it is placed on a substrate and then spin coated at 1100 rpm for 90s .the spin coated sample is then placed in an oven overnight to dry and to allow solvents to evaporate .once the sample has cooled , it is cut in half to form two nearly identical 2 cm 2 cm squares .when placing a sample in the interferometer for degradation , we use the two halves as a balanced pair in order to minimize differences between the sample arm and the attenuating arm of the interferometer .the experimental procedure is as follows .the white light source warms up for one hour before taking data to minimize fluctuations in the light source .next , a reference interferogram is produced using the empty interferometer and a translation step size of 20 nm , for a total number of 1000 steps . at each step ,the average of ten images is used in order to minimize noise from the ccd detector . once all the images are taken ,they are imported into igor pro , where a custom procedure finds the interferogram at each pixel and then takes the fft in order to find the phase and magnitude at each pixel .once the reference interferogram is taken , the sample and attenuator are mounted and the interferometer is realigned to compensate for wavefront distortion due to the samples not being perfectly homogeneous and flat .this adjustment is found to effect the measured phase but not the magnitude . given that we are typically only concerned with differences due to photodegradation , the absolute phase is unimportant .the pristine sample s interferogram is measured .subsequently , the sample is damaged using a pump laser with an average intensity of 60w/ for two hours , then another interferogram is taken .finally , the sample is removed and another reference interferogram is taken to ensure the probe light has not drifted from its original intensity . using the magnitude of the interferogram at each pixel without the samples , with the pristine sample , and after damaging , the absorbance before and after photodegradation is determined at each pixel . 
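a minimal sketch of the per-pixel analysis just described: the interferogram recorded while stepping the piezo is fourier transformed to obtain a spectral magnitude and phase at each wavenumber, and the differences between the pristine and damaged scans give the photoinduced changes in absorbance per unit length and in the real part of the index. the 20 nm step is taken from the procedure above, but the assumed film thickness and the round-trip factors are illustrative assumptions, and windowing, zero padding and detailed phase-unwrapping are left out.

import numpy as np

step = 20e-9            # piezo step of 20 nm, as in the procedure above
d_sample = 1.0e-6       # assumed film thickness (illustrative)

def spectrum(interferogram):
    """fft of one pixel's interferogram sampled versus mirror displacement;
    returns wavenumbers (rad/m) and the complex spectral amplitudes."""
    amp = np.fft.rfft(interferogram - np.mean(interferogram))
    # a mirror displacement x changes the optical path difference by 2x, so the
    # wavenumber of fft bin m is k0 = 2*pi*m / (n_samples * 2 * step)
    k0 = 2.0 * np.pi * np.arange(amp.size) / (interferogram.size * 2.0 * step)
    return k0, amp

def photo_induced_changes(i_pristine, i_damaged):
    """per-pixel changes in absorbance per unit length and in the real index."""
    k0, s_pristine = spectrum(i_pristine)
    _, s_damaged = spectrum(i_damaged)
    # magnitude ratio -> change in (natural-log) absorbance per unit length;
    # an extra factor of two may be needed depending on the round-trip convention
    d_alpha = -np.log(np.abs(s_damaged) / np.abs(s_pristine)) / d_sample
    # phase difference -> change in the real part of the index of refraction,
    # using d_phi = 2 * d_sample * k0 * d_n for the double pass through the film
    d_phi = np.unwrap(np.angle(s_damaged) - np.angle(s_pristine))
    d_n = d_phi / (2.0 * d_sample * np.clip(k0, 1e-12, None))
    return k0, d_alpha, d_n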
given that an individual pixel is noisy , an average over adjacent pixels is performed to find a binned absorbance value .the same averaging is applied to the phase data .since each pixel corresponds to a different point along the pump profile , spatial variations allow us to compare the change in absorbance and change in phase for different pump intensities .given that one of the primary goals of developing the wlim was to measure absorbance as a function of position and therefore intensity , the method is tested for each pixel by comparing the absorbance data for photodegradation measured with the wlim to results found with an ocean optics spectrometer .the spectrometer accommodates thicker samples because of its greater dynamic range .this difference in samples means the absolute numbers will be different between our experiments , but the spectral shapes remain the same .figure [ fig : specabs ] shows spectrometer data at three times during photodegradation corresponding to different pump doses .figure [ fig : wlimabs ] shows the sum of the absorbance in each interferometer arm , , as measured by the wlim for the initial undamaged sample as well as the absorbance after decay at the burn center and position of the burn . a 4-peak gaussian fit of the wlim data represents the absorption spectrum and confirms the wlim result is consistent with spectrometer results .point of the burn profile and at the center of the burn profile . ] for clarity , figure [ fig : wlimabs ] shows only the initial absorbance and two other points on the sample .absorbance measurements were taken at many points along the burn line and averaged .figure [ fig : spacecomp ] shows the absorbance at , the peak of the spectrum , as a function of pixel transverse to the line . also shown is a profile of the burn line as imaged with an imaging microscope .the gaussian fit of both the absorbance sum and the line profile yield the same gaussian widths within experimental uncertainty , showing the wlim can spatially resolve the burn line .one of the benefits of interferometry as compared to ordinary spectroscopy is the ability to simultaneously measure both the real and imaginary parts of the index of refraction .the real part of the index of refraction can be measured using the phase found from the fourier transform of an interferogram ..}\end{aligned}\ ] ] white light interferometric microscope can spatially resolve the change in the complex index of refraction due to photodegradation .the spatially resolved absorbance measurements during decay are found to be consistent with spectrometer measurements .we would like to thank wright patterson air force base and air force office of scientific research ( fa9550- 10 - 1 - 0286 ) for their continued support of this research .benjamin anderson , shiva k. ramini , and mark g. kuzyk .imaging studies of photodamage and self- healing of anthraquinone derivative dye doped polymers . in gregoryexarhos , editor , _ spie laser damage symposium proc ._ , number [ 8190 - 16 ] , boulder , co ,september 2011 .spie .shiva k. ramini , benjamin anderson , and mark g. kuzyk .recent progress in reversible photodegradation of disperse orange 11 when doped in pmma . in gregory exarhos , editor , _ spie laser damage symposium proc ._ , number [ 8190 - 18 ] , boulder co ,september 2011 .spie .f. yakuphanoglu and b.f .senkal . electrical conductivity ,photoconductivity , and optical properties of poly(1,4-diaminoanthraquinone ) organic semiconductor for optoelectronic applications . 
, 19:1193 - 1198 , 2008 .
|
we have developed a white light interferometric microscope ( wlim ) , which can spatially resolve changes in the complex index of refraction , and we apply it to study reversible photodegradation of 1,4-diamino-9,10-anthraquinone doped into pmma . the measured change in absorbance is consistent with standard spectrometer measurements . we show that the wlim can be used as a powerful tool to image the complex refractive index of a planar surface and to detect changes in a material 's optical properties .
|
one of the characteristic traits of quantum mechanics is that not all possible states of a physical system are perfectly distinguishable .this is in stark contrast to the classical world , but enables us to solve cryptographic problems such as key distribution or two - party computation in the noisy - storage model .nevertheless , it is often possible to gain partial knowledge about the state .imagine a physical system is prepared in one out of several possible states chosen with a certain probability .the set of possible states , as well as the distribution are thereby known to us .the goal of _ state discrimination _ is to identify which state was chosen by performing a measurement on the system , whereby our aim is to choose measurements that maximize the average probability of success .this fundamental problem has been studied extensively for the past 30 years , starting with the works of helstrom , holevo and belavkin ( see for a survey of known result ) , and has found many applications in quantum information theory ( see e.g. , ) , cryptography , and algorithms . here, we consider a special twist to the standard state discrimination problem introduced in , in which we obtain additional information after the measurement that may help us to identify the state .this task is easily described in terms of the following game depicted in figure [ fig : game ] : imagine alice chooses a state from a finite set with probability , labeled by what we will call the _ string _ and the _ encoding _ .bob knows as well as the distribution .alice then sends the state to bob .bob may now perform any measurement from which he obtains a classical measurement outcome . afterwards , alice informs him about the encoding .the task of _ state discrimination with post - measurement information _ ( and no memory ) for bob is to identify the string , using the encoding and his classical measurement outcome , where we are again interested in maximizing bob s average probability of success over all measurements he may perform . in it was shown how bounds on this success probability can be used to prove security in the noisy - storage model . naturally , from a cryptographic standpoint it would be useful to know how much the additional information can actually help bob .let and be the maximum average probabilities of success for the problem of state discrimination with and without post - measurement information respectively .note that , since we can always choose to ignore any additional information .we will measure how useful post - measurement information is for bob in terms of the difference in his success probability of course , even in a classical setting post - measurement information can help bob determine the string . 
as a very simple example , suppose that is a single classical bit , and we have only one encoding .imagine that alice chooses and one of the two encodings uniformly at random and sends bob the bit .the states corresponding to this encoding are thus given by where .note that bob now has a randomly chosen bit in his possession and hence .however , he can decode correctly once he receives the additional information and thus , giving us .as has been shown in we always have in the classical world where all states are diagonal in the same basis and orthogonal for fixed .we first provide a general condition for checking the optimality of measurements for our task ( see section [ sec : optimality ] ) .it was shown in that the optimal measurments can be found numerically using semidefinite programming solvers , however in higher dimensions this remains prohibitively expensive .we then focus on the case which is particularly interesting for cryptography , namely when the string is chosen uniformly and independently from the encoding .first , we provide upper and lower bounds for the success probability ( section [ sec : lowerbound ] and [ sec : upperbound ] ) . in section [ sec : tightbounds ], we then show that for a large class of encodings ( so - called _ clifford encodings _ ) our lower bound is in fact tight .we thereby explicitely provide the optimal measurements for clifford encodings .the class of encodings we consider includes any encodings into two orthogonal pure states in dimension such as the well - known bb84 encodings , as well as the case where we have two possible strings and encodings which can be reduced to a problem in dimension .it was previously observed that for bb84 encodings post - measurement information was useless .here , we see that this is no mere accident , and give a general condition for when post - measurement information is useless for clifford encodings .we continue by showing that for clifford encodings , we can always perform a relabeling of the strings depending on the encoding such that we obtain a new problem for which post - measurement information is indeed useless .this is particularly appealing from a cryptographic perspective as it means the adversary can not gain any additional knowledge from the post - measurement information .this means that for clifford encodings we no longer need to treat the problem _ with _ post - measurement information any differently , and can instead apply the well - studied machinery of state discrimination .however , we will see that a relabeling that renders post - measurement information useless is impossible when considering a _ classical _ ensemble all commute . 
] .in particular , we will see that as long as we are able to gain some information about the encoded string without waiting for the post - measurement information , then classically we can not hope to find a non - trivial relabeling that makes post - measurement information useless .we thereby focus on the case of encodings a single bit into two possible encodings in detail .curiously , we will show this by relating the problem to bell inequalities , such as for example the well - known chsh inequality .this suggests that the usefulness of post - measurement information forms another intriguing property that distinguishes the quantum from the classical world .before investigating the use of post - measurement information , we derive general conditions for the optimality of measurements for our task .we also provide a general bound on the success probability when the distribution over is uniform ( i.e. , ) and independent of the choice of encoding . when considering state discrimination with post - measurement information , we can without loss of generality assume that bob performs a measurement whose outcomes correspond to vectors where each entry corresponds to the answer that bob will give when he later learns which one of the possible encodings was used .that is , when the encoding was , bob will output the guess of the vector . in it was noted that the average probability that bob outputs the correct guess when given the post - measurement information maximized over all possible measurements ( povms ) can be computed by solving the following semidefinite program ( sdp ) .the primal of this sdp is given by maximize & + & , where by forming the lagrangian , we can easily compute the dual of this sdp ( see e.g. ( * ? ? ?* appendix a ) ) which is given by minimize & .sdps can be solved in polynomial time ( in the input size ) using standard algorithms , which also provide us with the optimal measurement operators .however , with the sdp formalism in mind , it is now also easy to provide necessary and sufficient conditions for when a set of measurement operators is in fact optimal .similar conditions were derived for the case of state discrimination _ without _ post - measurement information .a proof can be found in the appendix .[ lem : conditions ] a povm with operators is optimal for state discrimination with post - measurement information for the ensemble if and only if the following two conditions hold : 1 . is hermitian . 
for all .we now derive a simple upper bound on the success probability of state discrimination with post - measurement information when is a product distribution , and the string is chosen uniformly at random ( i.e., ) .we will use a trick employed by ogawa and nagaoka in the context of channel coding which was later rediscovered in the context of state discrimination .a proof can be found in the appendix .let be the number of possible strings , and suppose that the joint distribution over strings and encodings satisfies , where the distribution is arbitrary .then \ , \end{aligned}\ ] ] for all , where , and .note that the bound on the r.h.s contains very many terms , and yet our normalization factor is only .nevertheless , for many interesting examples we can obtain a useful bound this way , by choosing to be sufficiently large .similarly , if is chosen uniformly at random and independent of the encoding , we can find a lower bound to .the idea behind this lower bound is to subdivide the problem into a set of smaller problems which we can solve using standard techniques from state discrimination .note that without loss of generality , we can label the elements of that we wish to encode from , where we let .the vector can thus be written analogously as a vector .we now partition the set of all possible such vectors as follows .consider a shorter vector of length , that is , . with every such vector , we associate the partition note that and if we have .the union of all such partitions gives us the set of all possible vectors , that is , with every partition we can now associate a standard state discrimination problem _ without _ post - measurement information in which we try to discriminate states such that .that is , the set of states is given by and is the uniform distribution .note that the original problem of state discrimination where we do not receive any post - measurement information corresponds to the partition given by , where we always give the same answer no matter what the post - measurement information is going to be . as we show in the appendix [ lem : lowerbound ] the success probability _ with _ post - measurement information is at least as large as the success probability of a derived problem _ without _ post - measurement information ,i.e. , in particular , this allows us to apply any known lower bounds for the standard task of state discrimination to this problem .curiously , we will see that there exists a large class of problems for which this bound is tight , even though , that is , even though post - measurement information is useful .we now consider a very special class of problems called _ clifford encodings _ , for which we can determine the optimal measurement explicitly . in this problem, we will only ever encode a single bit chosen uniformly at random independent of the choice of encoding , and take dimensional states of the form where are generators of the clifford algebra , that is , anti - commuting operators for . ] satisfying for all .we also assume that the vector satisfies and . 
the distribution over encodings can be arbitrary .using the fact that the operators anti - commute , it is not hard to see that for and the latter condition then ensures that is a valid quantum state , that is , is positive semi - definite satisfying .the clifford algebra has a unique representation by hermitian matrices on qubits ( up to unitary equivalence ) which we fix henceforth .this representation can be obtained via the famous jordan - wigner transformation : for , where we use , and to denote the pauli matrices .we also use .note that in dimension , these operators are simply the pauli matrices , and and _ any _ encoding of the bit into two orthogonal pure states is of the above form .a simple example , is the bb84 encoding where we encode the bit into the computational basis labeled by and into the hadamard basis labeled by .furthermore , if we have only two possible strings and encodings , we can always reduce the problem to dimension . in higher dimensions ,encodings of the above form were suggested for the use in cryptographic protocols .we now first examine the setting of state discrimination _ without _ post - measurement information , which will provide us with the necessary intuition .again , we use to denote the number of possible encodings .recall the average state from for the vector , which tells us for every possible encoding which bit appears in the sum . we furthermore define the complementary vector , that is , . as a warmup ,suppose we are given and chosen uniformly at random and wish to determine which one . clearly , this is an example of state discrimination _ without _ post - measurement information , which can also be written as an sdp .the primal is of the form maximize & , + & + & .its dual is easily found to be minimize & , + & .analogous to lemma [ lem : conditions ] with one can derive optimality conditions which for the case of state discrimination were previously obtained in . in our casethey tell us that must be hermitian , and is a feasible dual solution .all we have to do is thus to guess an optimal measurement , and use these conditions to prove its optimality .consider the operators where is the normalized average vector note that since the generators of the clifford algebra anti - commute , we have that and .hence , these operators do form a valid measurement . in the appendix, we derive two lemmas which show that is hermitian ( lemma [ lem : qsum ] ) and satisfies for all ( lemma [ lem : qsum ] and [ lem : eigenvalues ] ) , where is the largest eigenvalue of . ] , which are the conditions we needed for optimality. all proofs can be found in the appendix .the measurements given in are optimal to discriminate from chosen with equal probability . we are now ready to determine the optimal measurements for the case _ with _ post - measurement information .first of all , recall from lemma [ lem : lowerbound ] that we can subdivide our problem into smaller parts by partitioning the set of strings . 
applied to the present case ,these partitions are simply given by where for simplicity we here use the vector itself to label the partition .note that by lemma [ lem : lowerbound ] we thus have that we show in the appendix that this bound is in fact tight .[ lem : tightbound ] for clifford encodings and post - measurement information is useless if and only if the maximum on the r.h.s .is attained by .note that the optimal measurement is thus given by for the vector maximizing the r.h.s of , and letting all other .this shows that for our class of problems the problem of finding the optimal measurement can be simplified considerably and is easily evaluated .it is a very useful consequence of our analysis that for any cryptographic application that makes use of such encodings , we can always perform a relabeling of states such that post - measurement information becomes useless .more precisely , we will associate with the new all vector and with the new vector .that is , for the optimal vector we let clearly , by lemma [ lem : tightbound ] we then have for that as desired .we now consider a small example that illustrates how our statement applies to the case where we have only two possible encodings into two orthogonal pure states in dimension , and we choose the encoding uniformly at random ( ) .a simple example is encoding into the bb84 bases , where we pick the computational basis for and the hadamard basis for .we now show that in two dimensions , post - measurement information is useless if and only if the angle between the bloch vectors for the states and obeys as illustrated in figures [ fig : piuseless ] and [ fig : piuseful ] . .the dashed line corresponds to the bloch vector of the optimal measurement using post - measurement information consisting of two rank one projectors and , which is the same measurement one would make for standard state discrimination .we output the same bit , no matter what encoding information we receive . ] .the dashed line corresponds to the bloch vector of the optimal measurement using post - measurement information consisting of two rank one projectors and , which is the measurement one would make in standard state discrimination , if we were to distinguish from .which bit we output depends on the post - measurement information we receive . ]note that in this example the average states are given by the two partitions we are considering are and .let and be the bloch vectors corresponding to the states and respectively .we have from lemma [ lem : eigenvalues ] that \frac{1}{2}\left({\mathbb{i}}+ \|v_0 - v_1\|_2\right ) & \mbox { for } \vec{x } = ( 0,1)\ .\end{array } \right.\end{aligned}\ ] ] hence , by lemma [ lem : tightbound ] post - measurement information is useless if and only if since for pure states , we have and and thus holds if and only if . the optimal measurement is again given by . note that this is rather intuitive , since for partition we always give the same answer , no matter what post - measurement information we receive .we saw above that for the case of clifford encodings even if post - measurement information was useful for the original problem , that is , , we could always perform a relabeling to obtain a new problem for which post - measurement information is useless .we now show that this is a unique quantum feature , and is not present in analogous classical problems as long as we are able to gain some information even without post - measurement information , i.e. 
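the two-dimensional example above can be checked numerically. for two encodings of a bit into orthogonal pure qubit states whose bloch vectors subtend an angle theta, the sketch below computes the optimal success probability for each of the two partitions as a helstrom measurement on the corresponding averaged states, and reports whether post-measurement information helps; it is a sketch of this special case only, not of the general semidefinite program.

import numpy as np

# pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def rho(bloch):
    """qubit density matrix from a bloch vector (vx, vz)."""
    return 0.5 * (id2 + bloch[0] * sx + bloch[1] * sz)

def helstrom(rho_a, rho_b):
    """optimal probability of distinguishing two equiprobable states."""
    eigs = np.linalg.eigvalsh(rho_a - rho_b)
    return 0.5 + 0.25 * np.sum(np.abs(eigs))

theta = np.pi / 2          # angle between the two encoding directions (bb84: pi/2)
v0 = np.array([0.0, 1.0])                      # bit 0, encoding b = 0
v1 = np.array([np.sin(theta), np.cos(theta)])  # bit 0, encoding b = 1; bit-1 states are antipodal

# averaged states for the two partitions
rho_00 = 0.5 * (rho(v0) + rho(v1))     # answer 0 for both encodings
rho_11 = 0.5 * (rho(-v0) + rho(-v1))
rho_01 = 0.5 * (rho(v0) + rho(-v1))    # answer 0 if b = 0, answer 1 if b = 1
rho_10 = 0.5 * (rho(-v0) + rho(v1))

p_no_pi = helstrom(rho_00, rho_11)               # ignore the post-measurement information
p_pi = max(p_no_pi, helstrom(rho_01, rho_10))    # best partition for this two-encoding case
print(p_no_pi, p_pi,
      "post-measurement information useless" if np.isclose(p_pi, p_no_pi) else "useful")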
, .we thereby call a problem _ classical _ if and only if all states commute .we again focus on the case where we wish to encode a single bit .let be a projector onto the support of . for simplicity, we will assume in the following that for all encodings , and that the projectors are of equal rank .we also assume that .it is straightforward to extend our argument to a more general case , but makes it more difficult to follow our idea . in (* lemma 5.1 ) it was shown that if = 0 ] is the probability that they return answers and given questions and , and the maximization is over all states and measurements allowed in a particular theory .classically , we have in a quantum world , however , alice and bob can achieve more general non - local games are of course possible , where we may have a larger number of questions and answers , and the rules of the game may be more complicated . of central importance to us will be the fact that if alice s ( or bob s ) measurements commute , then there exists a classical strategy that achieves the same winning probability ( see e.g. ) .we now use this fact to prove our result . to explain the main idea behind our construction, we focus on the case where we only have two possible encoding .that is , and .we also assume that the bit , as well as the encoding is chosen uniformly and independently at random .the states defining our problem are thus , , and .we again consider the two partitions labeled by given by as before , we can associate a standard state discrimination problem with each of these partitions . for the first partition as wish to discriminate between the states and specified by and where we are given one of the two states with equal probability .let denote the success probability of solving this problem , maximized over all possible measurements .note that our condition of being able to gain some information in the state discrimination problem corresponds to having for the second partition , we wish to discriminate between and from and , again given with equal probability .let denote the corresponding success probability for the second partition .note that since we have only two possible partitions here constructed in the way outlined in section [ sec : tightbounds ] , our goal of showing that there exists no relabeling that makes post - measurement information useless can be rephrased as showing that .we now show that these two state discrimination problems arise naturally in the chsh game .in particular , we show in the appendix that there exists a strategy for alice and bob to succeed at the chsh game with probability , where alice s measurements are given by the projectors and . however , recall that if the ensemble of states is classical the projectors all commute , and hence there exists a classical strategy for alice and bob that also achieves a winning probability of .hence , by we must have using this implies , and hence the relabelling corresponding to the second partition can not make post - measurement information useless . to summarize weobtain that is called non - trivial . ] for the case of two encodings of a single bit chosen uniformly at random ( i.e. , ) , which do allow us to gain some information even without post - measurement information ( ) , there exists no non - trivial relabeling that renders post - measurement information useless .note that if we are able to gain some information in both state discrimination problems , i.e. 
, the preceding discussion also implies that , that is , post - measurement information is never useless .bounds on bell inequalities corresponding to bounds on the maximum winning probability that can be achieved in a classical world can thus allow us to place bounds on how well we can solve state discrimination problems _ without _ post - measurement information .this is in stark contrast to the quantum setting .for example , for the bb84 encodings it is not hard to see that , and hence post - measurement information is always useless . yet , there exist classical encodings for which but .to analyze the case of multiple encodings , we have to consider more complicated games than the one obtained from the chsh inequality .a natural choice is to consider games in which bob has to solve different state discrimination problems corresponding to different partitions of the vectors depending on his question in the game . to make a fully general statementwe would like to include all possible partitions .clearly , however the above approach can also be used to place bounds on the average of success probabilities for a subset of partitions by defining a game with less questions , and evaluating it s maximum classical winning probability .our work raises several immediate open questions .first of all , can we obtain sharper bounds ?since solving an sdp numerically is still very expensive in higher dimensions , it would also be interesting to prove bounds on how well generic measurements such as the square - root measurement ( also known as the pretty good measurement ) perform .the pretty good measurement is a special case of belavkin s weighted measurements , which was already used in its cube weighted form in to provide bounds on the state discrimination with post - measurement information . such bounds have most recently been shown by tyson for standard state discrimination . yet , no good bounds are known on how well such measurements perform for our task .more generally , it would be very interesting to see whether one can adapt the iterative procedures investigated in to find optimal measurements for the case of standard state discrimination without post - measurement information to this setting .concerning such iterative procedures , we would like to draw special attention to the recent work by tyson generalizing monotonicity results for such iterates , which could be applied here .naturally , it would be very interesting to know if our results for clifford encodings can be extended to a more general setting .our discussion of classical ensembles shows that there exist problems for which no matter what relabeling we perform , and hence we can not hope that a similar statement holds in general .nevertheless , it would be interesting to obtain necessary and sufficient conditions for when post - measurement is already useless , or otherwise can be made useless by performing a relabeling .dg thanks john preskill and caltech for a summer undergraduate research fellowship .sw thanks robin blume - kohout and sarah croke for interesting discussions .sw is supported by nsf grants phy-04056720 and phy-0803371 .38 natexlab#1#1bibnamefont # 1#1bibfnamefont # 1#1citenamefont # 1#1url # 1`#1`urlprefix[2]#2 [ 2][]#2 , in _ _( ) , pp . . , * * , ( ) . , , ( ) , . , , , * * , ( pages ) ( ) , http://link.aps.org/abstract/prl/v100/e220502. , * * , ( ) . , * * , ( ) , . , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , * * , ( ) . , * * , ( ) . , * * , ( ) . 
, , ,, * * , ( ) . , ph.d . thesis , ( ) , . , _ _ ( , ) . , * * , ( ) . , * * , ( ) . , * * ( ) . , * * , ( ) , ., , , * * ( ) . , * * , ( ) .( ) , . , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . ,* * , ( ) . , * * , ( ) . , , , * * , ( ) . , , , * * , ( ) . , * * , ( ) .( ) , . , ph.d. thesis , ( ) . , _ _ ( , ) . in this appendix , we provide the technical details of our claims . for ease of reading , we thereby provide the proofs together with the statement of the lemmas .a povm with operators is optimal for state discrimination with post - measurement information for the ensemble if and only if the following two conditions hold : 1. is hermitian . for all .suppose first that the two conditions hold .note that condition ( 2 ) tells us that is a feasible solution , that is , it satisfies all constraints for the dual sdp . by weak duality of sdpswe thus have , and from condition ( 1 ) we also have that .hence the povm forms an optimal solution for the sdp .conversely , suppose that is an optimal solution for the primal sdp .let be the optimal solution for the dual sdp .note that this means that already satisfies condition ( 2 ) , and all that remains is to show that has the desired form given by condition ( 1 ) . since is a feasible solution for the primal sdp , we have by slater s condition that the optimal values and are equal , i.e. , . using the fact that and that the trace is cyclic we thus have since ( equivalently . ) , and for all we have that all the terms in the sum are positive and hence we must have for all that . again using the fact that the two operators are positive semidefinite , and the cyclicity of the trace we thus have for the optimal solution that summing the l.h.s . over all and noting that then gives us condition ( 1 ) .let be the number of possible strings , and suppose that the joint distribution over strings and encodings satisfies , where the distribution is arbitrary .then \ , \end{aligned}\ ] ] for all , where , and .note that since is operator monotone for ( * ? ? ?* theorem v.1.9 ) we have using the fact that we hence obtain \\ & = \frac{1}{n } { \mathop{\mathsf{tr}}\nolimits}\left[\left(\sum_{\vec{x } } \rho_{\vec{x}}^{\alpha}\right)^{\frac{1}{\alpha}}\right]\ , \end{aligned}\ ] ] as promised . the success probability _ with _ post - measurement information is at least as large as the success probability of a derived problem _ without _ post - measurement information , i.e. , this follows immediately from the discussion by noting that now show that our claim for . our goal is to evaluate where the maximization is taken over all states . using the fact that the set of operators forms an orthonormal ( with respect to the hilbert - schmidt inner product ) basis for the hermitian matrices we can write since for , and we can rewrite this gives us where and denotes the euclidean inner product . 
since if and only if we have that the maximum in is attained for with which gives our claim .the argument for is analogous .let be the string that achieves the optimum on the r.h.s of .we now claim that is an optimal solution to the sdp for the problem of state discrimination _ with _ post - measurement information .first of all , note that lemma [ lem : qsum ] gives us that is hermitian .we then have by lemma [ lem : eigenvalues ] that _ for all _ possible .our claim now follows from lemma [ lem : conditions ] , and by noting that for the partition we will always give the same answer , no matter what post - measurement information we receive later on .
|
we consider a special form of state discrimination in which after the measurement we are given additional information that may help us identify the state . this task plays a central role in the analysis of quantum cryptographic protocols in the noisy - storage model , where the identity of the state corresponds to a certain bit string , and the additional information is typically a choice of encoding that is initially unknown to the cheating party . we first provide simple optimality conditions for measurements for any such problem , and show upper and lower bounds on the success probability . for a certain class of problems , we furthermore provide tight bounds on how useful post - measurement information can be . in particular , we show that for this class finding the optimal measurement for the task of state discrimination _ with _ post - measurement information does in fact reduce to solving a different problem of state discrimination _ without _ such information . however , we show that for the corresponding _ classical _ state discrimination problems with post - measurement information such a reduction is impossible , by relating the success probability to the violation of bell inequalities . this suggests the usefulness of post - measurement information as another feature that distinguishes the classical from a quantum world .
|
[ sec:1 ] the kriging model based on the stationary random process with the estimation statistics with an unknown constant mean and variance and some correlation function for the asymptotic solution ^ 2\ } = \sigma^2\ ] ] has the well - known ( co - ordinate independent ) least - squares disjunction .the aim of the paper is to find ( on computer ) a co - ordinate dependent disjunction of kriging model for non - asymptotic solution ^ 2\ } = \sigma^2 \ .\ ] ]following we get since the correlation function estimator must be non - increasing then only non - decreasing outcomes of the experimental semi - variogram should be taking into consideration for since then and the correlation function estimator is also bounded by us consider a set of frozen in time stationary random processes since from the central limit theorem holds {t = n+1}-e\{v_j\}_{t = n+1 } } { \sqrt{e\{v_j^2\}_{t = n+1}-e^2\{v_j\}_{t = n+1}}}\sqrt{n } \rightarrow n(0;1 ) \nonumber \\ & \vdots & \nonumber \\u_{t = n+k } & = & \frac{\left[\frac{1}{n}\sum v_j\right]_{t = n+k}-e\{v_j\}_{t = n+k } } { \sqrt{e\{v_j^2\}_{t = n+k}-e^2\{v_j\}_{t = n+k}}}\sqrt{n } \rightarrow n(0;1 ) \nonumber\end{aligned}\ ] ] an independent set of theirs outcomes \ ] ] for also must follow the standard normal distribution .let the values on the latter asymmetric side be supposed to be response averages $ ] of frozen in time stationary random processes based on with the correlation function ^ 2 } , & \qquad \mbox{for}~~\delta_{ij}>0,\\ + 1 , & \qquad \mbox{for}~~\delta_{ij}= 0,\\ \end{array } \right.\ ] ] with unknown mean and variance solving on computer the least - squares constraint in one unknown for every ^ 2\ } = \sigma^2 \ \right]_t \ , \ ] ] substituting the solution into \ ] ] we get , for every , the significance level for computer k - s test
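the ingredients referred to above can be illustrated with a short sketch: a classical experimental semi-variogram of an evenly sampled series, the correlation-function estimate it implies under an assumed stationary variance (bounded, as noted in the text), and a kolmogorov-smirnov test of standardized values against the standard normal distribution. the sketch uses textbook estimators and scipy's kstest on synthetic data; it does not reproduce the specific non-asymptotic least-squares construction of the paper.

import numpy as np
from scipy import stats

def experimental_semivariogram(z, max_lag):
    """classical estimator gamma(h) = 0.5 * mean of (z[i+h] - z[i])**2 for h = 1..max_lag."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(size=500)) * 0.05 + rng.normal(size=500)  # synthetic correlated series

gamma = experimental_semivariogram(z, max_lag=50)
sigma2 = np.var(z)                                       # stationary-variance estimate
rho = np.clip(1.0 - gamma / sigma2, -1.0, 1.0)           # implied, bounded correlation estimate

# kolmogorov-smirnov test of standardized values against n(0, 1)
u = (z - np.mean(z)) / np.std(z)
ks_stat, p_value = stats.kstest(u, "norm")
print(rho[:5], ks_stat, p_value)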
|
the aim of the paper is to derive the numerical least - squares estimator for the mean and variance of a random variable . in order to do so , the following questions have to be answered : ( i ) what is the statistical model for the estimation procedure ? ( ii ) what are the properties of the estimator , like optimality ( in which class ) or asymptotic properties ? ( iii ) how does the estimator work in practice , and how does it compare to competing estimators ?
|
in this paper we turn upon the seminal harris and todaro work , which together with todaro is considered one of the starting points of the classic rural - urban migration theory . the hypothesis and predictions of harris - todaro modelhave been subjected to econometric evaluation and have been corroborated by several studies .the key hypothesis of harris and todaro are that migrants react mainly to economic incentives , earnings differentials , and the probability of getting a job at the destination have influence on the migraton decision . in other words , these authors posit that rural - urban migration will occur while the urban expected wage exceed the rural wage . from this _ crucial assumption _ , as denominated by harris - todaro , is deduced that the migratory dynamics leads the economic system toward an equilibrium with urban concentration and high urban unemployment . in our previous works we analyzed the rural - urban migration by means of an agent - based computational model taking into account the influence of the neighborhood in the migration decision .the inclusion of the influence of neighbors was done via an ising like model .the economic analogous to the external field in the ising hamiltonian was the differential of expected wages between urban and rural sectors .therefore , in theses works the crucial assumption of harris and todaro were taken for granted .now , we are motivated by the following question : can the crucial assumption and equilibrium with urban concentration and urban unemployment obtained from the original harris - todaro model be generated as emergent properties from the interaction among adaptative agents ? in order to answer this question we implemented an agent - based computational model in which workers grope for best sectorial location over time in terms of earnings .the economic system simulated is characterized by the assumption originally made by harris and todaro .the paper is arranged as follows .section [ haristodaromodel ] describes the analytical harris - todaro model showing its basic equilibrium properties . in section [ harristodaroagentbasedmodel ]we present the implementation of the computational model via an agent - based simulation and compare its aggregate regularities with the analytical equilibrium properties .section [ conclusion ] shows concluding remarks .harris and todaro studied the migration of workers in a two - sector economic system , namely , rural sector and urban sector .the difference between these sectors are the type of goods produced , the technology of production and the process of wage determination .the rural sector is specialized in the production of agricultural goods .the productive process of this sector can be described by a cobb - douglas production function : where is the production level of the agricultural good , is the amount of workers used in the agricultural production , and are parametric constants .similarly , the urban sector also has its productive process described as cobb - douglas production function : , with and . except where it is indicated , the results presented in this section are valid for this general case . the cobb - douglas form is a standard assumption about technology . 
] where is the production level of the manufactured good , is the quantity of workers employed in the production of manufactured goods , and are parametric constants .both goods and labor markets are perfectly competitive .nevertheless , there is segmentation in the labor market due to a high minimum urban wage politically determined . in the rural sector ,the real wage , perfectly flexible , is equal to the marginal productivity of labor in this sector : ) , with respect to multiplied by . ] where is the real wage and is the price of the agricultural good , both expressed in units of manufactured good . in the urban sector , a minimum wage , , is assumed fixed institutionally at a level above equilibrium in this labor market .it can be formalized as ) , with respect to . ] where is the amount of workers in the urban sector .the relative price of the agricultural good in terms of the manufactured good , , varies according to the relative scarcity between agricultural and manufacturated goods . then , denotes a function in their work not a constant value as used by us . ] where and are a parametric constants . is the elasticity of with respect to the ratio .the overall population of workers in the economy is , which is kept constant during the whole period of analysis . by assumptionthere are only two sectors and rural prices are wholly flexible , which implies that there is full employment in the rural area , i.e. , all workers living at the rural sector are employed at any period . then at any periodthe following equality is verified : given a parametric constant vector , an initial urban population , and a minimum wage one can calculate the temporary equilibrium of the economic system by using eqs .( [ ya]-[nanu ] ) . from eq .( [ wm ] ) one can find the employment level at the manufacturing sector replacing eq .( [ nm ] ) in eq .( [ ym ] ) we get the production level of the manufacturing sector from eq .( [ nanu ] ) one can obtain the relation which is used with eq .( [ ya ] ) to obtain the agricultural production by using eqs .( [ p ] ) , ( [ ymfunc ] ) and ( [ yafunc ] ) the terms of trade are determined ^\gamma . \label{pfunc}\ ] ] finally , by using eqs .( [ wa ] ) , ( [ na ] ) and ( [ pfunc ] ) , the rural wage in units of manufacturated good is obtained in sum , the vector configures a temporary equilibrium that might be altered whether occurs a migration of workers , induced by the differential of sectorial wages , which changes the sectorial distribution of overall population .harris and todaro , in determining the long run equilibrium , i.e. , the absence of a net rural - urban migratory flow , argue that the rural workers , in their decision on migrating to the urban area , estimate the expected urban wage , , defined as : the ratio , which is the employment rate , is an estimative of the probability that a worker living at urban sector gets a job in this sector .as mentioned before , the key assumption of the model of harris and todaro is that there will be a migratory flow from the rural to the urban sector while the expected urban wage is higher than the rural wage .thus , the long run equilibrium is attained when the urban worker population reaches a level such that the expected urban wage equates the rural wage : this equality is known in the economic literature as the _ harris - todaro condition_. harris and todaro argue that the differential of expected wages in eq .( [ wuwa ] ) can be a constant value . 
when this differential reaches , the net migration ceases .generalized harris - todaro condition _ can be expressed as follows : the level of the urban population that satisfies the eq .( [ wuwadelta ] ) , i.e. , the equilibrium urban share , is determined from the solution of the equation resulting from substitution of equations ( [ wafunc ] ) , ( [ wu ] ) in eq .( [ wuwadelta ] ) : the solution of eq .( [ geneq ] ) is parametrized by the vector .harris and todaro , in order to evaluate the stability of the long run equilibrium , postulate a mechanism of adjustment that is based on the following function of sign preservation : the differential equation that governs the state transition in the model of harris and todaro is obtained by replacing equations ( [ wafunc ] ) , ( [ wu ] ) in eq .( [ dndt ] ) . based on this postulated adjustment process , harris and todaro show that the long run equilibrium is globally asymptotically stable .this means that the economy would tend to long run equilibrium with unemployment in the urban sector generated by the presence of a relatively high minimum wage for all possible initial conditions . from now on we will refer to the long run equilibrium simply as equilibrium . ) for different values of .squares : urban share ; circles : urban unemployment rate .fixed parameters used are , , , , and .,width=283 ] based on the numerical solutions of eq .( [ geneq ] ) one can evaluate the impact that the variation of the minimum wage and the elasticity of the terms of trade on the equilibrium . in figure [ nuwmn ]we see that under the hypothesis of a cobb - douglas technology , the equilibrium urban share , , does not depend on the minimum wage .however , changes in the value of reduces the labor demand on the manufacturing sector what results in higher unemployment rates in the equilibrium . ) for different values of .squares : urban share ; circles : urban unemployment rate , .fixed parameters used are , , , , and , width=283 ] in turn , as seen in figure [ nugamman ] , changes in the elasticity of the terms of trade alter slightly the equilibrium urban share and unemployment rate . a net migration toward urban sector shift the terms of trade to higher values. the greater the greater this shift , what cause an increase in the rural wage in units of manufacturing good , becoming the urban sector less attractive .in this section we describe the implementation of the computational model we proposed , as well as the aggregate patterns obtained numerically and the comparison with the respective analytical results .initially , workers are randomly placed in a square lattice with linear dimension .the reference values of the parameters used for these simulations are the same done to evaluate the equilibrium of the harris - todaro model , namely , , , , , and .the value of the minimum wage used is and the initial urban fraction of the total population is , where is the normalized urban population also called urban share .the initial value is in agreement with historical data of developing economies . given these parameters , one can calculate the vector which characterizes temporary equilibrium of the system by using eqs .( [ nm]-[wafunc ] ) . by using eq .( [ nm ] ) , the employment level of the urban sector , , is obtained . 
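the calculation of a temporary equilibrium described above can be sketched as follows. since the parameter symbols and reference values are lost in the extracted text, the cobb-douglas exponents, scale factors, elasticity, minimum wage and total population used here are illustrative placeholders; only the structure of the calculation (urban employment from the minimum wage, sectorial outputs, terms of trade, rural wage and expected urban wage) follows the model.

# illustrative parameter values (placeholders, not the reference vector of the paper)
alpha_m, a_m = 0.7, 1.0      # urban cobb-douglas exponent and scale
alpha_a, a_a = 0.3, 1.0      # rural cobb-douglas exponent and scale
gamma, rho_p = 0.5, 1.0      # elasticity and scale of the terms of trade
w_min = 1.2                  # institutionally fixed minimum urban wage
n_total = 10_000             # overall (constant) population of workers

# urban employment follows from equating the minimum wage to the marginal
# productivity of labor in manufacturing: w_min = alpha_m * a_m * n_m**(alpha_m - 1)
n_m = (alpha_m * a_m / w_min) ** (1.0 / (1.0 - alpha_m))

def temporary_equilibrium(n_u):
    """sectorial outputs, terms of trade, rural wage and expected urban wage
    for a given urban population n_u (full employment in the rural sector)."""
    n_a = n_total - n_u
    y_m = a_m * min(n_m, n_u) ** alpha_m                 # manufacturing output
    y_a = a_a * n_a ** alpha_a                           # agricultural output
    p = rho_p * (y_m / y_a) ** gamma                     # terms of trade
    w_a = p * alpha_a * a_a * n_a ** (alpha_a - 1.0)     # rural wage (units of manufactured good)
    w_u_expected = w_min * min(n_m, n_u) / n_u           # employment rate times minimum wage
    return y_m, y_a, p, w_a, w_u_expected

for share in (0.2, 0.5, 0.8):
    print(share, temporary_equilibrium(share * n_total))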
if all workers in the urban sector are employed and each individual earns the wage given by the manufacturing marginal productivity , .otherwise , there will be a fraction of workers employed , which earn the minimum wage , , and workers unemployed , which earn a wage .each worker can be selected to review his sectorial location with probability , called activity .therefore , in each time step only a fraction of workers becomes potential migrants , going through the sectorial location reviewing process .potential migrants will determine their satisfaction level of being in the current sector by comparing their earnings , , among nearest neighbors .the potential migrant starts the comparison process with a initial satisfaction level .when the satisfaction level is added in one unit ; if , is diminished in one unit ; if , does not change .after the worker has passed through the reviewing process his / her satisfaction level is checked .the migration will occur only if , what means that the worker s earnings is less than the most of his / her nearest neighbors .after all the potential migrants complete the reviewing process and have decided migrate or not , a new configuration of the system is set . therefore , once again a new temporary equilibrium of the system is calculated by using eqs .( [ ymfunc]-[wafunc ] ) .the whole procedure is repeated until a pre - set number of steps is reached .it is important to emphasize that is kept constant throughout the simulation .its given by eq .( [ nm ] ) which depends on the technological parameters , , and the minimum wage , , which are constants too . in this sectionwe develop the analysis of the long run aggregate regularities of harris - todaro agent - based computational model .these long run properties will be compared between the solution of the analytical model and simulations we ran .as function of simulation steps .from top to bottom the initial urban shares are .,width=283 ] figures [ nut ] , [ unempt ] and [ wu - wa ] show the basic characteristics of the transitional dynamics and long run equilibrium generated by simulations .when the economic system has a low initial urban share , or , there is a net migration toward urban sector .this migration takes the urban sector from a full employment situation to an unemployment one .the positive differential of expected wages that pulls workers to the urban sector diminishes .however , if the economic system initiates with a high urban share , , or there is net flow of migration toward rural sector in such a way that the unemployment rate of the urban sector decreases . in this case, the differential of expected wages is negative . as function of simulation steps . from topto bottom the initial urban shares are .,width=283 ] in an economy mainly rural ( ) , the transitional dynamics characterized by a continuous growth of population of the urban sector with a differential of expected wages relatively high is followed by the stabilization of rural - urban differential of expected wages . 
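a minimal sketch of one reviewing step of the agent-based rule described above, on a square lattice with von neumann neighbours and periodic boundaries. earnings are taken as given for the step (in the full model they come from the temporary equilibrium), the activity probability and lattice size are placeholders, and the exact handling of ties in the satisfaction rule is an assumption because the comparison symbols are lost in the extracted text.

import numpy as np

rng = np.random.default_rng(1)
l = 50                 # linear lattice dimension (placeholder)
activity = 0.1         # probability that a worker reviews its sectorial location

sector = rng.integers(0, 2, size=(l, l))        # 0 = rural, 1 = urban
earnings = rng.random((l, l))                   # earnings of each worker at this step (given)

def review_step(sector, earnings):
    """one synchronous reviewing step: selected workers migrate to the other
    sector if most of their four nearest neighbours earn more than they do."""
    new_sector = sector.copy()
    reviewers = rng.random(sector.shape) < activity
    satisfaction = np.zeros(sector.shape, dtype=int)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):        # von neumann neighbourhood
        neighbour_earnings = np.roll(earnings, shift, axis=(0, 1))
        satisfaction += np.sign(earnings - neighbour_earnings).astype(int)
    migrate = reviewers & (satisfaction < 0)
    new_sector[migrate] = 1 - sector[migrate]
    return new_sector

sector = review_step(sector, earnings)
print("urban share:", sector.mean())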
in other words , the generalized harris - todaro condition , eq .( [ wuwadelta ] ) , arises as a long run equilibrium result of the agent - based migratory dynamics .figure [ nut ] also shows that even after the urban share has reached an stable average value , there are small fluctuations around this average .therefore , differently from the original harris - todaro model , our computational model shows in the long run equilibrium the reverse migration .this phenomenon has been observed in several developing countries as remarked in ref . . as function of simulation steps . from topto bottom the initial urban shares are .,width=283 ] in figures [ wmalphanu ] , [ wmalphawuwa ] and [ wmalphaunemp ] one can see that for a given value of , the variation of practically does not change the equilibrium values of the urban share , the differential of expected wages and the unemployment rate .however , for a given , higher values of make the urban sector less attractive due the reduction of the employment level .this causes a lower equilibrium urban share , a higher unemployment rate and a gap in the convergence of the expected wages . as function of the technological parameter and the minimum wage .white area is not a valid combination of parameters.,width=321 ] in figures [ wmgammanu ] , [ wmgammawuwa ] and [ wmgammaunemp ] can be seen that for a fixed value of , the equilibrium values of the urban share , the differential of expected wages and unemployment rate do not have a strong dependence with .however , variations in for a fixed , dramatically change the equilibrium values of the variable mentioned before .higher values of generate a lower urban concentration , a higher gap in the expected wages and a higher unemployment rate in the equilibrium . and the minimum wage .white area is not a valid combination of parameters.,width=321 ] as function of the technological parameter and the minimum wage .white area is not a valid combination of parameters.,width=321 ] finally , in figure [ alphaphinu ] is shown that the convergence of migratory dynamics for a urban share , compatible with historical data , is robust in relation to the variation of the key technological parameters , and . as function of the parameter and the minimum wage .white area is not a valid combination of parameters.,width=321 ] as function of the parameter and the minimum wage .white area is not a valid combination of parameters.,width=321 ] as function of the parameter and the minimum wage .white area is not a valid combination of parameters.,width=321 ] as function of the technological parameters and .white area is not a valid combination of parameters.,width=321 ] as function as function of the technological parameters and .white area is not a valid combination of parameters.,width=321 ] as function of the technological parameters and .white area is not a valid combination of parameters.,width=321 ]in this paper we developed and agent - based computational model which formalizes the rural - urban allocation of workers as a process of social learning by imitation .we analyze a two - sectorial economy composed by adaptative agents , i.e. 
, individuals that grope over time for the best sectorial location in terms of earnings . this search is a process of imitation of successful neighbor agents . the dispersed and non - coordinated individual migration decisions , made on the basis of local information , generate aggregate regularities . firstly , the _ crucial assumption _ of harris and todaro , namely the principle that rural - urban migration occurs while the urban expected wage exceeds the rural wage , comes out as a spontaneous upshot of the interaction among adaptive agents . secondly , the migratory dynamics generated by agents that seek to adapt to the economic environment they co - create leads the economy toward a long run equilibrium characterized by urban concentration with urban unemployment . when this long run equilibrium is reached , the generalized harris - todaro condition is satisfied , i.e. , the rural - urban expected wage differential stabilizes . thirdly , the impacts of the minimum wage and of the elasticity of the terms of trade on the long run equilibrium obtained by simulations are in agreement with the predictions of the original harris - todaro model with cobb - douglas technology . finally , the simulations showed an aggregate pattern not found in the original harris - todaro model : there is the possibility of small fluctuations of the urban share around an average value , a phenomenon known as reverse migration . aquino l. espíndola thanks capes for the financial support . jaylson j. silveira acknowledges research grants from cnpq . t. j. p. penna thanks cnpq for the fellowship .
|
the harris - todaro model of the rural - urban migration process is revisited under an agent - based approach . the migration of the workers is interpreted as a process of social learning by imitation , formalized by a computational model . by simulating this model , we observe transitional dynamics with continuous growth of the urban fraction of the overall population toward an equilibrium . such an equilibrium is characterized by the stabilization of the rural - urban expected wage differential ( the generalized harris - todaro equilibrium condition ) , urban concentration and urban unemployment . these classic results , obtained originally by harris and todaro , are emergent properties of our model .
|
among the objects listed in the norad catalog , about have highly elliptical orbits ( heo ) with an eccentricity greater than , mainly in the geostationary transfer orbit ( gto ) .these are satellites , rocket bodies or any kind of space debris . for several years, the computation of trajectories is very well controlled numerically .numerical methods are preferred mainly for their convenience and accuracy , especially when making comparisons with respect to the observations or their flexibility whatever the perturbation to be treated .conversely , analytical theories optimize the speed of calculations , allow to study precisely the dynamics of an object or to study particular classes of useful orbits .however , the calculation of the heo can still be greatly improved , especially as regards the analytical theories .indeed , when we are dealing with this type of orbit , we have to face several difficulties . due to the fact that they cover a wide range of altitudes , the classification of the perturbations acting on an artificial satellite , space debris , etc .( see * ? ? ?* ) changes with the position on the orbit . at low altitude ,the quadrupole moment is the dominant perturbation , while at high - altitude the lunisolar perturbations can reach or exceed the order of the effect .one of the issues concerns the expansion of the third - body disturbing function in orbital elements .the importance of the lunisolar perturbations in the determination on the motion of an artificial satellite was raised by . using a disturbing function truncated to the second degree in the spherical harmonic expansion, he showed that certain long - periodic terms generate large perturbations on the orbital elements , and therefore , the lifetime of a satellite can be greatly affected .later , took into account the third harmonic . introduced the inclination and eccentricity special functions , fundamental for the analysis of the perturbations of a satellite orbit .this enabled him to give in the first general expression of the third - body disturbing function using equatorial elements for the satellite and the disturbing body ; the function is expanded using fourier series in terms of the mean anomaly and the so - called hansen coefficients depending on the eccentricity in order to obtain perturbations fully expressed in orbital elements .it was noticed by that , concerning the moon , it is more suitable to parametrize its motion in ecliptic elements rather than in equatorial elements . indeed , in this frame , the inclination of the moon is roughly constant and the longitude of its right ascending node can be considered as linear with respect to time . in light of this observation , established the disturbing function of an earth s satellite due to the moon s attraction , using the ecliptic elements for the latter and the equatorial elements for the satellite .some algebraic errors have been noticed in , but it is only recently that the expression has been corrected and verified in .the main limitation of these papers is that they suppose truncations from a certain order in eccentricity . generally , the truncation is not explicit because there is no explicit expansion in power of the eccentricity .but in practice , fourier series of the mean anomaly which converge slowly must be truncated and this relies mainly on the dalembert rule ( see * ? ? ?* ) which guarantees an accelerated convergence as long as the eccentricity is small . 
because this is indeed the case of numerous natural bodies or artificial satellites , these expansions of the disturbing function are well suited in many situationshowever , for the orbits of artificial satellites having very high eccentricities , any truncation with respect to the eccentricity is prohibited . investigated this situation .they showed that the series in multiples of the elliptic anomaly , first introduced by and studied later by , converge faster than the series in multiples of any classical anomaly in many cases .this was confirmed by .unfortunately , the introduction of the elliptic anomaly increases seriously the complexity , involving in particular elliptical functions ( see e.g. * ? ? ?in the same paper , they provided the expressions of the fourier coefficients and in terms of hypergeometric functions , coming from the fourier series expansion of the elliptic motion functions in terms of the true anomaly and of the eccentric anomaly , respectively .more discussions and examples can be found in . on the other hand, the expansion must be supple enough to define a trade - off between accuracy and complexity for each situation .to this end , the use of special functions is well suited to build a closed - form analytical model , like in the theory of for a lunar artificial satellite .development can be compact , easy to manipulate and the extension of the theory can be chosen for each case by fixing the limits on the summations .the complexity is relegated in the special functions , knowing that efficient algorithms exist to compute them . in short, we shall use the expression of the disturbing function introduced in and , mixing mainly the compactness of formulation in exponential form and the convergence of series in eccentric anomaly .besides the question of large eccentricities , the other issue concerns the explicit time - dependency due to the motion of the disturbing body . in the classical analytical theory , this is almost always ignored ( see e.g. * ? ? ?* ) while it should be taken into account when constructing an analytical solution , in particular by means of canonical transformations .to do this , the key point is to start from a disturbing function using angular variables which are time linear .this is precisely the motivation to use ecliptic elements instead of equatorial elements for the moon perturbation , as explained above . in this situation , the pde ( partial differential equation ) that we have to solve to construct an analytical theory takes the following form: unfortunately, this mechanics is broken as soon as the fast variable of the satellite motion is no longer the mean anomaly , but the eccentric anomaly . in this case , the equation to solve looks like which admits no exact solution . in this work, we present a closed - form analytical perturbative theory for highly elliptical orbits overcoming all these limitations . only the effect and the third - body perturbations will be considered .the paper is organized as follows . in section[ sec : ham_sys ] , we define the hamiltonian system and we focus on the development of the third - body disturbing function . 
in section[ sec : lie_trans_approach ] , we expose the procedure to normalize the system combining the brouwer s approach and the lie - deprit algorithm including the time dependence .section [ sec : lie_transforms_principle ] is devoted to the determination of generating functions to eliminate the short and long periodic terms due to the lunisolar perturbations ( moon and sun ) .especially , we will see how to solve pde such as by using an iterative process . in section[ sec : solution_complete ] , we present the complete solution to propagate the orbit at any date : transformations between the mean and osculating elements are given . finally , numerical tests are carried out in section [ sec : num_tests ] to evaluate the performances of our analytical solution .in an inertial geocentric reference frame , we consider the perturbations acting on the keplerian motion of an artificial terrestrial satellite ( or space debris ) , induced by the quadrupole moment of the earth and the point - mass gravitational attraction due to the moon ( ) and sun ( ) .the motion equations of the satellite derived from the potential : where is the acceleration vector of the satellite , the gradient operator .the first two terms of the potential are related to the earth s gravity field , with the keplerian term: and the disturbing potential due to the earth oblatness: where is the satellite s radial distance and its latitude , the geocentric gravitational constant , the mean equatorial radius of the earth and are the legendre polynomials of degree defined for }}} ] the secular variations related to each perturbative term of the analytical theory : , , and .their expression are given below .note that , as far as we know , it s the first time that a compact and general relation to compute the secular terms at any degree is proposed for the moon and sun .[ [ j_2xspace - effect ] ] effect + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given that the normalized hamiltonians for and for are similar to , the secular variations are given , respectively , as [ eq : es_j2 ] and [ eq : es_j2c ] \ ; , \\\begin{split } \label{eq : def_wg_j2c } \omega_{g , { j_{2}\xspace}^{2 } } & = { } \frac{3}{2 } \omega_{0 } \gamma_{2}^{2 } \left [ - 2 \left ( 1 - 5 c^{2 } \right ) \left ( 5 + 43 c^{2 } \right ) + 24 \eta \left ( 1 - 3 c^{2 } \right ) \left ( 1 - 5 c^{2 } \right ) \right .. \hspace{1em } - e^{2 } \left ( 25 - 126 c^{2 } + 45 c^{4}\right ) \right ] \ ; , \end{split } \\\label{eq : def_wh_j2c } \omega_{h , { j_{2}\xspace}^{2 } } & = { } \frac{3}{2 } \omega_{0 } \gamma_{2}^{2 } c \left[ 4 \left ( 1 - 10 c^{2 } \right ) + 12 \eta \left ( 1 - 3 c^{2 } \right ) - e^{2 } \left ( 9 - 5 c^{2 } \right ) \right ] \;. \end{aligned}\ ] ] [ [ moon - and - sun - perturbations ] ] moon and sun perturbations + + + + + + + + + + + + + + + + + + + + + + + + + + + consider that the secular part of the lunar perturbations and the solar perturbations can be written with {3b = \astrosun}^{3b = \leftmoon } \;,\ ] ] then , we have : [ eq : dkmoonsdx ] + for ( or ) , we find for the moon case [ eq : es_moon ] and for the sun case [ eq : es_sun ] if the mean elements are known , we can propagate the equation of motions at any instant . beginning to add the long - periodic terms thanks to , the new variables can be expressed in lie series as by proceeding in the same way as in the inverse transformation case , if we consider a canonical transformation up to the order 2 and we discard the terms , we get with . 
hence , add the short - period variations modeled by to gives the osculating elements , solution of the dynamical system : all the derivatives with respect to keplerian elements involved in the lie operator are defined in appendix [ anx : proof_rec ] .in this section , we present some numerical tests to show abilities of the theory . the complete analytical solution described in section [ sec : solution_complete ]was implemented in fortran 90 program apheo ( analytical propagator for highly elliptical orbits ) .all the numerical tests have been realized with the object sylda , an ariane 5 debris in geostationary transfer orbit ( gto ) .the initial orbital elements are given in table [ fig : tle ] , with a semi - axis major km , eccentricity and inclination , perigee altitude km and apogee altitude km , and an orbital period of h. ariane 5 deb [ sylda ] 1 40274u 14062d 14313.65939750 .00023668 00000 - 0 92879 - 2 0 135 2 40274 5.9570 168.6919 7263810 197.5825 109.5543 2.29386099 532 in table [ tab : eff_sec ] we give the values of the secular effects on the satellite s angular variables induced by the effect ( eq . [ eq : es_j2][eq : es_j2c ] ) and the luni - solar perturbations ( eq . [ eq : dkmoonsdx ] truncated at the degree 4 ) , computed from the initial osculating elements .* 1r1em| s[table - format = -2.5e1 ] * 1r2em s[table - format = -4.5e1 ] * 1r2em s[table - format = -4.5e1 ] * 1r2em s[table - format = -4.5e1 ] * 1r2em & keplerian & & & & sun & & moon & + ( r)1 - 1 ( l)2 - 3 ( l)4 - 5 ( l)6 - 7 ( l)8 - 9 _ h & 0 & & -0.833774995391e-07 & & -0.352535863831e-09 & & -0.772650652420e-09 & + t_h & & & -872.20 & & -564.77 & & -257.69 & + _ g & 0 & & 0.165449887355e-06 & & 0.442584087739e-09 & & 0.969432099980e-09 & + t_g & & & 439.54 & & 449.86 & & 205.380 & + _ l & 0.166814278636e-03 & & 0.566636363022e-07 & & -0.382764304828e-09 & & -0.836496682109e-09 & + t_l & 10.4627 & & 1283.4 & & -520.17 & & -238.02 & + in this part , we have sought to evaluate the degree of validity of our analytical model related to each external disturbing body , sketched in figure [ fig : diag ] . as reference solution, we have integrated the motion equations defined in using a fixed step variational integrator at the order 6 .it is based on a runge - kutta nystrm method , fully described in the thesis .this kind of integrators are well - adapted for high elliptical orbits and numerical propagation over long periods . for more details about the variational integrator ,see e.g. 
, , , .for both analytical and numerical propagations , we have assumed that the apparent motion for each disturbing body can be parametrized by a linear precessing model ( see section [ ssec : dyn_model ] ) .the fourier series in multiple of the mean anomaly are expanded up to the order , which is quite enough for external bodies such as the moon and sun .[ [ perturbations - related - to - the - sun ] ] perturbations related to the sun + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let us consider the perturbations of and the sun .since the variations of are proportional to , it is enough to expand the series up to , so .the parameter is kept zero here because the short periodic corrections involved by the time dependence are very small .indeed , we have for example for only few meters in rms for the first order correction and a few centimeters beyond , to be compared to the km of the analytical solution plotted in figure [ fig : propa_ana_j2o2_0s_3 - 0 - 4_3x2 ] .this permits us also to reduce considerably the time computation without loose in stability and accuracy . in figure[ fig : erreurs_j2o2_s0_3 - 0 - 4_3x2 ] , we show that the analytical model fits the numerical solution quite well .the main source of errors is the computation of the mean elements from the initial osculating elements , which is truncated in our work at the order 1 in .if we apply the direct - inverse change of variables on the elements , which corresponds to steps ( 1 ) and ( 3 ) of the figure [ fig : diag ] , the resulting new initial elements noted differ by a quantity that is not null .this is why the errors on the metric elements are not centered on zero .this yields a phase error increasing the amplitude of the error during the propagation as we can see clearly on or .the problem is slighty different for the angular variables .the small remaining slopes result from the approximation of the secular effects due to : 1 .we have used the brouwer s expressions expanded up to , so we have not totally all the contribution of compared to the numerical solution ; 2 .the secular terms are evaluated from the mean elements at step ( 2 ) .[ [ perturbations - related - to - the - moon ] ] perturbations related to the moon + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + similar tests have been done with the moon in figure [ fig : j2o2_s0_3 - 0 - 4_3x2 ] . because , it is necessary here to develop the disturbing function up to at least to to improve significantly the solution , see figure [ fig : erreurs_j2o2_0m_2 - 0 - 4_ihg ] .we remark that the modeling errors are more important than for the sun , particularly on the long periodic part of , and .this is not surprising since the motion of the moon is both faster and more complicated than the motion of the sun .+ + we have evaluated the contribution of the explicit time dependence due to the third body motion , modeled by the generating function and the corrections . in figure[ fig : erreurs_j2o2_lunisol_3 - 0 - 4_hg_wt ] , we have performed similar tests than in the previously one , but with . 
by comparing the errors with the results in figures [ fig : erreurs_j2o2_s0_3 - 0 - 4_3x2 ] and [ fig :erreurs_j2o2_0m_4 - 0 - 4_3x2 ] , we can see that taking into account the time dependence permits to reduce the drift rate up to a factor of 3 .another way to evaluate the performance of our analytical propagator is to apply on a set of osculating elements an inverse transformation , then a direct transformation , and to verify that we find the identity .figure [ fig : err_pos_j2o2_sm_merge_304e726 ] is a sample plot of the behavior of the relative errors in position due to the successive transformations of the initial osculating elements illustrated in figure [ fig : diag ] , against inclination .other parameters remain the same . for more clarity ,results for the sun and moon have been computed separately and the relative error is defined by with and denoting respectively the rectangular coordinates before and after the transformation of the elements .as we can seen , the change of variables is very sensitive to the inclination .the peaks correspond to a resonant term , that the theory does not deal with . by collecting the resonant frequencies in apheo satisfying the conditions: we were able to identify the set of resonances given in up to for this test .the construction of an analytical theory of the third - body perturbations in case of highly elliptical orbits is facing several difficulties .in term of the mean anomaly , the fourier series converge slowly , whereas the disturbing function is time dependent .each of these difficulties can be solved separately with more - or - less classical methods .concerning the first issue , it is already known that the fourier series in multiple of eccentric anomaly are finite series .their use in an analytical theory is less simple than classical series in multiple of the mean anomaly , but remains tractable .the time dependence is not a great difficulty , only a complication : after having introduced the appropriate ( time linear ) angular variables in the disturbing function , these variables must be taken into account in the pde to solve during the construction of the theory .however , combining the two problems ( expansion in terms of the eccentric anomaly and time dependence ) in the same theory is a more serious issue . in particular, solving the pde in order to express the short periodic terms generating function is not trivial . in this workwe have proposed two ways : * using an appropriated development of the disturbing function involving the fourier series with respect to the eccentric anomaly ; * computing the solution of the pde by means of an iterative process , which is equivalent to a development of a generator in power series of a small ratio of angular frequencies . these allowed us to get a compact solution using special functions .the main advantage is that the degree of approximation of the solution ( e.g. the truncation of the development in spherical harmonics and the number of iterations in the resolution of can be chosen by the user as needed and not fixed once and for all when constructing the theory .v. & broucke r. ( 1980 ) : ., * 21*,500 357360 .d. ( 1959 ) : . , * 64*,500378 .d. & clemence g. m. ( 1961 ) : . academic press , new york and london .e. v. & fukushima t. ( 1994 ) : expansions of elliptic motion based on elliptic function theory ., * 60*,5006989 .v. a. ( 1995 ) : .springer .v. a. & brumberg e. v. ( 1999 ) : , advances in astronomy and astrophysics , vol .3 . gordon and breach .de saedeleer b. 
( 2006 ) : .phd thesis , fundp .a. ( 1969 ) : . , * 1*,5001230 .a. c. ( 1894 ) : .macmillan & company .w. m. ( 2009 ) : ., * 103*,500105118 .w. m. & bertschinger e. ( 2007 ) : ., * 663*,500 14201433 . e. m. ( 1973 ) : . ,* 353*. g. e. o. ( 1974 ) : . , * 9*,500 . g. e. o. & bura m. ( 1980 ) : . , * 24*,500111 .r. h. & wagner c. a. ( 2008 ) : ., * 101*,500247272 . r. h. & wagner c. a. ( 2010 ) : ., * 108*,50095106 . p. a. ( 1853 ) : .bei s. hirzel , leipzig .s. ( 1980 ) : ., * 372*,500243264 .i. g. ( 1964 ) : . , * 69*,500 26212630 .g. & bond v. r. ( 1980 ) : ., * 80*,50022386 .b. ( 1965 ) : . , * 10*,500 141145 .w. m. ( 1961 ) : ., * 5*,500 104133 . w. m. ( 1962 ) : development of the lunar and solar disturbing functions for a close satellite ., * 67*,500300303 . w. m. ( 1966 ) : .waltham , mass .: blaisdell .s. a. , vakhidov a. a. & vasiliev n. n. ( 1997 ) : ., * 68*,500257272 .y. ( 1959 ) : ., * 22*. y. ( 1962 ) : . , * 67*,500446 . y. ( 1966 ) : . , * 235*. m. t. ( 1989 ) : ., * 46*,500287305 .j. ( 2005 ) : ., * 91*,500351356 .g. ( 2013 ) : .phd thesis , observatoire de paris .lion g. & mtris g. ( 2013 ) : two algorithms to compute hansen - like coefficients with respect to the eccentric anomaly ., * 51*(1),500 19 .g. , mtris g. & deleflie f. ( 2012 ) : . towards an analytical theory of the third - body problem for highly elliptical orbits . in _ proceedings of the international symposium on orbit propagation and determination , arxiv 1605.07901_. marsden j. e. & west m. ( 2001 ) : discrete mechanics and variational integrators ., * 10*,500 357514. mtris g. ( 1991 ) : .thse , observatoire de paris .montenbruck o. & gill e. ( 2000 ) : , physics and astronomy online library .physics and astronomy online library .springer .murray c. & dermott s. ( 1999 ) : .cambridge university press .p. , bailie a. & upton e. ( 1961 ) : .development of the lunar and solar disturbing functions for a close satellite .technical note d-494 , nasa .p. ( 1977 ) : ., * 16*,500 309313 . h. c. ( 1960 ) : .new york : dover publication , 1960 .c. w. t. , vadali s. r. & alfriend k. t. ( 2013 ) : third - body perturbation effects on satellite formations . ,* 147*. j. l. , bretagnon p. , chapront j. , chapront - touze m. , francou g. & laskar j. ( 1994 ) : ., * 282*,500 663683 . n. ( 1992 ) : representation coefficients and their use in satellite geodesy ., * 17*(2),500 117123 . f. ( 1889 ) : .paris , gauthier - villars et fils .west m. ( 2004 ) : .phd thesis , california institute of technology .e. p. ( 1959 ) : .academic press , new york and london .begin to expand the disturbing function due to zonal harmonics in hill - whittaker variables , \ ] ] with the standard inclination functions related to the kaula s inclination functions } } { f}_{n,0,p } ( i) ] .indeed , they are null for , with and . 
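as an aside to this derivation , the hansen - like eccentricity functions that appear in these expansions can be evaluated numerically by a simple quadrature over the eccentric anomaly . the sketch below is only illustrative : the definition used here , ( r / a)^n exp ( i m nu ) = sum_q y_q^{n , m}(e ) exp ( i q e ) , is an assumption and may differ from the normalization adopted in the text .

import numpy as np

def hansen_like_coeff(n, m, q, e, grid=2048):
    # coefficient y_q^{n,m}(e) of the assumed expansion
    # (r/a)^n exp(i m nu) = sum_q y_q^{n,m}(e) exp(i q E),
    # evaluated by uniform quadrature over the eccentric anomaly E
    E = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    r_over_a = 1.0 - e * np.cos(E)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    integrand = r_over_a ** n * np.exp(1j * (m * nu - q * E))
    return integrand.mean()       # (1 / 2 pi) times the integral over one period

# sanity check: for n = m = q = 0 the coefficient is exactly 1
assert abs(hansen_like_coeff(0, 0, 0, 0.75) - 1.0) < 1e-12

a recursive evaluation is of course more efficient ; the quadrature is shown only because it makes the definition explicit .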
more over , we can deduce from the properties: hence , rewriting as \\ & { \mathcal{a}}_{n , p , q } = { } \frac{\mu}{a } \fracp*{r_{\varoplus}}{a}^{n } j_{n } \ , \frac{1}{\eta } y_{q}^{-n+1,0}(e ) \ , f_{n,0,p } ( i ) \end{aligned}\ ] ] the secular part is and the periodic part { \mathcal{a}}_{n , p , q } \\ & \hspace{1em } \times \cos \left [ ( q+n-2p ) \nu + ( n-2p ) g + n \frac{\pi}{2 } \right ] \end{split}\ ] ] from the last equation , it is easy to show that the generating function modeling the short periods term due to the zonal harmonic at the order one can be given by:+ \delta_{0}^{n-2p } \delta_{0}^{q } \phi \right\rbrace \end{split}\end{aligned}\ ] ] with the equation of the center .we can now proceed to the computation of the mean value of with respect to the mean anomaly needed in the coupling term . because contains purely periodic terms , so , the only contribution comes from the averaging over of the trigonometric term . by isolating and , we get as sine is an odd function and according to the definition , equation reduces to the simple value: hence , us prove that if the solution works for the order , then it works for the order . inserting into leads to \ ; , \end{aligned}\ ] ] + with [ eq:106 ] we ensure that contains no terms independent of : and so . finally , we derive from the integration of the correction at the order : this part , we give all the partial derivatives with respect to the keplerian elements of the generating functions , , and , required in the canonical transformations .derivatives of with respect to the kelperian elements are those given in : [ eq : ds_lgh ] + 3 s^2 \fracp*{a}{r}^{3 } \cos ( 2 g + 2\nu ) \right\rbrace \ ; , \\\label{eq : ds1sdg } { \dfrac{\partial^{\null}{{\mathcal{v}}_{1,{j_{2}\xspace}}}}{\partial{g}^{\null } } } & = 6 \gamma_{2 } s^2 g \left [ \cos ( 2 g + 2\nu ) + e \cos ( 2 g + \nu ) + \frac{e}{3 } \cos ( 2 g + 3\nu ) \right ] \ ; , \\\label{eq : ds1sdh } { \dfrac{\partial^{\null}{{\mathcal{v}}_{1,{j_{2}\xspace}}}}{\partial{h}^{\null } } } & = 0 \\\label{eq : ds1sda } { \dfrac{\partial^{\null}{{\mathcal{v}}_{1,{j_{2}\xspace}}}}{\partial{a}^{\null } } } & = { - } 2 \frac{\gamma_{2}}{a } { \mathcal{v}}_{1,{j_{2}\xspace } } \ ; , \\\label{eq : ds1sde } \begin{split } { \dfrac{\partial^{\null}{{\mathcal{v}}_{1,{j_{2}\xspace}}}}{\partial{e}^{\null } } } & = { } \gamma_{2 } g \bigg\ { 2 \left ( -1 + 3 c^{2 } \right ) \left ( \gamma + 1 \right ) \sin\nu \ ; , \\ & \hspace{1em } - 3 s^2 \left [ \left ( \gamma - 1 \right ) \sin ( 2 g + \nu ) - \left ( \gamma + \frac{1}{3 } \right ) \sin ( 2 g + 3\nu ) \right ] \bigg\ } \ ; , \end{split } \\\label{eq : ds1sdi } \begin{split } { \dfrac{\partial^{\null}{{\mathcal{v}}_{1,{j_{2}\xspace}}}}{\partial{i}^{\null } } } & = 6 \gamma_{2 } g c s \left [ -2 \left ( \phi + e\sin\nu \right ) \right . 
\\ & \left .\hspace{1em } + \sin ( 2 g + 2\nu ) + e \sin ( 2 g + \nu ) + \frac{e}{3 } \sin ( 2 g + 3\nu ) \right ] \;.\end{split } \end{aligned}\ ] ] + with .since is only independent of and , [ eq : dt_lgh ] note that these relations yield to those of for .the generating function is independent of and .we have : [ eq : dwcoup ] \gamma_{2}\,s\,g\,\gamma_{3 } \sin 2 g \ ; , \\ \label{eq : dwcsde } { \dfrac{\partial^{\null}{w_{coup}}}{\partial{e}^{\null } } } & = { } \frac{2}{\varpi_{g } } \left [ \left ( { \dfrac{\partial^{\null}{\omega_{g,3b}}}{\partial{e}^{\null } } } - \frac{\omega_{g,3b}}{\varpi_{g } } { \dfrac{\partial^{\null}{\varpi_{g}}}{\partial{e}^{\null } } } \right ) \gamma_{3 } + e \frac{\omega_{g,3b}}{\varpi_{g } } \left ( \frac{2 ( 2+\eta)}{\left(1+\eta\right)^2 } + \frac{3}{\eta^2 } \right ) \right ] \gamma_{2}\,g\,s \sin 2 g \ ; , \\\label{eq : dwcsdi } { \dfrac{\partial^{\null}{w_{coup}}}{\partial{i}^{\null } } } & = { } \frac{2}{\varpi_{g } } \left [ s { \dfrac{\partial^{\null}{\omega_{g,3b}}}{\partial{i}^{\null } } } - \frac{\omega_{g,3b}}{\varpi_{g } } \left ( s { \dfrac{\partial^{\null}{\varpi_{g}}}{\partial{i}^{\null } } } - c \right ) \right ] \gamma_{2 } g\,\gamma_{3 } \sin 2 g \ ; , \\\label{eq : dwcsdg } { \dfrac{\partial^{\null}{w_{coup}}}{\partial{g}^{\null } } } & = 4 \gamma_{2}\,s\,g\,\gamma_{3 } \frac{\omega_{g,3b}}{\varpi_{g } } \cos 2 g \ ; , \\\label{eq : dwcsdh } { \dfrac{\partial^{\null}{w_{coup}}}{\partial{h}^{\null } } } & = { \dfrac{\partial^{\null}{w_{coup}}}{\partial{l}^{\null } } } = { } 0 \ ; , \end{aligned}\ ] ] with since we have chosen to represent by a series ( see ): these derivatives are deduced from .[ [ derivatives - ofmathcalvsigma ] ] derivatives of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + from and , we get since our generating functions involves the satellite s eccentric anomaly , the -derivative is for the metric elements , given that depends on both explicitly and implicitly through , with use of \ ; , \end{aligned}\ ] ] we obtain \exp{\mathop{\mathrm{\imath}}\nolimits}\theta_{n,\ldots , q+s , q ' } \;. \end{split } \end{aligned}\ ] ] [ [ derivatives - ofmathcalwidetildea_nldotsqqsigma ] ] derivatives of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + from , we can compute derivatives of by recurrence : with \ ] ] the differential of and are given by : with the partial derivatives of , and defined in the appendice [ anx : derivative_pulsations ] . [ [ derivatives - ofzeta_qssigma ] ] derivatives of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + derivatives of can be computed by means of recurrence relation . 
using for , we get and for concerning the initialization , according to , we have [ eq:130 ] alignat=6 & = 0 & & , = ^q_1 & & , = ^q_-1 & & , q 0 + & = 0 & & , = - & & , = & & , q=0 let us pose such that the generating function eliminating the long - periodic terms writes : the partial derivatives with respect to are simple to obtain : [ eq : dwlpsdx ] while those with respect to the metric elements require more attention : with derivatives of are [ eq : dadx ] and for : where and defined in .partial derivatives of the pulsations are established in appendice [ anx : derivative_pulsations ] .derivatives of with respect to are denoting , we have [ eq : djsdx ] given that , derivatives of the pulsation can be written we give in table [ tab : derive_eff_sec_eo ] the derivatives of mean motion and secular variations due to .those associated to the secular part of the third body , , can be determined by using the expression and the partial derivatives * 1k5em * 1k12em * 1k8em * 1k6em & & & + ( r)1 - 1 ( l ) 2 - 4 ^_2/^ & -2 & 4 _ 2 & 0 + ( r)1 - 1 ( l ) 2 - 4 ^_0/^ & - & 0 & 0 + ( r)1 - 1 ( l ) 2 - 4 ^_l , j_2/^ & - 1 + 14_2 ( 1 - 3c^2 ) & 18_0 _ 2 ( 1 - 3c^2 ) & 36_0 _ 2 c s + ^_g , j_2/^ & - 21 _ 2 ( 1 - 5c^2 ) & 24_0 _ 2 ( 1 - 5c^2 ) & 60_0 _ 2 c s + ^_h , j_2/^ & - 42 _ 2 c & 48_0 _ 2 c & - 12_0 _ 2 s +in this appendix , we present a method to convert the determining functions related to the disturbing body from exponential to trigonometric form .the method is similar to that we have used in ( * ? ? ?* see section 3 ) .since this kind of transformation is tedious but can easily lead to algebraic errors , we give the main results to establish the trigonometric expression of the moon s long - periodic and the short - periodic generating function ( much harder than for the sun ) . to begin , the eccentricity functions : , , , and the inclination functions : , , admit several symmetries . particularly , we have for the following properties : [ eq : fonc_spe_ex_sym ] \\\label{eq : xnms_sym_a } x_{-s}^{n ,- m } ( e ) & = { } x_{s}^{n , m } ( e ) & & \ ; , \quad \left [ n , m , s \in { { \mathbb{z}}}\right ] \\ \label{eq : zeta_sym_a } \zeta_{-q ,-s}^{(\sigma ) } ( e ) & = { } ( -1)^{\sigma } ( 1 - 2\delta_{0}^{q } ) \zeta_{q , s}^{(\sigma ) } ( e ) & & \ ; , \quad \left [ q , s \in { { \mathbb{z } } } ; \ , \sigma \in { { \mathbb{n}}}\right ] \end{aligned}\ ] ] and [ eq : fonc_spe_inc_sym ] \\ \label{eq : unmk_sym_a } u_{n ,- m ,- s } ( { \text{\usefont{oml}{cmr}{m}{n}\symbol{15 } } } ) & = { } ( -1)^{s - m } u_{n , m , s } ( { \text{\usefont{oml}{cmr}{m}{n}\symbol{15 } } } ) & & \ ; , \quad \left [ n \in { { \mathbb{n } } } ; \ , m , s \in { { \mathbb{z}}}\right ] \end{aligned}\ ] ] note that the last symmetry can be obtained from and the relation ( e.g. * ? ? ? * ; * ? ? ? 
* ) consider now three polynomial functions , and defined by [ eq : def_fandg ] with the and some arbitrary real constants .+ there results that we have the symmetries [ eq : sym_fandg ] in this way , we can deduce easily from the table of correspondence [ tab : def_behaviour_func ] the symmetries with respect to the indices of the functions involved in the development of our determining functions .* 1r1em * 1l6em@*1c@*1r6em * 1l5em@*1c@*1r5em & & + ( r)1 - 1 ( lr)2 - 4 ( l ) 5 - 7 ^ & _ n , m , p , q & & f_n , m , p , q & _ n , m , p , q & & f_n , m , p , q + ^ & _ n , m,p,q^ & & f_n , m,p,q^ & _ n , m , p,q^ & & f_n , m , p,q^ + ( r)1 - 1 ( lr)2 - 4 ( l ) 5 - 7 & _ n , m , m,p , p,q , q^ & & g_n , m , m,p , p,q , q^ & _ n , m , p , p,q , q^ & & g_n , m , m , p , p,q , q^ + & _ n , m , m,p , p,q^ & & g_n , m , m,p , p,0,q^ & _ n , m , p , p,q^ & & g_n , m , m , p , p,0,q^ + the main steps to convert an exponential expression to trigonometric form are outlined below : 1 .split the sum over into two parts such that runs from to . to avoid double counting of , we must introduce the factor .proceed the same if there is a summation over ; 2 . for each terms , if the second index of is negative , change the indices by , by and by if this is involved .same for , replace by and by .3 . substitute each inclination functions having a negative value as a second index by their symmetry relations given in ; 4 .substitute each eccentricity function having a negative value as a third index by their symmetry relations given in ; 5 . with the help of table [ tab : def_behaviour_func ] , subsitute each function and by their associated symmetry if the second index is negative ; 6 . isolate the terms with the same phase , then factorize and convert the exponentials to trigonometric form . starting from the generating function defined in and applying the step , we have \end{split } \end{aligned}\ ] ] + with .note that the symbols and are not affected by the changes of sign .+ step gives \end{split}\ ] ] and it results from the steps to : \end{split } \end{footnotesize}\ ] ] making appear the sine , we get the trigonometric development .starting from the generating function defined in , step gives \end{split } \end{scriptsize}\ ] ] focus now our attention on step , and particularly on the coefficient .as is formulated so that it can automatically handle cases for which and , we must to slightly modify this element if we want to effectively use the symmetry relations after changing by and to keep a compact form .make this change for is not a problem and the coefficient can be rewritten in the form .however , this trick for can not work because we would get the value , while the expected value is . to restore the correct sign, we make appear the factor , without consequence on the final result . in fact , this factor was not choose by chance .this will be offset with the factor related to the -functions . to sum up ,when we apply the change of indice by on the relevant members of , we also need to make the following substitution : alignat=1 [ eq:148 ] 1 & , q=0 + - & , q 0 and we find at step : \;.\end{split } \end{fssizeadapt}\ ] ] then , performing step to we get \right . 
\\ { } & { } \left .\hspace{2em } { } + ( -1)^{n - m ' } \overline\varepsilon_{}^{\,+,\sigma } u_{n , m ,- m'}(\varepsilon ) \left [ \exp { \mathop{\mathrm{\imath}}\nolimits}\theta_{}^{+ } -\exp \left ( -{\mathop{\mathrm{\imath}}\nolimits}\theta_{}^{+}\right ) \right ] \right\rbrace \end{split}\ ] ] which is equivalent to .* 1k5em * 1s[table - format = -11.15e2 ]
|
traditional analytical theories of celestial mechanics are not well adapted to highly elliptical orbits . on the one hand , analytical solutions are quite generally expanded into power series of the eccentricity and are therefore limited to quasi - circular orbits . on the other hand , the time dependency due to the motion of the third body ( e.g. the moon and the sun ) is almost always neglected . we propose several tools to overcome these limitations . firstly , we have expanded the third - body disturbing function into a finite polynomial using fourier series in multiples of the satellite s eccentric anomaly ( instead of the mean anomaly ) and involving hansen - like coefficients . next , by combining the classical brouwer - von zeipel procedure and the time - dependent lie - deprit transforms , we have performed a normalization of the expanded hamiltonian in order to eliminate all the periodic terms . one of the benefits is that the original brouwer solution is not modified . the main difficulty lies in the fact that the generating functions of the transformation must be computed by solving a partial differential equation involving derivatives with respect to the mean anomaly , which appears only implicitly in the perturbation . we present a method to solve this equation by means of an iterative process . finally , we have obtained an analytical tool useful for mission analysis , allowing one to propagate the osculating motion of objects on highly elliptical orbits ( e > 0.6 ) over long periods efficiently and with very high accuracy , or to determine initial or mean elements . comparisons between the complete solution and numerical simulations are presented . * keywords . * highly elliptical orbits ; satellite ; analytical theory ; third - body ; time - dependence ; closed - form ; lie transforms .
|
the radio access networks ( rans ) are facing great challenges in the mobile internet era . on one hand ,next - generation rans are expected to support 1000 times increased data traffic with limited spectrum and energy .therefore , both spectrum efficiency and energy efficiency should be vastly improved . on the other hand , emerging applications and services put forward increasingly diverse requirements for connections in terms of capacity , latency , and reliability .hence , next - generation rans need to be sufficiently flexible to accommodate evolving applications . to tackle these challenges , software - defined networking ( sdn )have been proposed to renovate rans .softran introduced a logically centralized big base station ( bs ) to globally optimize the ran resources so that the network efficiency can be improved .the control - data separation concept of sdn was extended to decouple the control and data coverages at the air interface in rans . with the decoupled air interface ,bs sleeping control can be effectively employed to reduce energy waste by adapting to real traffic dynamics and improve the network energy efficiency without generating coverage holes .moreover , concert proposed to deploy rans as software - defined services , which can dramatically improve the flexibility of bs operations .however , these studies focus on the architecture design only , and leave the protocol design as an open research issue . besides design, it is of much research interest to evaluate the sdn concepts in practical ran implementations .our previous work presented a prototype system which demonstrated the feasibility of decoupled air interface on top of the gsm standard .but only a single type of service ( namely voice calls ) was investigated , and no attempt to realize dynamic bs operations was made .pran showed a preliminary implementation of dynamic resource allocation of bss with centralized ran schedulers .however , other bs operations such as bs sleeping were not studied .to our knowledge , there are few efforts to evaluate the protocols of bs operations in software - defined rans ( sdrans ) with practical implementations . in this paper , we present the protocol design and evaluation of our sdran framework named hyper - cellular networks ( hycell ) .hycell enables globally resource - optimized and energy - efficient ( green ) bs operations by exploiting the decoupled air interface , centralized bs control , and software - defined bs functions . to design the decoupled air interface ,we take an evolutionary approach and propose our separation scheme for current 3gpp standards , which is beneficial for network migration .based on that , we design our bs dispatching protocol , which determines and assigns the globally optimal bs to serve the user requests , as well as our bs sleeping protocol. 
moreover , we prototype a hycell testbed on a software - defined radio ( sdr ) platform , and use it to evaluate our design .the main contributions of our work are summarized as follows : 1 .we propose a separation scheme to realize the decoupled air interface for existing 3gpp standards from the aspects of both network functionalities and logical channels , and demonstrate its feasibility in the testbed .we design a bs dispatching protocol for global optimization of network resources , and implement a bs dispatching scheme to effectively achieve load balancing among multiple bss .we design a bs sleeping protocol to improve the network energy efficiency , and present an implementation with a threshold - based algorithm , which shows about 60% energy saving gain in our testbed .the rest of this paper is organized as follows .we first give an overview of the hycell architecture as well as the challenges and solutions to realize it in section [ sec : overview ] .then we present the design of the separation scheme , bs dispatching , and bs sleeping in section [ sec : design ] .evaluation of the testbed implementation is given in section [ sec : eval ] .section [ sec : con ] concludes the paper .as illustrated in fig .[ fig : arch ] , hycell decouples the air interface of the ran by separating the control coverage from the traffic coverage with two types of bss .control bss ( cbss ) provide the control coverage , while traffic bss ( tbss ) provide the traffic coverage .typically cbss have large coverage areas , within which multiple traffic bss ( tbss ) are deployed .cbss provide the network control to underlying tbss and mobile users . in particular , they grant the network access to mobile users . to guarantee a basic level of data service and to cope with high mobility users , cbss can also be used to provide low - rate data services such as voice call service to the ue . unlike cbss , tbss are only responsible for high - rate data services . specifically , there can be different subtypes of tbss to support different classes of high - rate data services . through the separation , cbss and tbss can be more simplified than conventional bss . under the separation architecture , cbss provide centralized control , and enable dynamic operations of tbss .cbss and tbss serve mobile users collaboratively .when the user equipment ( ue ) is powered on , it searches for nearby cbss and registers to the network through a cbs .cbss gather network state information such as ue locations and traffic load from the mobile users and tbss underlying its coverage , and thus hold a global view of the network .when high - rate data services are required , the ue sends requests to the cbs and the cbs dispatches one or more tbss for the high - rate data transmission afterwards . when the network traffic load is light , tbss can be turned off under the command of the cbs to reduce the energy consumption of the network while the cbs preserves the network coverage .moreover , hycell decouples the software which realizes the bs functions from the hardware which converts signals between baseband and radio frequency . 
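to make the cbs - side logic just outlined more concrete , the sketch below shows , in python , one possible way a cbs could dispatch a tbs for a high - rate request and run a threshold - based sleeping round . the data layout , the least - loaded dispatching metric and the threshold values are illustrative assumptions ; they are not the actual hycell protocol messages or parameters .

def dispatch_tbs(ue, tbs_list):
    # the cbs assigns a tbs for a new high-rate request; the
    # least-loaded-first metric is an illustrative assumption
    candidates = [t for t in tbs_list
                  if t['awake'] and t['covers'](ue) and t['load'] < t['capacity']]
    if not candidates:
        return None                         # request served by the cbs or blocked
    best = min(candidates, key=lambda t: t['load'] / t['capacity'])
    best['load'] += 1                       # the cbs records the assignment
    return best['id']

def sleep_control(tbs_list, wake_threshold=0.8):
    # one periodic control round of a threshold-based sleeping policy;
    # the threshold value and the wake-up rule are assumptions
    for t in tbs_list:
        if t['awake'] and t['load'] == 0:
            t['awake'] = False              # idle tbs switched off, cbs keeps coverage
    awake = [t for t in tbs_list if t['awake']]
    asleep = [t for t in tbs_list if not t['awake']]
    if asleep:
        mean_load = (sum(t['load'] / t['capacity'] for t in awake) / len(awake)
                     if awake else 1.0)
        if mean_load > wake_threshold:
            asleep[0]['awake'] = True       # wake one tbs to absorb new traffic

the essential point is only the division of roles : all decisions are taken at the cbs , which holds the global view of tbs load , while the control coverage is never switched off .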
in this way ,bs functions are software defined by high - level programming languages , and it is much easier to update the bss to handle new mobile internet applications and services .further , with the help of virtualization , bss can become virtual instances in cloud data centers .network efficiency can be improved with resource pooling , and bs operations can be more flexible with the help of virtual machine migration . with the decoupled air interface , centralized bs control , and software - defined bs functions , hycell enables green bs operations in sdrans .it leads to a promising path towards next - generation rans .we summarize the challenges of protocol design in hycell and our proposed solutions as follows .[ [ separation - of - air - interface ] ] separation of air interface + + + + + + + + + + + + + + + + + + + + + + + + + + + in this paper , we target at current air interface protocols and propose our separation scheme to design a decoupled air interface . however , it is a daunting task to analyze the air interface in existing 3gpp standards .the physical layer signals are difficult to categorize and the interactions among them are complicated , making it difficult to separate the control coverage from the traffic coverage . to tackle this challenge , we propose our separate scheme from the aspects of network functionalities and logical channels , rather than physical layer signals . in our design ,we jointly consider the two levels and ensure that the expected network functionalities of cbss and tbss can be mapped to their corresponding logical channels .this guides us to an effective separation scheme of the air interface .[ [ cbs - tbs - protocol - design ] ] cbs - tbs protocol design + + + + + + + + + + + + + + + + + + + + + + + the cbs - tbs protocols do not exist in current 3gpp standards .however , the protocols are crucial for the sdrans to optimize the network performance . specifically , we need a protocol to find the best tbss and assign them to serve the ue .we also need a protocol to dynamically switch off tbss to reduce energy waste and improve network energy efficiency .the signaling interactions in the protocols must carry sufficient network state information for global optimization . at the same time, it should ensure realtime decision and accommodate potential updates . to meet the requirements ,we choose simple but effective metrics of the network state information in message exchange .we also make our protocols extensible by adopting modular design .[ [ ue - transparency ] ] ue transparency + + + + + + + + + + + + + + + when designing hycell , we would like to guarantee that the updates at the bs side are transparent to the ue side . with ue transparency, the existing mobile terminals can work with hycell automatically .it brings the benefit of compatibility , which is appealing during the protocol evolution and network upgrade .however , it also limits the degree of freedom we have to realize the sdran . to achieve ue transparency , we carefully design the separation scheme of the air interface and make sure that the ue - facing interfaces are preserved .in addition , we exploit the backhaul connection for cbs - tbs communication and cooperation , which optimizes the network without the need of adding functions to mobile terminals .in this section we describe our key design aspects .first we analyze the air interface of 3gpp standards and propose a separation scheme for hycell. 
then a bs dispatching protocol design is proposed , and we also present a bs sleeping protocol which achieves great energy saving gain without sacrificing mobile users quality of service ..logical channel separation of 3gpp standards . [ cols="^,^,^,^,^ " , ] to show the energy saving performance of bs sleeping , we currently take a component analysis approach due to the lack of power measurement equipment .based on the specifications , we calculate the power consumption of pcs , usrps and the switch respectively and take the sum as the total power consumption of the testbed .table [ tab : sav ] compares the power consumption of different sleep modes in the scenario when there are no active user requests . without bs sleeping ( no sleep in the table ) , all bss are idle , and the total power consumption is .when we only turn off the usrps and software bss on pcs ( half sleep ) , the total power consumption is , that is , 16% power consumption can be saved .if we further power off the pcs ( full sleep ) , the total power can be reduced to , which translates to 60% energy savings .it is thus observed that bs sleeping can bring significant energy saving gain , of which the major part comes from the baseband power consumption .in this paper we present the design and evaluation of hycell .the proposed framework hycell exploits the decoupled air interface , centralized bs control , and software - defined bs functions to enable green bs operations in sdrans .we propose a separation scheme to realize the decoupled air interface , and further design the protocols of bs dispatching and bs sleeping .our testbed implementation validates the feasibility of the separation scheme for the gsm / gprs standard .further the test results show that bs dispatching can effectively achieve load - balancing , and bs sleeping can provide about 60% energy saving gain .we would like to thank yuyang wang and guangchao wang for their help in the evaluation .we would also like to thank dr .xu zhang , xueying guo , jingchu liu , and anonymous reviewers for their insightful comments .this work is sponsored in part by the national basic research program of china ( no .2012cb316001 ) , and the national natural science foundation of china ( nsfc ) under grant no .61201191 and 61401250 , the creative research groups of nsfc ( no . 61321061 ) , the sino - finnish joint research program of nsfc ( no . 61461136004 ) , and intel corporation . ,`` meeting the 1000x challenge : the need for spectrum , technology and policy innovations , '' white paper , may 2014 , ( condensed version ) .[ online ] .available : http://www.4gamericas.org/index.php/download_file/view/208/159/ a. gudipati , d. perry , l. e. li , and s. katti , `` softran : software defined radio access network , '' in _ proc .2nd acm sigcomm workshop hot topics in software defined networking ( hotsdn 13 ) _ , hong kong , china , aug .2013 , pp . 2530 .z. niu , s. zhou , s. zhou , x. zhong , and j. wang , `` energy efficiency and resource optimized hyper - cellular mobile communication system architecture and its technical challenges , '' _ scientia sinica informationis _ , vol .42 , no .10 , pp . 11911203 , 2012 , ( in chinese ) .a. capone , a. fonseca dos santos , i. filippini , and b. gloss , `` looking beyond green cellular networks , '' in _wireless on - demand network systems and services ( wons ) _ , jan .2012 , pp . 127130 .h. ishii , y. kishiyama , and h. 
takahashi , `` a novel architecture for lte - b : c - plane / u - plane split and phantom cell concept , '' in _ ieee globecom int . workshop emerging technologies for lte - advanced and beyond-4g _ , dec . 2012 , pp . 624 - 630 . t. zhao , p. yang , h. pan , r. deng , s. zhou , and z. niu , `` software defined radio implementation of signaling splitting in hyper - cellular network , '' in _ proc . 2nd workshop software radio implementation forum , in conjunction with acm sigcomm 2013 _ , hong kong , china , aug . 2013 , pp . 81 - 84 . s. zhang , j. wu , j. gong , s. zhou , and z. niu , `` energy - optimal probabilistic base station sleeping under a separation network architecture , '' in _ 2014 ieee global communications conf . ( globecom ) _ , dec . 2014 .
|
the radio access networks ( rans ) need to support massive and diverse data traffic with limited spectrum and energy . to cope with this challenge , software - defined radio access network ( sdran ) architectures have been proposed to renovate the rans . however , existing studies lack the design and evaluation of network protocols . in this paper , we address this problem by presenting the protocol design and evaluation of hyper - cellular networks ( hycell ) , an sdran framework making base station ( bs ) operations globally resource - optimized and energy - efficient ( green ) . specifically , we first propose a separation scheme to realize the decoupled air interface in hycell . then we design a bs dispatching protocol which determines and assigns the optimal bs for serving mobile users , and a bs sleeping protocol to improve the network energy efficiency . finally , we evaluate the proposed design in our hycell testbed . our evaluation validates the feasibility of the proposed separation scheme , demonstrates the effectiveness of bs dispatching , and shows great potential in energy saving through bs sleeping control . keywords : software - defined networking , radio access network , base station sleeping .
|
multipartite entangled states are fundamental resources for quantum computation , with many mysteries yet to be understood . a particularly useful and interesting set of multipartite entangled states are the so - called graph states .these are quantum states associated with mathematical graphs , where vertices represent qubits in superposition states and edges represent the maximally entangling controlled - phase ( ) gates between them .building complex graph states is a difficult task in practice ( i.e. in experiments ) , because it requires the application of gates between arbitrary qubits ; that said , considerable strides have been made in recent years .it is nevertheless useful to consider the circumstances under which specific multipartite graph states can be constructed efficiently .a class of particularly useful graph states are quantum error - correcting codes ( qeccs ) .these are used to prevent quantum information leakage , since quantum information is generically fragile against interactions with the environment .standard qeccs can protect quantum information against an arbitrary error on a single qubit .several schemes of measurement - based quantum computation with embedded quantum error correction have been recently proposed but the structure of logical cluster states is very complex .very recently , a concatenation scheme for a single logical qubit encoded in the five - qubit qecc ( 5qecc ) has been studied in the graph - state context . while topological approaches to fault - tolerance in graph - state quantum computation yield higher error thresholds , directly encoding the quantum information in qecc graphs might turn out to be more practical experimentally if efficient methods for constructing these states can be found .we propose that multipartite graph states , which are useful for constructing logical cluster states with 5qecc , can be efficiently built by local hadamard operations from simpler graph states . in this paper, we prove that the mathematical operation called _ edge local complementation _( elc ) , which is defined by a series of _ local complementation _ ( lc ) operations on a graph , is efficiently realizable in specific graph states because it is equivalent to the action of local hadamard operations . from the mathematical point of view , lc transforms a given graph into another , with a different adjacency matrix ; in practice , local complementation of a given vertex complements the subgraph corresponding to its neighborhood . from the quantum information point of view, lc corresponds to a set of local operations on a given graph state that therefore preserves any entanglement measure , yet describes a different graph state .yet the cost of generating the new graph from a completely unentangled state would be significantly higher if the total number of edges is larger than in the original graph state .our results indicate that the apparently complex nature of multipartite 5qecc states should not in itself be an impediment to their experimental generation , because they are in fact generically simple graphs under elc .this paper is organized as follows .we introduce the graph state notation in section [ sec2 ] .the definition of edge local complementation and its equivalence to hadamard operations in graph states are discussed in section [ sec3 ] . 
in section [ sec4 ], we present the step - wise method of building one - dimensional ( 1d ) logical cluster states .finally , we summarize our results with future research interests .let us begin with the definition of graphs and graph operations . in graph theory ,a graph is given by vertices and edges corresponding to a linked line between two adjacent ( neighboring ) vertices .we only consider simple graphs with no self - loops and no multiple edges .if a vertex is chosen in a graph , the other vertices are represented by its neighboring vertices and outer vertices .the neighborhood of all of the vertices is defined by the adjacency matrix , an symmetric matrix with elements iff .all simple graphs correspond to a class of quantum states called _ graph states _ , in which each vertex is represented by a qubit in a superposition state and an edge corresponds to the application of a maximally entangling gate .specifically , an -qubit graph state is defined as = are the -eigenstates of ( here are the pauli operators ) , and is the controlled - phase ( controlled- ) gate acting between two qubits .graph states can be defined in at least two equivalent ways , both of which will prove useful for our purposes . because the operations can be written as where is the identity matrix applied at site , the graph state is given by a quadratic form of a boolean function where and . obviously , the value iff ( otherwise the value is 0 ) , so is a quadratic polynomial representing the graph adjacency matrix . alternatively , the state is the fixed eigenvector , with unit eigenvalue , of the independent commuting operators i.e. for all . because the generate a set of stabilizer operators , these generators uniquely define .[ fig : edgecomplement01](b ) and ( c ) shows two simple and important examples of graph states that are not equivalent under local - unitary transformations , the star and cycle graphs .the star graphs correspond to ghz states : with ; these are lc - equivalent to the complete graphs .[ fig : edgecomplement01](b ) depicts .the cycle graph state is equal to with .[ fig : edgecomplement01](c ) shows .the former is related to classically encoded graph states and the latter to 5qecc .local complementation and edge local complementation are two operations used to classify locally - equivalent graphs that are generally inequivalent under isomorphism ( vertex permutation ) . the action of local complementation at the vertex transforms the graph by replacing the subgraph associated with the neighboring vertices by its complement .the new graph generated by on is locally equivalent to the original graph .it is important to note that the operation does not affect the edges of outer vertices in the graph ; only the neighborhood of vertex is affected .the action of edge local complementation on the edge is defined by three local complementations : .the action of elc on the edge can be understood as follows .consider any pair of vertices , where is a neighbor of but not and is a neighbor of but not ( or vice versa ) ; alternatively , and can both be neighbors of and .elc then corresponds to complementing the edge between and , i.e. if then delete the edge , and add it if .in addition , the neighborhoods of and are replaced with one another .edge local complementation has been investigated for recognizing the edge locally equivalence of two graphs and for understanding the relationship between classical codes and graphs . 
in the context of graph states, local equivalence implies that one graph state can be transformed into another by the action of single - qubit ( i.e. local ) operations .it is well - known that two graph states that are equivalent under stochastic local operations and classical communication ( slocc ) must also be equivalent under the local unitary ( lu ) operations .a long - standing conjecture held that lu equivalence also implied equivalence under the action of clifford - group elements ( operations that map the pauli group to itself ) , though this was recently proved to be false in general .nevertheless , the transformations ( and therefore ) on graph states can be expressed solely in terms of local clifford operations : where .suppose that possesses qubit ( called a _ core _qubit ) connected to neighboring qubits .the action of corresponds to the application of operations on the graph state , creating edges between if there were none and removing them otherwise ( ) .although entanglement between the two graph states is the same due to the invariance of entanglement under local unitary operations , the number of _ effective _ operations ( i.e. the number of edges ) differs .edge local complementation on the edge would then correspond to the operation where the and are reminders that the neighborhoods themselves change under the operations . recognizing that and remain neighbors , thiscan be rewritten where is the hadamard operator on qubit .one of the goals of this manuscript is to show that the result of this operation on graph states can be expressed in the simpler form , requiring the application of far fewer local operations .simple examples of lc and elc are shown in fig .[ fig : edgecomplement01](a ) .the initial graph state consists of four qubits and three edges .after the first , because no edge exists between two neighboring qubits of in state , an edge is drawn between them .after , the edge on qubits and is deleted by a rule of the local complementation because two sequential operations become the identity between and . finally , after the last , the number of edges are four on the final graph state , which is represented by , although all four graph states are locally equivalent .consider two disconnected graphs and and their respective graph states states and ; each possesses a core vertex ( qubit ) and , respectively .a operation is then applied to the two core qubits , linking the two graph states into a single connected graph .if a hadamard operation is then applied to each core qubit , the graph is transformed into another locally equivalent graph state . below we show that the state is the edge local complement of , i.e. that .it is important to note that the equivalence of edge complementation on with the application of hadamard operations on and is only valid if , i.e. 
that prior to the application of , the neighborhoods of and were completely disjoint .our results do not apply to graphs where and share a neighborhood ( other than themselves ) .the main theorem of the paper is the following : consider two graph states , defined by adjacency matrices and on independent vertex sets and , respectively .if core qubits , and , are chosen at random from each of these vertex sets , and are entangled with one another by means of a gate , then is , and the ( vertex ) local complementation operator at qubit complements the edge set of its neighborhood .the core qubits and have neighborhood and ) , respectively .the remaining vertices of the graphs and are and , respectively .performing a operation between these core qubits , the graph state is consider this can be simplified by noting that for one then obtains .\label{eq : hadamardstep01 } \end{aligned}\ ] ] applying this to the remaining operators in eq .( [ eq : ghstep1 ] ) gives |x_{a_1^{(1 ) } } \cdots x_{a_{n_1}^{(1)}}\rangle & = & \frac{(-1)^{x_{c_1}x_{c_2}}}{\sqrt{2^{n_1+n_2 } } } ( -1)^{\left(q_1(x)+q_2(x)+q_3(x)\right ) } finally , one can combine all the terms to obtain where recall that edge local complementation on the edge is described by the three local complementations .suppose that the first local complementation is performed on at qubit .the result is that all neighboring qubits of are explicitly connected to each other ( adding an edge to an existing edge annihilates it ) .the additional edges are given by the quadratic form next one complements the neighborhood of qubit , which is given by the quadratic form ; the result is .the total additional edges are then given by the quadratic form last , one complements the neighborhood of qubit , which is given by the quadratic form ; the result is simply . the quadratic form for the additional edges after this final operation is combining this result with the remaining terms in the quadratic form ( [ eq : ghstep1 ] ) , the graph resulting from the edge local complementation becomes where which is identical to the quadratic form ( [ eq : quadraticfinal ] ) .( [ eq : quadraticfinal ] ) shows that when hadamard gates are applied to both ( core ) qubits of single edge between two graphs , the result is a new graph state corresponding to the effective application of controlled - phase operations .these operations have the effect of replacing the original neighborhood of each core qubit with the neighborhood of the other core qubit ( and vice versa ) , while simultaneously adding the neighborhood of a given core qubit to the neighborhood of the other .that is , from the edge set one deletes the combinations and , and adds the combinations , , and . in other words , the hadamard operations have complemented the neighborhood of the edge , or performed edge local complementation. of particular interest is the special case where both of the original graphs and were star graphs with the core qubit corresponding to the maximum - degree vertex , i.e. where and . 
then the resulting graph would be completely bipartite , with every vertex of the first group connected to every vertex of the second group .the above analysis proves that the application of hadamard operations to the core qubits and is equivalent to edge local complementation on the edge .it is not obvious that edge local complementation based on the formal definition of local complementation given in eq .( [ eq : lcdef ] ) , , reproduces the same result .though graph transformations effected by this expression have already been discussed in ref . in the context of vertex local complementation , edge local complementation using this operator was not explicitly explored in that work .in fact , as shown below , the application of these unitary gates in order to effect edge local complementation requires local operations in addition to the two hadamard gates .it is convenient to write /\sqrt{2};\quad \sqrt{\imath z}= [ \imath i+ z]/\sqrt{2}.\ ] ] the action of these on quadratic forms is /\sqrt{2 } ; \nonumber \\\sqrt{\imath z_{b_j}}(-1)^{x_{c_1}x_{b_j}}&= & ( -1)^{x_{c_1}x_{b_j } } \left[\imath + ( -1)^{x_{b_j } } \right]/\sqrt{2}.\end{aligned}\ ] ] suppose one has an arbitrary graph state defined by quadratic form whose neighborhood of the qubit is , i.e. where includes the term .local complementation on the vertex then yields \nonumber \\ &= & { 1 \over 2^n } \prod_{j=1}^n ( -1)^{x_{c_1}x_{b_j } } \left[\imath + ( -1)^{x_{b_j } } \right ] \left[-1+i ( -1)^{x_{b_j}}x_{c_1 } \right ] .\nonumber\end{aligned}\ ] ] when this local complementation operator is applied to the graph state , the operator will act only on its eigenstates and will effectively disappear .the effect of the various terms above is then equivalent to the new quadratic form in other words , has complemented the neighborhood of qubit , by effectively applying entangling operations to all of its neighbors .in addition , it has applied gates to all the neighbors .these are local operations that commute with the and are therefore unimportant .that said , complete equivalence ( rather than simply unitary equivalence ) under edge local complementation would then require the application of additional unitary gates beyond the two hadamard gates .we now discuss a novel and useful application of the theory of edge local complementation for quantum information processing . in previous work , we showed that logical cluster states corresponding to 5qecc can be made with logical operations consisting of many operations among the physical qubits .a linear -qubit logical cluster state is given by where is a logical operation between two logical qubits and .for with 5qecc , 25 physical operations are required to construct a logical operation from ( see fig . 3 in ref .the construction of many - qubit logical cluster states requires so many entangling operations to build logical gates as to be impractical for realistic quantum information processing . in this context, the edge local complementation provides an efficient solution to this conundrum : a single physical operation and two hadamard operations are sufficient to build a logical operation between two logical qubits .first we will review how to encode a physical qubit into a logical qubit with 5qecc .one begins begin with a qubit in state and four auxiliary qubits in .after a hadamard operation on qubit and four operations between and the others , one obtains the five - qubit ghz - type graph state ( see fig . 
[fig : edgecomplement01](b ) but with replaced by and replaced by ) .after an additional hadamard operation on qubit in , the state is equal to a five - qubit ghz state , the state can be understood as a classically encoded state of in five qubits ( here a classical encoding is meant to signify the implementation of a repetition code ) .similarly , if the physical qubit is initialized in , the outcome state is .the quantum encoding scheme transforms into and into . as shown in fig .[ fig : edgecomplement01](c ) , a pentagon graph operation is used for encoding logical qubits where .therefore , the total encoding operation for a logical qubit is represented by \ , h_{a_1},\end{aligned}\ ] ] and . with this toolkitone can show how to build logical cluster states .there are two different ways of building a two - qubit logical cluster state from ten physical qubits .the first method is to first prepare two logical states in , and then to directly perform a logical operation between them : because for this case , 35 physical operations in total are required to build from ( for details refer to ref . ) . in the second method ,one creates classically encoded graph states by means of edge local complementations ; the quantum encoding is then applied to the classically encoded states to obtain logical cluster states .initially , the core qubit of one classical state is entangled with its counterpart in the other state , yielding a two - qubit cluster state .note that the first hadamard operations in leave the state invariant .after the ghz - type operations are performed between ( ) and ( ) ( ) , through the operation , a connected graph state is obtained ( see fig . [fig : equ - example ] ) .when two hadamard operations are subsequently applied to and in , the resulting state is transformed to another graph state , given by .\end{aligned}\ ] ] this state is a classically encoded two - qubit cluster state . in fig .[ fig : equ - example ] , it is shown that the action of three local complementations on the core vertices and provides the desired operations among the physical qubits , reproducing the state ( [ eq : stategh ] ) .the resulting graph is known as a complete bipartite graph state : each of the vertices in one neighborhood ( corresponding to logical register a or b ) is connected with all the vertices of the other neighborhood , and vice versa . while it is possible to construct directly by applying 25 operations starting with , it can be efficiently made using only 9 operations plus two local operations . for the quantum encoding scheme ,the final state is given by therefore , the state can be efficiently built by 19 operations with the help of two hadamard operations , instead of 35 operations , and the logical operation expressed by shows that a single physical operation is sufficient to create a logical operation between logical qubits . while the encoding procedure for graph states is straightforward to implement , its interpretation in terms of edge local complementation is not obvious in general .for example , any encoding of a cluster state with an odd number of qubits is difficult to express in terms of edge local complementations , each requiring an even number of hadamard operations .the interpretation of encoding linear -qubit cluster states through edge local complementation is straightforward , however .consider for example the linear four - qubit logical cluster state .first one assigns five qubits each to registers a , b , c , and d. 
after assigning a core qubit from each , designated , , , and , respectively , one prepares the linear four - qubit cluster state for , where is given in eq .( [ eq : classice01 ] ) .the first step is to perform four hadamard operations on . applying two hadamard operations on qubits and , the intermediate graph state is equal to latexmath:[\[\begin{aligned } \hspace{-1 cm } |\psi_{inter } \rangle_{a_1-d_1 } = elc(a_1,b_1 ) edge local complementation .because and share an edge but their neighborhoods are disjoint , it is reasonable to associate the subsequent hadamard operations on qubits and with another edge local complementation on the edge .the resulting state is equal to another linear four - qubit cluster state , but with the vertex labels permuted : after four ghz - type operations and the second set of four hadamard operations on , again corresponding to two edge local complementations , the outcome is a linear four - qubit cluster state with classical encoding .the hadamard operations not only effect the edge local complementation ; they also reverse the permutation of the vertex labels above .finally , the quantum encoding scheme on all the qubits yields a logical four - qubit cluster state , which is sufficient for universal quantum computation with 5qecc .this procedure can be trivially extended to any even - length chain , by applying hadamard gates in pairs on nearest - neighbor edges in order to implement edge local complementations from the left boundary of the chain to the right .the main result presented in this manuscript is a proof that the action of edge local complementation on a graph state can be effected solely through the use of two hadamard operations applied to the edge qubits .a crucial assumption in this proof is that the neighborhoods of the edge qubits were disjoint , i.e. that the neighbors of the first edge qubit were different from the neighbors of the second qubit . under this restriction ,edge local complementation interchanges the respective neighborhoods , i.e. , while simultaneously making neighbors of all the neighbors . in principle , this transformation would require a large number of either local unitary operations on the graph - state qubits or entangling gates between various qubits .the distinct advantage of the present scheme is the large savings in the number of ( local ) operations required . as an example of the utility of this insight, we show how edge local complementation can be used to efficiently create classically encoded cluster states and one - dimensional logical cluster states based on the five - qubit error - correcting code , for an even number of logical qubits . in this scheme , a physical operation , together with local operations , is sufficient to create a logical operation between two logical qubits .arbitrary encoded graph states can be obtained by a straightforward extension of the procedure described above .the operations encoding a logical qubit , eq .( [ eq : classice01 ] ) , are local to the physical qubits comprising the logical qubit , and therefore commute with one another . it therefore suffices to first construct the desired graph state with the core qubits , associate four ancillae to each core qubit , and operate independently with eq .( [ eq : classice01 ] ) on each five - qubit register .multipartite entangled states that fundamentally include fault tolerance might be desirable for practical measurement - based quantum computing and multipartite quantum communication . 
for a generalized scheme of level- logical graph states based on our proposal ,the same encoding procedure can be used repeatedly .since the level-1 logical graph state is made by our protocol , a level- concatenated logical graph state becomes an initial state to create a level-( + 1 ) concatenated one ( acknowledgements ---------------- the authors are grateful to t. p. spiller and jinhyoung lee for stimulating discussions .this work was supported by the natural sciences and engineering research council of canada , the mathematics of information technology and complex systems quantum information processing project , and the quantum interfaces , sensors , and communication based on entanglement integrating project .9 m hein , j eisert , and h j briegel ( 2004 ) multiparty entanglement in graph states _ phys .a _ * 69 * 062311 ; m van den nest , j dehaene , and b de moor ( 2004 ) graphical description of the action of local clifford transformations on graph states _ phys . rev .* 69 * 022316 c - y lu , x - q zhou , o g " uhne , w - b gao , j zhang , z - s yuan , a goebel , t yang , and j - w pan ( 2007 ) experimental entanglement of six photons in graph states _ nat . phys . _ * 3 * 91 ; w - b gao , c - y lu , x - c yao , p xu , o g " uhne , a goebel , y - a chen , c - z peng , z - b chen , and j - w pan ( 2010 ) experimental demonstration of a hyper - entangled ten - qubit schr " odinger cat state _ nat . phys . _ * 6 * 331 r raussendorf and j harrington ( 2007 ) fault - tolerant quantum computation with high threshold in two dimensions _ phys . rev_ * 98 * 190504 ; r raussendorf , j harrington , and k goyal ( 2007 ) topological fault - tolerance in cluster state quantum computation _ new j. phys . _ * 9 * 199 a cosentino and s severini ( 2009 ) weight of quadratic forms and graph states _ phys . rev .* 80 * 052309 ; j dehaene and b de moor ( 2003 ) clifford group , stabilizer states , and linear and quadratic operations over gf(2 ) _ phys .* 68 * 042318 l e danielsen , m g parker , c riera , j g knudsen ( 2010 ) on graphs and codes preserved by edge local complementation _arxiv:1006.5802 _ ; l e danielsen and m g parker ( 2008 ) edge local complementation and equivalence of binary linear codes _ des .codes cryptogr . _ * 49 * 161 m van den nest , j dehaene , and b de moor ( 2004 ) local equivalence of stabilizer states _ proceedings of the 16th international symposium of mathematical theory of networks and systems ( mtns2004 ) _ ( belgium : katholieke universiteit leuven )
|
a method is presented for the implementation of edge local complementation in graph states , based on the application of two hadamard operations and a single controlled - phase ( cz ) gate . as an application , we demonstrate an efficient scheme to construct a one - dimensional logical cluster state based on the five - qubit quantum error - correcting code , using a sequence of edge local complementations . a single physical cz operation , together with local operations , is sufficient to create a logical cz operation between two logical qubits . the same construction can be used to generate any encoded graph state . applied in concatenation , this approach may allow one to create a hierarchical quantum network for quantum information tasks .
|
quantum key distribution ( qkd ) has attracted great attention as an unconditionally secure key distribution scheme .the basic idea of qkd protocol is to exploit the quantum mechanical principle that observation in general disturbs the system being observed .thus , if there is an eavesdropper ( eve ) listening while the two legitimate communicating users , namely alice and bob , attempt to transmit their key , the presence of the eavesdropper will be visible as a disturbance of the communication channel that alice and bob are using to generate the secret key .alice and bob can then throw out the key bits established while eve was listening in , and start over .the key generation rate , which is the length of the securely sharable key per channel use , is one of the most important criteria for the efficiency of the qkd protocol . the first qkd protocol , which was proposed in 1984 , is called bb84 after its inventors ( bennet and brassard ). qkd protocol usually consists of two parts : a quantum and a classical part . in the quantum part, alice sends qubits prepared in certain states to bob .the states of these qubits are encodings of bit values randomly chosen by alice .bob performs a measurement on the qubits to decode the bit values . for each of the bits , both the encoding and decodingare chosen from a certain set of operators .after the transmission steps , alice and bob apply _ sifting _ where they publicly compare the encoding and decoding operator they have used and keep only the bit pairs for which these operators _match_. once alice and bob have correlated bit strings , they proceed with the classical part of the protocol . in a first step , called _parameter estimation _, they compare the bit values for randomly chosen samples from their strings to estimate the quantum channel .after the parameter estimation , alice and bob proceed with a classical processing , where alice and bob share a secret key based on their bit sequences obtained in the quantum part .mathematically , quantum channels are described by trace preserving completely positive ( tpcp ) maps .conventionally , in the bb84 protocol , we only used the statistics of the matched measurement outcomes , which are transmitted and received in the same basis , to estimate the tpcp map that describes the quantum channel , while the mismatched measurement outcomes , which are transmitted and received in different bases , were discarded .however , watanabe _ et al . _ showed that by using the statistics of _ both _ matched and mismatched measurement outcomes , the tpcp maps describing the quantum channel can be estimated more accurately .they implemented a practical classical processing for the six - state and bb84 protocols that utilizes their accurate channel estimation method and showed that the key rates obtained with their method were at least as high as the key rates obtained with the standard processing by shor and preskill . in the bb84 protocol , alice creates random bits of 0 and 1 with equal probability .then , alice and bob each chooses between the two bases , i.e. the rectilinear basis ( or basis ) of vertical ( ) and horizontal ( ) polarizations , and the diagonal basis ( or basis ) of and polarizations , with equal probability . 
proposed a simple modification of the standard bb84 protocol by assigning significantly different probabilities to the different polarization bases during both transmission and reception .they showed that the modification could reduce the fraction of mismatched measurement outcomes , thus nearly doubles the efficiency of the bb84 protocol . in this paper, we propose a modification of the bb84 protocol by assigning a different transmission probability to each transmitted qubit _ within _ a single polarization basis . while in classical information , assignment of different probability to each input bit can increase the mutual information of asymmetric channels (* problem 7.8 ) , in quantum key distribution the benefit of assigning a different transmission probability to each transmitted qubit was unknown .we show that by setting a different transmission probability to each transmitted qubit , we can improve the key rate and achieve a higher key rate .we demonstrate this fact by using the accurate channel estimation over the amplitude damping channel .we determine the optimum bit transmission probability that maximizes the key rate .in this section , we describe a modification of the bb84 protocol where the transmission probability of each qubit within a single polarization basis is not necessarily equal .the protocol consists of a quantum and a classical part .the quantum part includes the distribution and measurement of quantum information , and is determined by the operators that alice and bob use for their encoding and decoding . for simplicity, we assume that eve s attack is the collective attack , i.e. the channel connecting alice and bob is given by tensor products of a channel from a qubit density matrix to itself . as is usual in a lot of qkd literature, we assume that eve can access all the environment of channel .the channel to the environment is denoted by . in the modified bb84 protocol ,alice chooses random bits of 0 and 1 according to the probability distribution alice modulates each bit into a transmission basis that is randomly chosen from the basis and the basis , where and are the eigenstates of the pauli matrix for .we occasionaly omit the subscripts of the basis , and the basis is regarded as basis unless otherwise stated .then bob randomly chooses one of the measurement observables for , and converts a measurement result or into a bit or , respectively .note that alice and bob keep the the mismatched measurement outcomes to estimate the channel more accurately . the classical part of the protocol that we consider is essentially the same as the classical part of the protocol proposed by watanabe _ et al . _ .however , since we assign a different transmission probability to each transmitted qubit with a single polarization basis ( see eq .( [ probx ] ) ) , some adjustments need to be made accordingly .the classical part of our protocol consists of two subprotocols , called _parameter estimation _ and _ classical post - processing_. the main purpose of the parameter estimation subprotocol is to estimate the amount of information gained by the eavesdropper eve during the distribution of the quantum information .after the parameter estimation , alice and bob proceed with a classical subprotocol .hereafter , we treat only alice s bit sequence that is transmitted in basis and the corresponding bob s bit sequence that is received in measurement , where is the finite field of order 2 .our goal is to generate a secure key pair , using and . 
herealice and bob want to generate a key pair which is statistically independent of eve s information by cloning the quantum objects and looking at the conversation over the public authenticated channel .the protocol we consider is _ one - way _, i.e. only communication from alice to bob or from bob to alice , is needed .it consists of the following steps : 1 ._ information reconciliation _ :alice sends error correction information to bob . using the correction information ,bob decode the bit string into an estimate of .privacy amplification _ :alice randomly chooses a hash function from a set of universal hash functions and sends the choice to bob over the public channel .then , alice and bob compute and , respectively. the above procedure is usually called _ the direct reconciliation_. the procedure in which the roles of alice and bob are switched is called _ the reverse reconciliation _ .since the pair of the sequences is transmitted and received in basis , they are independently identically distributed according to note that the distribution can be estimated from the statistics of the sample bits that are transmitted in basis and measured by the observable .the secure key rate is determined according to the result of the privacy amplification . for the direct reconciliation ,let be the conditional von neumann entropy with respect to density matrix where is the von neumann entropy for a density matrix and is the probability distribution shown in eq .( [ probx ] ) . the secure key rate is while for the reverse reconciliation , we can calculate the conditional von neumann entropy from the channel as follows .we define for the entangled state let be a purification of , and let $ ]. then the density matrix is derived by measurement on bob s system , i.e. for the reverse reconciliation , the secure key rate is the stokes parametrization , the qubit channel can be described by the affine map parametrized by 12 real parameters as follows : \mapsto \left [ \begin{array}{ccc } r_{zz } & r_{zx } & r_{zy } \\r_{xz } & r_{xx } & r_{xy } \\r_{yz } & r_{yx } & r_{yy } \end{array } \right ] \left [ \begin{array}{c } \theta_{z } \\ \theta_{x } \\ \theta_{y } \end{array } \right]+ \left [ \begin{array}{c } t_{z } \\ t_{x } \\ t_{y } \end{array } \right ] , \label{channelparam}\ ] ] where describes a vector in the bloch sphere . when alice and bob use only basis and basis , the statistics of the input and output are irrelevant to the parameters in eq .( [ channelparam ] ) .thus we can only estimate the parameters by the accurate channel estimation and we have to consider the worst case for the parameters , i.e. where is the set of all parameters such that the parameters and constitute a qubit channel , and is the density matrix which corresponds to the parameter .we can simplify the form of the desired function when eve s ambiguity is convex .we can prove the convexity of eve s ambiguity with respect to in our protocol by using the same technique used by watanabe _* lemma 2 ) . by the convexity of eve s ambiguity , the minimization in eq .( [ fomega1 ] ) is achieved when the parameters , , , , and , are all ( * ? ? ?* proposition 1 ) .hence , the number of free parameters can be reduced to 1 and the remaining free parameter is .thus the the problem is rewritten as looking for an estimator of where is the set of parameters such that the parameters and consitute a qubit channel when other parameters are all , and is the density matrix corresponding to the parameter . 
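the quantities entering these rate formulas can be evaluated numerically for a concrete channel . the sketch below ( python with numpy ) is our own illustration , not the exact worst - case estimator of the paper : it assumes collective attacks , takes eve to hold the purifying environment of the channel , and evaluates the devetak - winter forms r_direct = s(x|e ) - h(x|y ) and r_reverse = s(y|e ) - h(y|x ) for a biased input distribution ( q , 1-q ) , using the amplitude damping channel of the next section as the worked example ; the damping strength and the grid over q are illustrative .

```python
import numpy as np

def vn_entropy(rho):
    """von neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def shannon(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def key_rates(q, p):
    """direct / reverse reconciliation rates for z-basis input distribution (q, 1-q)
    over an amplitude damping channel of strength p, with eve holding the environment."""
    qx = np.array([q, 1.0 - q])
    # stinespring dilation:  |0>_A -> |0>_B |0>_E
    #                        |1>_A -> sqrt(1-p) |1>_B |0>_E + sqrt(p) |0>_B |1>_E
    # m[x][y, e] = amplitude of bob outcome y and eve basis state e given alice bit x
    m = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, np.sqrt(p)], [np.sqrt(1.0 - p), 0.0]])]
    p_xy = np.array([[qx[x] * float((m[x][y] ** 2).sum()) for y in (0, 1)] for x in (0, 1)])
    rho_e_x = [m[x].T @ m[x] for x in (0, 1)]              # eve's state given alice's bit
    rho_e = qx[0] * rho_e_x[0] + qx[1] * rho_e_x[1]
    rho_xe = np.zeros((4, 4))                              # classical X register with eve
    rho_ye = np.zeros((4, 4))                              # classical Y register with eve
    for x in (0, 1):
        rho_xe[2 * x:2 * x + 2, 2 * x:2 * x + 2] = qx[x] * rho_e_x[x]
    for y in (0, 1):
        rho_ye[2 * y:2 * y + 2, 2 * y:2 * y + 2] = sum(
            qx[x] * np.outer(m[x][y], m[x][y]) for x in (0, 1))
    s_x_e = vn_entropy(rho_xe) - vn_entropy(rho_e)         # S(X|E)
    s_y_e = vn_entropy(rho_ye) - vn_entropy(rho_e)         # S(Y|E)
    h_x_y = shannon(p_xy) - shannon(p_xy.sum(axis=0))      # H(X|Y)
    h_y_x = shannon(p_xy) - shannon(p_xy.sum(axis=1))      # H(Y|X)
    return s_x_e - h_x_y, s_y_e - h_y_x

p = 0.2                                                    # illustrative damping strength
grid = np.linspace(0.01, 0.99, 197)
direct = np.array([key_rates(q, p)[0] for q in grid])
reverse = np.array([key_rates(q, p)[1] for q in grid])
print("direct : best q = %.3f , rate = %.4f" % (grid[direct.argmax()], direct.max()))
print("reverse: best q = %.3f , rate = %.4f" % (grid[reverse.argmax()], reverse.max()))
```

in the limit of no damping both rates reduce to the binary entropy of q and are maximized by the unbiased choice q = 1/2 ; for nonzero damping the optimum generally moves away from 1/2 , which is the effect exploited in the next section .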
in this section , we calculate the key rates of the bb84 protocol with our proposed procedure over the amplitude damping channel , and determine the optimum bit transmission probability that maximizes the key generation rate .we clarify the fact that the key rates using the optimum bit transmission probability of the proposed bb84 protocol is higher than those of the conventional protocol . in the stokes parametrization , the amplitude damping channel given by the affine map \mapsto \left[\begin{array}{ccc } \hspace{-1mm}1-p\hspace{-1mm}&\hspace{-1mm}0\hspace{-1mm}&\hspace{-1mm}0\hspace{-1mm}\\ \hspace{-1mm}0\hspace{-1mm}&\hspace{-1mm}\sqrt{1-p}\hspace{-1mm}&\hspace{-1mm}0\hspace{-1mm}\\ \hspace{-1mm}0\hspace{-1mm}&\hspace{-1mm}0\hspace{-1mm}&\hspace{-1mm}\sqrt{1-p}\hspace{-1mm}\end{array}\right ] \left [ \begin{array}{c } \theta_{z } \\ \theta_{x } \\ \theta_{y } \end{array } \right]+ \left [ \begin{array}{c } p \\ 0 \\ 0 \end{array } \right],\ ] ] where . in the bb84 protocol, we can estimate the parameters , , , , , and . as explained in the previous section, we can set .furthermore , by the condition on the tpcp map we can decide the remaining parameter as . by straightforward calculation, the asymptotic key generation rates for the direct and reverse reconciliations are and respectively , where is the binary entropy function . from eqs .( [ ratedirect ] ) and ( [ ratereverse ] ) , we can easily see for , the asymptotic key generation rates for both direct and reverse reconciliations reach the maximum value when .please recall that is the bit transmission probability of bit 0 ( see eq .( [ probx ] ) ) .we can derive the optimum bit transmission probability by the extreme value theorem .let be the optimum bit transmission probability , i.e. the bit transmission probability ( of bit 0 ) that maximizes the key generation rate such that the key generation rate is positive .then the channel parameter and the optimum bit transmission probability satisfy the following condition : * for direct reconciliation where and . * for reverse reconciliation where and . of the amplitude damping channel . proposed reverse " and proposed direct " are the maximum asymptotic key generation rates for the reverse and direct reconciliations with the optimum bit transmission probability , respectively . while conventional reverse " and conventional direct " are the asymptotic key generation rates for the reverse and direct reconciliations when , respectively given in . ] the key rates for the direct and reverse reconciliations using the optimum bit transmission probability are plotted in fig . [ comparison ] .we find that the proposed key rates , i.e. the key rates when , are higher than the conventional ones , i.e. the key rates when , in both the direct and reverse reconciliations .in the direct reconciliation , the proposed key rate is slightly higher than that of the conventional one so that the lines of the two key rates seem to overlap one another . while in contrast , in the reverse reconciliation , the proposed key rate grows much higher than the conventional one as the parameter increases . 
and especially when the parameter , we can see that the proposed key rate is more than twice as high as the conventional one .in this paper , we proposed a simple modification of the bb84 protocol where the transmission probability of each qubit within a single polarization basis is not necessarily equal .we showed that by assigning a different transmission probability to each transmitted qubit , we can generally increase the key generation rate of the bb84 protocol .we demonstrated this by using the accurate channel estimation over the amplitude damping channel .we determined the optimum bit transmission probability that maximizes the key generation rate .we showed that in general , assignment of an equal probability to each qubit within a single polarization basis is not necessarily optimal in qkd protocol .we would like to thank dr .shun watanabe for valuable discussions .
|
in most previous work on the bb84 protocol , the transmission probability of each bit value is set to be equal . in this paper , we show that by assigning a different transmission probability to each transmitted qubit within a single polarization basis , we can generally improve the key generation rate of the bb84 protocol . we demonstrate this over the amplitude damping channel and determine the optimum bit transmission probability that maximizes the key rate .
|
massless triangle feynman diagrams occur in field theoretical perturbative calculations , being one of the primary divergent ( one - particle irreducible ) graphs for the non - abelian gauge theories such as yang - mills fields .this makes this kind of perturbative calculation the more interesting and necessary once we want to check the higher order quantum corrections for many physical processes of interest .two decades ago , boos and davydychev were the first to obtain the analytic formulae for the one - loop massless vertex diagram using the mellin - barnes complex contour integral representation for the propagators .later on , suzuki , santos and schmidt reproduced the same result making use of the negative dimensional integration method ( ndim ) technique of halliday and ricotta for feynman integrals , where propagators are initially taken to be finite polynomials in the integrand , and once the result is obtained , analytically continued to the realm of positive dimensions .these two widely different methods of calculation yielding the same answer lends to the analytic result obtained a degree of certainty as far as the mathematical soundness is concerned .however , here i argue that mathematical soundness only is not enough when we are interested in processes of physical content .neither the mellin - barnes formulation of boos and davydychev nor the ndim technique employed by suzuki _ et al _ to evaluate the massless triangle graph take into account the basic physical constraint imposed on such a diagram due to the momentum conservation flowing in the three legs .momentum ( that is , energy and three - momentum ) conservation is one of the basic , fundamental tenets of modern natural sciences , and plays a key role in determining the final form for the analytic result for the massles one - loop vertex correction . the system of coupled , simultaneous partial differential equations : \frac{\partial z}{\partial x } - ( \alpha+\beta+1)y\frac{\partial z}{\partial y}-\alpha \beta z = 0 \nonumber \\y(1-y)\frac{\partial^2 z}{\partial y^2}-x^2\frac{\partial^2 z}{\partial x^2}-2xy\frac{\partial^2 z}{\partial x \partial y } + [ \gamma\hspace{.05cm}'-(\alpha+\beta+1)y]\frac{\partial z}{\partial y } - ( \alpha+\beta+1)x\frac{\partial z}{\partial x}-\alpha \beta z = 0 \end{aligned}\ ] ] has a solution which consists of a linear combination of four hypergeometric functions of type , as follows : where it is implicit that and are independent variables . now the analytic solution for the massless triangle feynman diagram given by boos and davydychev and suzuki _ et al _ has exactly the same structure as the above solution for the simultaneous partial differential equation. however , the variables and for the triangle feynman diagram are given by ratios of momentum squared , such as and where , so that the variables and are not independent , but constrained by momentum conservation .therefore , the solution for the feynman diagram can not be a linear combination of four linearly independent hypergeometric functions , but a linear combination of _ three _ linearly independent hypergeometric solutions .one of the simplest examples where the four - term hypergeometric combination above mentioned does not reproduce the correct result is when it is embedded in a higher two - loop order calculation .consider , for example , the diagrams of figure 1 . 
for neither of the diagrams the four - term solution for the triangle when embedded into the two - loop graph yields the correct answer .the right result can only be achieved when we use a three - term triangle solution embedded into the two - loop structure .one could use the analytic continuation formula for the function in order to reduce the number of independent functions to three ; however , one has at least two possibilities to do this : either combining ( [ a ] ) and ( [ c ] ) or combining ( [ b ] ) and ( [ d1 ] ) , so that , for example , for the former case therefore , which three of those s should be considered can not be determined by the momentum conservation only ; it needs another physical input to this end . in order to determine the exact structure and content of the analytic result for the triangle diagram i employ the analogy that exists between feynman diagrams and electric circuit networks .the conclusion of the matter can be summarized as follows : since there is a constraint ( call it either a initial condition or a boundary condition ) to the momenta flowing in the legs of a triangle diagram , it means that the overall analytic answer obtained via mellin - barnes or ndim technique , which comes as a sum of four linearly independent hypergeometric functions of type , in fact must contain only three linearly independent ones . which three of these should be is determined by the momentum conservation flowing through the legs of the diagram , _ and _ , by the circuit analogy , by the equivalence between the `` y - type '' and `` -type '' resistance networks through which _ conserved _ electric current flows .although known for a long time and mentioned sometimes in the literature of field theory , the analogy between electric circuit networks and feynman diagrams , more often than not stays as a mere curiosity or at a diagrammatic level in which the drawing is only a representation that helps us out in the `` visualization '' of a given physical process .there is , however , a few exceptions to this . the earliest one that i know of is by mathews as early as 1958 where he deals with singularities of green s functions and another one by wu in 1961 , where he gave an algebraic proof for several properties of normal thresholds ( singularities of the scattering amplitudes ) in perturbation theorywhat i am going to do here is just to state the well - known result for the equivalence between `` ''- and `` -circuits '' which is used , for example , in analysing the `` kelvin bridge '' in relation to the more commom `` wheatstone brigde '' .the equivalence i consider is for the network of resistors , where energy is dissipated via joule effect .then with this in hands i argue which three out of the overall four hypergeometric functions present in the one - loop vertex obtained via mellin - barnes or via ndim should be considered as physically meaningful .consider then the `` ''- and `` -circuits '' of figure 2 .these two network of resistors are related to each other ; for example , the `` delta - circuit '' can be made equivalent to the `` y - circuit '' when resistors obey the following equations : in electric circuits a general theorem guarantees that the current density in a conductor distributes itself in such a way that the generation of heat is a minimum . 
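the equivalence can be made concrete in a few lines of code . the sketch below ( python ; the resistance values are arbitrary illustrative numbers ) converts a delta network into its equivalent y network with the standard formulas and checks that every pair of terminals presents the same resistance in the two networks , which is what guarantees equal joule dissipation for any set of conserved terminal currents .

```python
def delta_to_y(r_ab, r_bc, r_ca):
    """standard delta -> y conversion for a three-terminal resistor network."""
    s = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / s   # y resistor attached to terminal a
    r_b = r_ab * r_bc / s   # terminal b
    r_c = r_bc * r_ca / s   # terminal c
    return r_a, r_b, r_c

def parallel(x, y):
    return x * y / (x + y)

r_ab, r_bc, r_ca = 3.0, 5.0, 7.0            # delta network (illustrative values)
r_a, r_b, r_c = delta_to_y(r_ab, r_bc, r_ca)

pairs = {
    "a-b": (r_a + r_b, parallel(r_ab, r_bc + r_ca)),
    "b-c": (r_b + r_c, parallel(r_bc, r_ab + r_ca)),
    "c-a": (r_c + r_a, parallel(r_ca, r_ab + r_bc)),
}
for name, (r_y, r_delta) in pairs.items():
    print(name, round(r_y, 6), round(r_delta, 6))   # the two columns coincide

# since every terminal pair sees the same resistance, any conserved set of injected
# currents i1 + i2 + i3 = 0 dissipates the same power in the two networks.
```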
strictly speaking , since i consider the `` y''- and `` delta - networks '' by themselves and not embedded within an electrical circuit , i can not consider the minimum of power generation , but still i can consider the equivalence between the power generated in one of them say `` y - circuit '' as compared to the power generated in the other one , the `` delta - circuit '' . labelling the currents that flow through resistors , and of the `` y - circuit '' by , and , respectively , by reason of current conservation we have that , for example each of these currents will generate heat according to joule s law and the corresponding power will be given by ohm s law .so in each leg , the product of the resistance times the square of the current will give the power generated by the current flowing through it .the overall power generated in the `` y - network '' is therefore : from this we have the following system of equations : where for convenience i have defined , , and . multiplying the last equation by and comparing with ( [ d ] ) , it follows that the coefficient of is . multiplying ( [ d ] ) by and comparing with ( [ e ] ) , it follows that the coefficient of is .finally , multiplying ( [ f ] ) by and comparing with ( [ d ] ) , it follows that the coefficient of is . on the other hand ,in the `` delta - network '' , let label the current flowing through resistance , so that the power generated in this resistor is , using ( [ ra ] ) now , as the resistances follow the equivalence defined by ( [ ra ] ) , the current that flows through must be such that : where and therefore , the solution for the one - loop triangle feynman diagram must be such that it contains three terms which are proportional to `` currents '' , and .this solution is exactly achieved when i combine ( [ b ] ) and ( [ d1 ] ) in a similar way as was done in ( [ anacont ] ) . combination of ( [ a ] ) and ( [ c ] ) as in ( [ anacont ] ) leaves the term ( [ d1 ] ) in the solution , which is proportional to , and therefore not suitable since it violates the equivalence above demonstrated .explicit analytic solution for the one - loop massless triangle feynman diagram reads therefore where with being the dimensional regularization parameter .one notes that the variables for the last , though related to those that appear in the first and second s , differ from them .using the basic physical principle of momentum conservation i deduced the correct analytic result for the one - loop massless triangle feynman diagram in terms of three linearly independent hypergeometric functions .the reason why there must be only three linearly independent functions in the solution is due to the fact that the variables and are such that implicit in them is a momentum conservation constraint that must be taken into account properly .the set of which three linearly independent functions should be is given by another physical input , deduced in analogy to the equivalence between `` y''- and `` delta - network '' of electrical resistance circuits .with the conservation of momentum correctly taken into account , the final result for the mentioned feynman diagram is the physically correct and relevant analytic solution to the problem , having only a combination of three linearly independent functions in it .e.e.boos and a.i.davydychev , _ vestnik mgu _ * 28 * , ( 1987 ) 8 . cited by v.a.smirnov in his book renormalization and asymptotic expansions , page 124 .volume 14 of the series progress in physics , birkhuser verlag , basel , boston , berlim ( 1991 )
|
different mathematical methods have been applied to obtain the analytic result for the massless triangle feynman diagram , yielding a sum of four linearly independent hypergeometric functions . in this paper i work out the diagram and show that this result , though mathematically sound , is not physically correct , because it misses a fundamental physical constraint imposed by the conservation of momentum , which should reduce by one the total number of linearly independent ( l.i . ) functions in the overall solution . once the momenta flowing along the three legs of the diagram are constrained by momentum conservation , the number of l.i . functions that enter the most general solution must be reduced accordingly . to determine the exact structure and content of the analytic solution for the three - point function , i use the analogy that exists between feynman diagrams and electric circuit networks , in which the electric current flowing in the network plays the role of the momentum flowing in the lines of a feynman diagram . this analogy is employed to define exactly which three out of the four hypergeometric functions are relevant to the analytic solution for the feynman diagram . the analogy is built on the equivalence between electric resistance circuit networks of type `` y '' and `` delta '' through which a conserved current flows . the equivalence is established via the theorem of minimum energy dissipation for circuits having these structures .
|
it is well known that the day - to - night electricity usage is oscillatory , with a usage valley appearing through the night and a peak occurring during the day . at the same time ,high - frequency ( minute - to - minute and faster ) oscillation results from randomly occurring aggregations of individual loads with short duty cycle .the importance of reducing high - frequency peaks in usage is multi - fold .we can more easily maintain the stability of the grid with reduced amounts of generation reserves such that the grid frequency and voltage are stable .generation cost can be reduced since we will not use generators with large marginal costs . among all classes of electricity demand ,thermostatic loads have been a major contributor to problems of high peak usage . at the same time , thermostatic loads provide thermal capacity such that we can regulate their usage pattern as long as certain baseline thermal requirements are met .this paper presents an approach to carrying out such regulation by means of a novel information - based method for _ direct load control_. historically , thermostatic loads ( air conditioners , electric space heating systems , water heaters , etc . ) have been operated in an uncoordinated fashion resulting in the power grid being exposed to costly random load fluctuations .taking note of the past decades s development of networked control system technologies and novel concepts enabled by smart appliances , such as the so - called _ internet of things _ , we shall study the control of a local network of loads wherein the objective of control is to suppress spikes and fluctuations in usage .the approach uses real - time data from individual devices and local temperature sensors communicating with a central operator who distributes quantized amounts of energy to service the load demands according to a protocol for _ direct load control _ ( dlc ) that we shall describe below .various approaches have been proposed to formulate the dlc problem with the objective of peak load management .the load curve has been studied using a state - queueing model where thermal set points are adjusted automatically as a function of electricity price or outside temperature in and .dynamic programming has been applied in to minimize the production cost in a unit commitment problem , and in to minimize the disutility of consumers resulting from dlc disruption .monte carlo simulation has been applied in to evaluate the effectiveness of a specific dlc approach which minimized the discomfort of overall temperature deviation subject to constraints in transmission lines .multi - server queueing theory has been used to calculate the mean waiting time of consumers when the usage authorization is limited during peak hours in .this system has been applied in a total of 449 residential units located in seoul with good performance .the objective of our approach is to monitor and control aggregate electricity use in order to avoid random spikes in demand that would otherwise occur .the mechanism that implements the approach is something that we call _ packetized _ direct load control ( pdlc ) .the term _ packetized _ refers to the idea of _ time - packetized _ energy where the central operator authorizes electricity usage of individual loads for a fixed amount of time .after the elapse of time , the central operator reschedules the authorization . 
for each building , the central operator is connected to the on / off switch of thermostatic loads ( fan coils or room air conditioners ) .users in the building are assumed to authorize the operator to control the on / off switch of their thermostatic smart appliances once they provide the operator their preferred temperature set point .the central operator , who receives thermal information on all the appliances at each decision instant , has the objective to maintain all appliances within their comfort band by selectively turning on or off these thermostatic loads at discrete time instants .the pdlc provides flexibility in adjusting building consumption since we are actually dealing with a discrete time decision making problem where the central operator schedules packets at the beginning of each interval .it will be shown that given a minimum critical level of energy capacity , it is possible to both eliminate demand peaks and guarantee a narrow comfort band around each consumer s preferred temperature setting . in a theoretical sense , it is further shown that the width of the comfort band can be made to approach zero by letting the packet length approach zero , although practically speaking the cycle time of an air conditioning unit can not be made arbitrarily short . in the end ,the pdlc solution is able to smooth the consumption oscillations , and this in turn enables buildings to consume smaller amounts of reserves dispatched from the iso .the remainder of the paper is organized as follows .section [ two ] introduces the set up of the pdlc mechanism , followed by the investigation of a thermal model in section [ three ] .section [ four ] and [ five ] discuss the transient and steady state operation of the pdlc solution respectively .section [ link section ] discusses an allocation solution to link theorem 1 and theorem 2 .a robustness analysis is given in section [ six ] .section [ seven ] provides simulation results .section [ eight ] concludes the paper and proposes future work .this section describes the model in terms of which the pdlc mechanism is proposed .the following few points compose the background of the proposed approach .\(1 ) the pdlc controls the thermostatic loads in a building , such as air conditioners , refrigerators , and water heaters .the thermal dynamic model of these appliances does not differ much ; the investigation of the thermal model of air conditioners in the next section can be extended to other thermostatic loads with minor change .\(2 ) the pdlc is assumed to be an on / off control .all the appliances are assumed to running with rated power if packet is authorized , or consume nothing if packet authorization is denied .there is no intermediate operation choice .\(3 ) different feeders are in charge of different types of loads , and they are all connected to the central operator who schedules electricity packets .the loads that have been grouped together in the same feeder consume energy at the same rates when they are operating .the overall consumption of the building is the sum of the consumption in each feeder controlled by the pdlc mechanism plus a certain portion of uncontrollable loads , including lighting and plug - in devices such as computers , televisions , and other small appliances .we assume that the consumption of the uncontrollable loads is independent of thermostatic loads and the environments ( temperature , humidity , etc . 
) , and these uncontrollable loads are subtracted from the analysis of the pdlc framework .\(4 ) it is assumed that the target level of consumption in each feeder of thermostatic load is available beforehand , which is defined as the average consumption during peak time when no control is applied .the value of the proposed method rests on the evidence - based assumption that the consumption curve without control would oscillate around the target level , and consumption peaks will frequently exceed the target level by a significant amount .the control objective of the pdlc is to make the consumption curve smoother around the target level with minimum oscillation .a model of the thermal dynamics of an air conditioner is developed as follows .ihara and schweppe presented a dynamic model for the temperature of a house regulated by air conditioning , and this has been shown to capture the behaviour of air conditioner loads accurately .the temperature dynamics in continuous time ( ct ) is given by where is the outside temperate , is the temperature gain of air conditioner if it is on , is the effective thermal time constant of the room , and is binary valued specifying the state of thermostat .the unit of parameters is fahrenheit as in the original paper .the temperature dynamic model in discrete time ( dt ) with interval is given by where , = , and is s value during the -th interval .we first derive the duty cycle off - time and on - time based on the ct model for the case in which there is no pdlc and the air conditioner is operating in the traditional way under the control of its own thermostat . and are the comfort band boundaries . to get , we set in ( [ ct ] ) , which means that the air conditioner is turned off .rearranging terms we have whose general solution is given by since is the time that temperature arises from to in the case of traditional thermostat control , we choose initial condition to solve for .see fig.[duty ] .the overall solution of temperature evolution is given then by the value of would satisfy .after calculation we will have similarly we calculate when , the traditional duty cycle dynamics characterized by and provide the baseline against which the pdlc protocol of the next section is evaluated .to evaluate the pdlc solution , we consider its transient and steady state operation .the next section will discuss its transient operation .the motivation of the pdlc solution is to allow buildings consume electricity at a level that minimizes oscillation close to a target .denote the total number of consumers by , the number of authorized packets by , the set point in room by , and the room temperature in room at time by .the transient process is defined as the duration before the average room temperature converges to the average room set point .the theorem below provides a solution that guarantees the convergence of average room temperature under the assumption that packets are being allocated to a pool of appliances during each packet interval .* theorem 1 . *if the fixed number of packets is used in each time interval , then the average room temperature converges to the average room set point . _proof : _ we use the dt model to derive the convergence of the average room temperature . 
according to ( [ dt ] ), we can represent the number of authorized packets in terms of the dt model parameters and as follows in one packet length , the total temperature decrease by packets is given by where the last equality follows from ( [ mm ] ) .similarly , the total temperature increase , which is caused by indoor / outdoor temperature difference , is given by the total temperature change is given by can be expressed recursively as we will have the difference between and average room temperature at time given by for any small deviation from , we will have after steps , with satisfying this means the average room temperature will converge to an arbitrarily small neighbourhood of after finite number of steps. we say that the system is in _ steady state thermal equilibrium _ ( sste ) when the average room temperature is within a sufficiently small neighbourhood of .if the system is in sste at time , then the system will be in sste for as long as we provide packets at each interval . according to ( [ step1 ] ) , the convergence speed depends on and . if these two parameters do not provide a quick convergence with few steps ( being large in a warm load pick up process ) , we can adjust the number of packets as a function of the average temperature deviation at time .let the modified number of packets be given by ,\ ] ] where is a non - negative coefficient . in this case , for any we can similarly prove that after steps the deviation of average room temperature from is smaller than , with satisfying \ln\frac{|t_{set}^{ave}-t_{0}^{ave}|}{\epsilon},\ ] ] where can be understood as the convergence gain parameter . comparing ( [ step2 ] ) with ( [ step1 ] ) , we have for the same since .the larger the value of ( or ) , the quicker the convergence .if in ( [ revise number packet ] ) is not an integer , we can choose the ceil as the number of packets scheduled .the proof remains valid under this choice .theorem 1 indicates that the average consumption is proportional to the total population by the coefficient .the physical meaning of this coefficient is the thermostat mean status .define representing the mean on - status and off - status of the thermostat .these two variables will be used in the second theorem for the steady state analysis of the pdlc . notethat an essential implicit assumption is that , i.e. there is enough cooling capacity to serve the consumer population .when no control is applied , each air conditioner will operate according to its own duty cycle as described in sec.[three ] .all the room temperatures are controlled around their respective set points , and the average room temperature is approximately equal to the average room set point , namely . from the first theorem, the system will evolve into sste within a few steps when the pdlc is applied .we say that the system is in _ steady state _ at time if it is in sste and for all .when the pdlc solution is applied in steady state , consumers in each room have the freedom to choose the set point to be whatever they want .after the set point is given , the operator will choose the comfort band for consumers around their preferred setting . 
the comfort band may be large or small , depending on the outside temperature and the energy we have purchased a day ahead .it is a compromise in the pdlc that consumers allow the operator to calculate the comfort band in order to achieve a smoother consumption .denote the comfort band for room around by ( being a fixed value , namely we provide a fixed - valued comfort band for all the consumers ) .define as the critical temperature point of room .the physical meaning of is the following : if room s temperature exceeds at time , then it needs packet at time . otherwise its room temperature will exceed at time following two lemmas provide restrictions on how we choose and .the first lemma provides a condition that the temperature of room will not exceed for all , and the second lemma provides a condition that the temperature of room will not go below for all .* lemma 1 .* assuming the system is in sste , and for all at time , if we provide packets , and and have been chosen to satisfy then there exists such that for all with any packet length ._ proof : _ if , then we have at least rooms with temperature beyond their critical point at time .enumerate the ( or more ) consumers whose room temperature at time : .the remaining ( or fewer ) rooms temperature are greater than for .the average room temperature _ lower bound _ at time is given by .\ ] ] we have \\ & = & \frac{1}{n_{c}}[\sum_{i_{j}\in\textit{s}}\frac{t_{max}^{i_{j}}-at_{out}}{1-a}-\sum_{i_{j}\in\textit{s}}t_{set}^{i_{j } } \\ & & -(n_{c}-m-1)\delta_{2 } ] \\ & \propto & [ \sum_{i_{j}\in\textit{s}}t_{max}^{i_{j}}-(m+1)t_{out}]- \\ & & [ \sum_{i_{j}\in\textit{s}}t_{min}^{i_{j}}+n_{c}\delta_{2}-(m+1)t_{out}]e^{-\frac{\delta t}{\tau}}. \end{array}\ ] ] the first equality is derived from , namely at time in sste the average room temperature is equal to the average temperature set point . the second equality is derived from and ( [ cri ] ) .the last proportionality is derived by plugging from ( [ dt ] ) .+ if we choose and to satisfy ( [ low1 ] ) , then note that the above inequality is strict , so there exists such that letting in ( [ low3 ] ) , we have . since ( [ low3 ] ) is monotonically decreasing as a function of , then for packet length we will have . namely the average room temperature lower bound is greater than the average room temperature , which is a contradiction .we must have for all . * lemma 2 . * assuming the system is in sste , and for all at time , if we provide packets , and and have been chosen to satisfy then there exists such that for all with packet length . _proof : _ the proof is similar to lemma 1 .we first assume that , then derive a average temperature upper bound at time which is smaller than to show contradiction .we omit the details . based on the above two lemmas , we provide the following theorem for the steady state operation of the pdlc . * theorem 2 . * assuming that the system is in sste at time , and for all , if we provide number of packets over time and choose such that then for all and with packet length .proof : _ clearly ( [ theo1 ] ) satisfies ( [ low1 ] ) and ( [ high1 ] ) , and with packet length both lemma 1 and lemma 2 will stand .we will have for all . since we provide packets at time , the system is also in sste at time . by mathematical inductionwe can prove that for all and . * remark 1 .* as the comfort band , we have .according to ( [ pack ] ) we must have , which means we switch packets at increasingly large frequencies . 
in this case, individual room temperatures will stay at individual room set points after time once at time for all .this means that the width of the temperature band can be made to approach zero by letting the packet length approach zero . in actual implementation, there are practical limits on the minimum acceptable value of , say 30 seconds or 1 minute , since the air conditioning unit can not be switched on and off at an arbitrary frequency . hence , convergence is to the comfort band and not to the actual set point .* remark 2 . * from ( [ theo1 ] ) , . when , we have .this can be explained by the intuition that since we are providing packets to more than a half number of consumers ( ) , it is more likely to have consumers being over - cooled. thus we set a larger value of to avoid such an occurrence .similarly when , we set a larger value of to avoid consumers being over - warmed . * remark 3 . *based on the weather prediction , the building would purchase certain amount of packets a day ahead . in real time , the number of packets may not be enough if the predicted temperature is lower than what is actually realized . with the pdlc solution, the operator does not need to purchase additional energy from the real time market when the price is high .the operator can make packets switch more frequently to guarantee temperature control . in such cases , the average room temperature will converge to another value within the comfort band . ** the packet length above is a theoretical value to guarantee temperature control in steady state . in the proof we focus on the worst case when initially at time the temperatures of many rooms are in the vicinity of their maximum or minimum comfort boundary . in practice, the initial temperatures will be distributed more evenly across the comfort band . in such cases ,the practical packet length can be larger than the theoretical value .* remark 5 . * in our modelwe assume that the operator achieves all the temperature information within the building , and such information is continuous . in a companion technical report , we assume that the operator must act on more restricted information . in this model ,the appliance pool operator does not have complete and continuous access to appliance information , but instead receives requests for electricity that appliances send based on their own sensor readings .the operator receives packet request ( withdrawal ) from room when its room temperature reaches ( ) .the total number of available packets is limited which is equal to the expected average consumption .packet supply is modelled as a multi - server queuing system with fixed service time ( packet length ) . in a stochastic simulation , at certain timesconsumers have to wait to be served , and at other times the total number of packets can not be fully used , see fig.[binary ] .this indicates that continuous temperature information and control by an appliance pool operator results in a better control solution than binary information .the final question is how we start from sste and find a packet allocation mechanism such that at time we can start at for all . according to the discrete time thermal dynamics , where the third equality and fourth approximation are by taylor series expansion for small packet length . by a similar derivation we have , where the second equality is obtained by plugging into ( [ relation_k_kp1 ] ) and ignoring terms of for small . 
for intervals , we have , denote as the number of packets received within periods , then the temperature at time is given by , having discussed the discrete time temperature evolution , we propose the following theorem to guarantee that if we start from sste , then there exists a packet allocation solution to satisfy the assumptions in theorem 2 .* theorem 3 . *if the aggregate system is in sste at time ( per the conclusion of theorem 1 ) , let denote the number of packets received by room over the next successive time intervals of length .there exists a choice of packet allocation such that each room temperature is within the consumer s designated comfort band at time .that is , , with the total of allocated packets satisfying _ proof ._ according to ( [ n_period_n_use ] ) , after a total number of packet consumption in a successive periods starting at time , the temperature in room at time is given by , the allowable choice of such that is given by , in order to have at least one integer within the bounds above , we need to have , which can be achieved with a packet length we introduce the floor and ceil operator , .let then can be chosen from integers between and .if the following inequality holds , then there exists a choice of packet allocation such that ( [ ineq_ni ] ) holds and note that and this holds as long as we choose such that . in the derivation above , the third equality is obtained by the sste at time satisfying with similar derivation , a packet length such that will guarantee the second inequality in ( [ alpha beta requirements ] ) . to summarize , a packet length satisfying will make ( [ alpha beta requirements ] ) hold .this ends the proof of theorem 3 . * remark .* according to ( [ dt limit ] ) , the upper bound of packet length is directly proportional to and inversely proportional to .the intuition is that large value of impedes and facilitates the thermal transmission , which allows larger and requires smaller packet length respectively .the remaining issue is to assign packets at each period .denote as the binary variable representing packet assignment at time for room .up to time , define as the remaining number of packet needed for room until time .a simple allocation algorithm works as follows , starting at time we allocate packets to the rooms with largest . let if packet is allocated and otherwise .use ( [ dyn_need_packets ] ) to update for all . repeating such allocation procedure until the end of interval will guarantee allocation each period .we first prove the following inequality of , we prove with induction .note that for is it apparently true . also at time , for , we proof with contradiction .if there exists a room such that and , then .it also indicates that room does not get a packet and there are at least rooms , indexed by , other than such that to get packets .then which contradicts ( [ mn1 ] ) .so we will have for and all . to show that for all .suppose that , it indicates that and room gets a packet .thus there are at most rooms , indexed by , with positive value of .then contradicting ( [ mn1 ] ) again .so we will have for and all . using mathematical induction , for all and , ( [ rangeofni ] ) holds .then for and all , we will have namely all the rooms will have received the exact packets they need and for all with packets allocation for each period .the intuition of such allocation is to provide packets to the rooms that have largest temperature deviation above their target , namely at time the rooms with largest receive packet for . 
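The allocation procedure used in the proof above is easy to state in code. The sketch below implements the greedy rule (in every interval, serve the m rooms with the largest remaining packet need) and numerically checks the invariant argued in the proof: remaining needs never go negative and never exceed the number of remaining intervals, so every room ends up with exactly the packets it needs. Function names and data structures are mine, not the paper's.

```python
# Greedy per-interval packet allocation behind theorem 3.
import numpy as np

def allocate(needs, p, m):
    """needs: packets each room must receive over the next p intervals (sum = m*p)."""
    n = np.array(needs, dtype=int)
    assert n.sum() == m * p and n.min() >= 0 and n.max() <= p
    schedule = []
    for k in range(p):
        order = np.argsort(-n)                 # rooms with largest remaining need first
        u = np.zeros(len(n), dtype=int)
        u[order[:m]] = 1
        n = n - u
        # invariant from the proof: 0 <= n_i <= number of remaining intervals
        assert n.min() >= 0 and n.max() <= p - k - 1
        schedule.append(u)
    assert n.sum() == 0                        # every room received exactly its packets
    return schedule

# example: 4 rooms, 3 intervals, 2 packets per interval (needs sum to 6)
print(allocate([3, 2, 1, 0], p=3, m=2))
```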
to summarize , theorem 1 guarantees that the systems will evolve into sste , theorem 3 guarantees that starting from sste we have an allocation solution such that we can have within the comfort band of room for all , and theorem 2 guarantees temperature control after the allocation .the three theorems complete the overall pdlc mechanism .while ihara and schweppe s model is deterministic , we have also considered temperature disturbances to get a thermal model that reflects uncertainty . temperature disturbance in real life may come with the inaccuracy of sensors , the unpredictability of consumers , etc .the revised temperature dynamics is therefore given by where is a bounded thermal stochastic disturbance uniformly distributed between ] .when packet length is small , will approach zero , and this makes the disturbance term approach zero .then the average room temperature will still converge to . as for the steady state operation, we will have the same comfort band selection as in theorem 2 , namely with the difference in the boundary of packet length selection . in the model with disturbances, we can similarly derive the contingent packet length and as in lemma 1 and 2 .for example , the value of will satisfy compared with ( [ pack ] ) , the only difference is that the term in ( [ pack ] ) is replaced by .hence the disturbance in ( [ rct ] ) can be understood as the uncertainty introduced by the outside temperature .also , the above is smaller than the in lemma 1 .this is no surprise since the existence of uncertainty forces us to switch packets more frequently .we simulate air conditioner temperature control process to verify theoretical results .environmental parameters are .consumers preferred set point is for all .after calculation we choose .fig.3 is the process of warm load pick up .fig.[wp2 ] shows that the average room temperature converges to the set point when we applied the number of packets at time as a function of , which verifies theorem 1 .compared with fig.[wp3 ] where no control is applied , the consumption oscillation by the pdlc solution is reduced by a large amount after the system evolves into sste .the oscillation magnitude in fig.[wp3 ] continues to exist if we simulate for longer time .[ wp ] fig.4 is the steady state process where all the rooms have their initial temperatures randomly distributed within their comfort bands .we see two main advantages of our pdlc solution .first , the maximum and minimum room temperature are controlled within the comfort band in steady state , which can not be achieved without control since then the disturbance drives the temperature outside the comfort band .second , the consumption process is smoother with pdlc solution than in the stochastic uncontrolled case .[ ss ] consider the simulation of multiple appliances . the controllable thermostatic loads are air conditioners and refrigerators .we also add uncontrollable loads , such as lighting and plug - in devices .the thermal characteristics of the refrigerator is similar to the air conditioner .refrigerator parameters is given by , we choose . 
and are around 20 minutes according to ( [ toff ] ) and ( [ ton ] ) , which is typical duty cycle of refrigerator .we assume there are 60 refrigerators each consuming around 600 watts of power .the air conditioner consumes around each .there is also an industrial chiller that consumes with small variation in steady state , which is uniformly distributed between ] .table.1 shows the comparison result between the pdlc solution and the case when no control is applied .we find that standard deviation of consumption by the pdlc solution is nearly half of that without control .also the maximum electric usage is reduced nearly from above its average ..comparison of consumption statistics [ cols="^,^,^,^,^",options="header " , ]this paper proposes an innovative pdlc solution for demand side management .we have discussed a thermal dynamic model of typical thermostatic appliances and derived a mathematical expression of its duty cycle .three theorems are proposed to illustrate overall pdlc solution .the first theorem proves the convergence of the average room temperature to average room set point .the second theorem provides comfort band choice such that we can guarantee effective temperature control in steady state .the third theorem builds the bridge between the first two theorems .simulation shows that the pdlc solution can provide comfortable temperature control with minimum consumption oscillation , and reduce consumption peaks at the same time .future research will compare the performance of the pdlc as described here with comparable distribution control approaches using market based signaling .renewable energy sources will be included , and the dynamics of an appliance pool operator buying and selling resources under different communication protocols will be studied .s. c. lee , s. j. kim , and s. h. kim , `` demand side management with air conditioner loads based on the queuing system model '' , _ ieee trans . power syst ._ , vol . 26 , no . 2 , pp . 661 - 6682011 b. ramanathan , and v. vittal , `` a framework for evaluation of advanced direct load control with minimum disruption '' , _ ieee trans .power syst . _ , vol .23 , no . 4 , pp .1681 - 1688 , nov .2008 j. baillieul , and p. antsaklis , `` special issue on the technology of networked real - time systems '' , _ proceedings of the ieee _ , 95:1 , pp . 5 - 8 , jan .http://www.readwriteweb.com/archives/internet-of-things/ , the alexandra institute n. lu , and d. p. chassin , `` a state - queueing model of thermostatically controlled appliances '' , _ ieee trans .power syst ._ , vol . 19 , no . 3 , pp .1666 - 1673 , aug .2004 n. lu , d. p. chassin , and s. e. widergren , `` modeling uncertainties in aggregated thermostatically controlled loads using a state queueing model '' , _ ieee trans .power syst ._ , vol .20 , no . 2 , pp . 725 - 733 , may 2005 y. hsu , and c. su , `` dispatch of direct load control using dynamic programming '' , _ ieee trans .power syst ._ , vol . 6 , no . 3 , pp .1056 - 1061 , aug .1991 t. lee , h. wu , y. hsiao , p. chao , f. fang , and m. cho , `` relaxed dynamic programming for constrained economic direct loads control scheduling '' , _ international conference on intelligent systems applications to power systems _ , toki messe , niigata , pp . 1 - 6 , 2007 . s. ihara , and f. c. schweppe , physically based modeling of cold load pickup , _ ieee trans. power syst .pas-100 , pp .4142 - 4150 , sep .1981 j. cavallo , and j. 

mapp , targeting refrigerators for repair or replacement , _ proceedings of 2000 aceee summer study on energy efficiency in buildings _ , wisconsin , mar . 2000 b. zhang , and j. baillieul , `` a novel electric packet switching framework based on queuing system analysis '' , _ technical report _ , 2011
|
electricity peaks can be harmful to grid stability and result in additional generation costs to balance supply with demand . by developing a network of smart appliances together with a quasi - decentralized control protocol , direct load control ( dlc ) provides an opportunity to reduce peak consumption by directly controlling the on / off switch of the networked appliances . this paper proposes a packetized dlc ( pdlc ) solution that is illustrated by an application to air conditioning temperature control . here the term packetized refers to a fixed time energy usage authorization . the consumers in each room choose their preferred set point , and then an operator of the local appliance pool will determine the comfort band around the set point . we use a thermal dynamic model to investigate the duty cycle of thermostatic appliances . three theorems are proposed in this paper . the first two theorems evaluate the performance of the pdlc in both transient and steady state operation . the first theorem proves that the average room temperature would converge to the average room set point with fixed number of packets applied in each discrete interval . the second theorem proves that the pdlc solution guarantees to control the temperature of all the rooms within their individual comfort bands . the third theorem proposes an allocation method to link the results in theorem 1 and assumptions in theorem 2 such that the overall pdlc solution works . the direct result of the theorems is that we can reduce the consumption oscillation that occurs when no control is applied . simulation is provided to verify theoretical results .
|
the problem of synchronization of dynamical systems witnessed a surge of interest in the last few years , primarily for finite dimensional systems .adaptive and robust control techniques were considered primarily for system with linear dynamics . a special case of nonlinear systems , the lagrangian systems which describe mobile robots and spacecraft , also considered aspects of synchronization control . for distributed parameter systems ( dps ) ,fewer results can be found . in , a system of coupled diffusion - advection pdeswas considered and conditions were provided for their synchronization . in a similar fashion considered coupled reaction diffusion systems of the fitzhugh - nagumo type and classify their stability and synchronization . in the same vein , examined coupled hyperbolic pdes and through boundary control proposed a synchronization scheme . somewhat different spin but with essentially a similar framework of coupled pdes was considered in , where an array of linearly coupled neural networks with reaction - diffusion terms and delays were considered . however , designing a synchronizing control law for uncoupled pde systems has not appeared till recently , where a special class of pdes , namely those with a riesz - spectral state operator , were considered .an unresolved problem is that of a network of uncoupled pde systems interacting via an appropriate communication topology .further , the choice and optimization of the synchronization gains has not been addressed .such an unsolved problem is being considered here .the objective of this note is to extend the use of the edge - dependent scheme to a class of dps .the proposed controllers , parameterized by the edge - dependent gains which are associated with the elements of the laplacian matrix of the graph topology , are examined in the context of optimization and adaptation .one component of the proposed linear controllers is responsible for the control objective , assumed here to be regulation .the other component , which is used for enforcing synchronization , includes the weighted pairwise state differences . when penalizing the disagreement of the networked states , one chooses the weights in proportion to their disagreement .this can be done when viewing all the networked systems collectively by optimally choosing all the weights , or by adjusting these gains adaptively .the contribution of this work is twofold : * it proposes the optimization of the synchronization gains , by considering the aggregate closed - loop systems and minimizes an appropriate measure of synchronization .additionally , it casts the control and synchronization design into an optimal control problem for the aggregate systems with an lqr cost functional . 
*it provides a lyapunov - based adaptation of the synchronization gains as a means of improving the synchronization amongst a class of networked distributed parameter systems described by infinite dimensional systems .the outline of the manuscript is as follows .the class of systems under consideration is presented in section [ sec2 ] .the synchronization and control design objectives are also presented in section [ sec2 ] .the main results on the choice of adaptive and constant edge - dependent synchronization gains , including well - posedness and convergence of the resulting closed - loop systems are given in section [ sec3 ] .numerical studies for both constant and adaptive gains are presented in section [ sec4 ] with conclusions following in section [ sec5 ] .we consider the following class of infinite dimensional systems with identical dynamics but with different initial conditions on the state space for .the state space is a hilbert space , . to allow for a wider class of state and possibly input and output operators , we formulate the problem in a space setting associated with a gelfand triple .let be a reflexive banach space that is densely and continuously embedded in with with the embeddings dense and continuous where denotes the continuous dual of , .the input space is a finite dimensional euclidean space of controls . in view of the above, we have that the state operator and the input operator .the _ synchronization objective _ is to choose the control signals , , so that all pairwise differences asymptotically converge ( in norm ) to zero an alternative and weaker convergence may consider _ weak synchronization _ via an appropriate measure of synchronization is the _ deviation from the mean _ which can also be viewed as the output - to - be - controlled , and which measures the disagreement of state to the average state of all agents .it is easlily observed that asymptotic norm convergence of each , to zero implies asymptotic norm convergence of all pairwise differences to zero and vice - versa .when examining the well - posedness of the systems , one must consider them collectively .this motivates the definition of the state space .the spaces and are similarly defined via and with .similarly , define the space .an undirected graph is assumed to describe the communication topology for the networked pde systems .the nodes represent the agents ( pde systems ) and the edges represent the communication links between the networked systems .the set of systems ( neighbors ) that the system is communicating with is denoted by , .the parameter space is defined as the space of ( laplacian ) matrices with the property that , , i.e. we have the space is a hilbert space with inner product with . in view of the above, the deviation from the mean can be written in terms of the aggregate state vector and the aggregate deviation from the mean as where ^{t} ] , , where denotes the -dimensional identity matrix understood in the sense of each entry being the identity operator on .similarly , denotes the -dimensional column vector of s , similarly understood in the sense of being the matrix whose entries are the identity operator on .the matrix operator corresponds to the graph laplacian matrix operator with all - to - all connectivity with . in view of this , the synchronization objective in can equivalently be stated as , and together with the control objective , assumed here to be state regulation , is combined to give rise to the design objective of the networked systems . 
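Before stating the design objectives, the deviation-from-the-mean measure just introduced can be written down concretely for discretized states. In the sketch below the aggregate deviation is z = (L_c ⊗ I) x with L_c = I_N - (1/N) 1 1^T, i.e. the (normalized) graph Laplacian of the all-to-all topology; the spatial discretization and the scaling convention are assumptions made only for illustration.

```python
# Deviation from the mean as the synchronization measure, in discretized form.
import numpy as np

N, n_grid = 5, 64                        # 5 agents, 64 spatial grid points (assumed)
rng = np.random.default_rng(1)
X = rng.standard_normal((N, n_grid))     # row i = state of agent i on the grid

L_c = np.eye(N) - np.ones((N, N)) / N    # all-to-all (normalized) Laplacian
Z = L_c @ X                              # each row: x_i minus the average state

# aggregate synchronization measure (discretized L2 norm of the deviations)
sync_measure = np.sqrt(np.sum(Z**2) / n_grid)
print(sync_measure)
```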
* design objectives : * design control signals for the networked systems such that please notice that regulation of to zero ( in norm ) immediately implies synchronization , but the converse can not be guaranteed . careful examination of sheds light to this case , since the matrix operator , which corresponds to the graph laplacian with all - to - all connectivity , has a zero eigenvalue .the systems in are considered with each state available . a leaderless configuration is assumed and thus each agent will only access the states of its neighboring agents as dictated by the communication topology . a standing assumption for the systems in is now presented . [ assum1 ]consider the networked systems in .assume the following 1 .the state of each system is available to the system and also to all the other networked systems that is linked to as dictated by the communication topology , assumed here to be described by an undirected connected graph .2 . the operator generates a semigroup on and for any , the systems are well - posed for any .the pair is approximately controllable be exponentially stabilizable .when the operator generates an exponentially stable semigroup , then one only requires approximate controllability .] , i.e. there exists a feedback gain operator such that the operator , generates an exponentially stable semigroup , with the property that the operator equation is a simplified version of the operator lyapunov function . for simplicity , denote the differences of and by , , .the controllers with _ constant edge - dependent synchronization gains _ are given by whereas with _ adaptive edge - dependent synchronization gains _ are given by the control signals consist of the local controller used to achieve the control objective ( regulation ) and a networked component required for enforcing synchronization .the feedback operator is chosen so that generates an exponentially stable semigroup on and the synchronization gain is chosen so that certain synchronization conditions are satisfied . both , will be considered below and different methods for choosing the edge - dependent gains and will be described . a way to enhance the synchronization of the networked systems , is to employ adaptive strategies to tune the strengths of the network nodes interconnections as was similarly addressed for finite dimensional systems . in the case of adaptive synchronization gains ,the closed - loop systems are given by for .to derive the adaptive laws for the edge - dependent gains , one considers the following lyapunov - like functionals using , , its time derivative is given by while the choice results in , one may consider where are the adaptive gains .this results in , . summing from to for each , one can then show that as and for each , one also has as . to examine the well - posedness and regularity of the closed loop systems ,the state equations are written in aggregate form ,\ ] ] with . to avoid over - parametrization, we express the adaptive edge - dependent gains in terms of the elements of the time - varying graph laplacian matrix and thus ,\ ] ] with . with this representationone can write the above compactly as where , , , . following the approach for in , the adaptation of the interconnection strengths ( elements of ) is given in weak form for .for each , define the operator by with . foreach define its banach space adjoint by in view of , the adaptation is re - written as with . 
using , , the aggregate dynamics is given in weak form or with \\ \noalign{\medskip } \mathcal{m}^{*}(x(t))[\ , \bullet \ , ] & -\sigma \mathbf{i}_{n } \end{array}\right ] \mathcal{x}(t ) \\ \noalign{\medskip } \mathcal{x}(0 ) \in d(\mathcal{a } ) \times \theta . \end{array}\ ] ] the compact form facilitates the well - posedness of , as it makes use of established results on adaptive control of abstract evolution equations ( equations ( 2.40 ) , ( 2.41 ) of ) . [ lemma1 ] consider the systems governed by and assume that the pairs satisfy the assumption of approximately controllability with valid and that the state of each system in is available to each of its communicating neighbors .then the proposed synchronization controllers in result in a closed loop system and an adaptation law for the edge - dependent gains that culminate in the well - posed abstract system with a unique local solution .the expression is essentially in the form presented in .the skew - adjoint structure of the matrix operator , which reflects the terms that cancel out due to the adaptation , essentially facilitate the establishment of well - posedness .the -linearity of the term along with the fact that , ( thereby giving and ) yield . since the assumption on controllability gives an exponentially stable semigroup on , then one has that generates an exponentially stable semigroup on .this allows one to use the results in to establish well - posedness .in particular , one defines endowed with the inner product .additionally , let the space endowed with the norm .then we have that is a reflexive banach space with . for ,the linear operator in equation ( 2.40 ) of is now defined by and the operator by , where , , .these then fit the conditions in theorem 2.4 in .in fact , one can extend the local solutions for all and to obtain , with the control signals , .please note that is used to established well - posedness , but , and are used for implementation . while avoids over parametrization , it renders the implementation of the synchronization controllers complex . to demonstrate this , consider scalar systemswhose connectivity is described by the undirected graph in figure [ fig0 ] .the aggregate closed loop systems will need ten unknown edge - dependent gains , , , , , , , , , .when the laplacian is used , the fifteen unknown entries of the laplacian matrix are , , , , , , , , , , , , , , .of course when one enforces , then the number of unknown reduces to eleven .nonetheless , is used for analysis and is used for implementation .the convergence , for both state and adaptive gains , is established in the next lemma .[ lemma2 ] for the solution to the initial value problem , the function given by is nonincreasing , , , with and consequently . consider integrating both sides from to we arrive at application of gronwall s lemma establishes the convergence of to zero .due to the cancellation terms in the adaptation of the interconnection strengths , nothing specific was imposed on the synchronization gain operator other than . 
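Before turning to the constant-gain design, the edge-gain adaptation described above can be sketched numerically. The exact adaptation law is not reproduced here: the sketch below grows each edge gain in proportion to the squared (discretized L2) mismatch of the two states it connects, with an optional leakage term suggested by the -σ block of the aggregate operator; the function names, the inner-product normalization and all coefficients are illustrative assumptions.

```python
# Sketch of an adaptive edge-dependent gain update and the networked control term.
import numpy as np

def update_edge_gains(K, X, edges, gamma=1.0, sigma=0.0, dt=1e-2, dx=1.0):
    """K: (N,N) symmetric gain matrix; X: (N, n_grid) agent states; edges: (i, j) pairs."""
    for (i, j) in edges:
        mismatch = np.sum((X[i] - X[j])**2) * dx       # discretized ||x_i - x_j||^2
        K[i, j] += dt * (gamma * mismatch - sigma * K[i, j])
        K[j, i] = K[i, j]
    return K

def synchronizing_input(X, K, neighbors, i):
    """Networked component of u_i: weighted pairwise differences with the neighbors of i."""
    return sum(K[i, j] * (X[j] - X[i]) for j in neighbors[i])
```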
a simple way to choosethis gain is by setting it equal to the feedback gain and therefore one arrives at the aggregate state equations for , where .similar to the adaptive case , the closed - loop systems with constant edge - dependent synchronization gains are given , via , by for , or in terms of the aggregate states for simplicity , one chooses the synchronization operator gain to be identical to the feedback operator gain and thus the above closed - loop system is written as the well - posedness of can easily be established .since the operator generates a semigroup on , one can easily argue that generates a semigroup on .since , then is a positive definite matrix .consequently the operator generates an exponentially stable semigroup on .since it was assumed that , the system admits a unique solution .furthermore , one has that asymptotically converges to zero .a possible way to obtain the optimal values of the entries of the laplacian matrix is to minimize an associated energy norm .the design criteria are similar to those taken for the optimal damping distribution for elastic systems governed by second order pdes , . in this case oneseeks to find such that the associated energy of the aggregate system , satisfy , .the above is related to the stability of the closed - loop aggregate system and for that , one needs to require that the spectrum determined growth condition is satisfied .this condition essentially states that a system has this property of the supremum of the real part of eigenvalues of the associated generator equals the infimum of satisfying the above energy inequality .this optimization takes the form of making the system `` more '' stable .related to this , an alternative criterion that is easier to implement numerically , aims at minimizing the total energy of the aggregate system over a long time period over the set of admissible ( laplacian ) matrices .this criterion is realized through the solution to a -parameterized operator lyapunov equation with constrained in , i.e. any optimal value of must satisfy the conditions for graph laplacian described by .the optimal value is then given by where is the solution to the -parameterized operator lyapunov equation however , since one would like to enhance synchronization , then the cost is changed to in view of this , the proposed optimization design is the optimization above does not account for the cost of the control law .if the structure of the control law is assumed with and chosen such that generates an exponentially stable semigroup on , then one may consider the effects of the control cost when searching for the optimal value of the constant graph laplacian matrix . in this case ,the cost functional in is now modified to please note that the term is given explicitly by where are given by . 
unlike ,, the optimization for in this case must be performed numerically and the optimal value is to consider an optimal control for the aggregate system , without assuming a specific structure of the controller gains and , but with a prescribed constant graph laplacian matrix , one may be able to pose the synchronization problem as an optimal ( linear quadratic ) control problem .one rewrites without the assumption that the synchronization operator gain is equal to the regulation operator gain .thus when written is aggregate form produces for , where the augmented input operator and augmented control signal are given by , \hspace{2em } \widetilde{u}(t ) = \left[\begin{array}{c } u_{1}(t ) \\\noalign{\smallskip } u_{2}(t ) \end{array}\right].\ ] ] one can then formulate an optimal control policy for the aggregate system in as follows : find such that the cost functional is minimized .the solution to this lqr problem is given by in the event that any form of optimization for the edge - dependent gains can not be performed , then a static optimization can be used ; in this case , the edge - dependent gains can be chosen in proportion to the pairwise state mismatches . in the case of full connectivity , thereby simplifying the laplacian to and when the edge - dependent gains are all identical , then one may be able to obtain an expression for the dynamics of the pairwise differences as was pointed out in , when the system operator is riesz - spectral and certain conditions on the input operator are satisfied , one can obtain explicit bounds on the exponential convergence of ( in an appropriate norm ) to zero .additionally for this case one has that the convergence of is faster than that of and is a function of .the following 1d diffusion pde was considered the control distribution function was taken to be the approximation of the pulse function centered at the middle of the spatial domain ] .this cost is depicted in figure [ fig1a ] and its optimal value is attained when . to examine the effects of the synchronization gain, the norm of the aggregate deviation from the mean was evaluated for and .the evolution of is depicted in figure [ fig1b ] .as expected , the higher the value of , the faster the convergence . however , only the value results in acceptable levels of the deviation from the mean and low values of the control cost .the adaptive controller was applied to the pde with , , . the same initial conditions as in the constant gain case were used .the adaptations in were implemented with an adaptive gain of and , i.e. 
,\ ] ] figure [ fig2a ] compares the adaptive to the constant edge - dependent gains case .the initial guesses of the adaptive edge - dependent gains were all taken to be .the same values of were used for the constant case .the norm of the aggregate deviation from the mean exhibits an improved convergence to zero when adaptation of the edge - dependent gains is implemented .the spatial distribution of the mean state ( ) is depicted at the final time for both the adaptive and constant gains case in figure [ fig2b ] .it is observed that when adaptation is implemented , the mean state converges ( pointwise ) to zero faster than the constant case .in this note , a scheme for the adaptation of the synchronization gains used in the synchronization control of a class of networked dps was proposed .the same framework allowed for the optimization of constant edge - dependent gains which was formulated as an optimal ( linear quadratic ) control problem of the associated aggregate system of the networked dps .the proposed scheme required knowledge of the full state of the networked dps . such a case represents a baseline for the synchronization of networked dps .the subsequent extension to output feedback , whereby each networked dps can only transmit and receive partial state information provided by sensor measurement , will utilize the same abstract framework presented here . c.wu and l. chua , `` synchronization in an array of linearly coupled dynamical systems , '' _ ieee trans . on circuits and systemsi : fundamental theory and applications _ , vol .42 , no . 8 , pp . 430447 , 1995 .h. zhang , f. lewis , and a. das , `` optimal design for synchronization of cooperative systems : state feedback , observer and output feedback , '' _ ieee trans . on automatic control _ , vol .56 , no . 8 , pp . 19481952 , 2011 .t. yang , s. roy , y. wan , and a. saberi , `` constructing consensus controllers for networks with identical general linear agents , '' _ internat . j. robust nonlinear control _ ,21 , no .11 , pp . 12371256 , 2011 .t. yang , a. a. stoorvogel , and a. saberi , `` consensus for multi - agent systems -synchronization and regulation for complex networks , '' in _ proc . of the american control conference _ , june 29 - july 1 2011 , pp . 53125317 .g. chen and f. lewis , `` distributed adaptive tracking control for synchronization of unknown networked lagrangian systems , '' _ ieee trans .on systems , man , and cybernetics , part b : cybernetics _ , vol .41 , no . 3 , pp .805816 , 2011 .s. chung , u. ahsun , and j. slotine , `` application of synchronization to formation flying spacecraft : lagrangian approach , '' _ aiaa journal of guidance , control , and dynamics _32(2 ) , pp . 512526 , 2009 .k. wang , z. teng , and h. jiang , `` adaptive synchronization in an array of linearly coupled neural networks with reaction-diffusion terms and time delays , '' _ communications in nonlinear science and numerical simulation _ , vol .17 , no .3866 3875 , 2012 .b. ambrosio and m. aziz - alaoui , `` synchronization and control of coupled reaction diffusion systems of the fitzhugh nagumo type , '' _ computers & mathematics with applications _ , vol .64 , no . 5 , pp .934 943 , 2012 .y. liu , y. jia , j. du , and s. yuan , `` dynamic output feedback control for consensus of multi - agent systems : an approach , '' in _ proc . of the american control conference _ ,june 10 - 12 2009 , pp .44704475 . w.yu , p. de lellis , g. chen , m. di bernardo , and j. 
kurths , `` distributed adaptive control of synchronization in complex networks , '' _ ieee transactions on automatic control _ , vol .57 , no . 8 , pp . 21532158 , 2012 .p. de lellis , m. di bernardo , f. garofalo , and m. porfiri , `` evolution of complex networks via edge snapping , '' _ ieee trans . on circuits and systemsi : regular papers _ , vol .57(8 ) , pp .21322143 , 2010 .
|
this work is concerned with the design and effects of the synchronization gains on the synchronization problem for a class of networked distributed parameter systems . the networked systems , assumed to be described by the same evolution equation in a hilbert space , differ in their initial conditions . the proposed synchronization controllers aim at achieving both the control objective and the synchronization objective . to enhance the synchronization , as measured by the norm of the pairwise state difference of the networked systems , an adaptation of the gains is proposed . an alternative design arrives at constant gains that are optimized with respect to an appropriate measure of synchronization . a subsequent formulation casts the control and synchronization design problem into an optimal control problem for the aggregate systems . an extensive numerical study examines the various aspects of the optimization and adaptation of the gains on the control and synchronization of networked 1d parabolic differential equations . distributed parameter systems ; distributed interacting controllers ; networked systems ; adaptive synchronization = 1ex = 1ex = 1ex = 1ex
|
the mystery of rogue water waves started from folklores of mariners centuries ago .their existence was scientifically confirmed on new year s day 1995 at the draupner platform in the north sea . in oceanography ,rogue waves are defined as waves with height more than twice the significant wave height ( swh ) .swh is the average of the top third wave heights in a wave record .a rogue wave is often a single tall wave that is localized in both space and time , and appears without warning in mid - ocean .the key in theoretical understanding of rogue waves is : * what is the mechanism of a rogue wave ?once the mechanism of a rogue wave is understood , it will be easier to understand the causes in different oceanic environments , that can lead to the mechanism to be in action .the consequences of rogue waves have been suspected for many ship sinking incidents . due to their importance in application and theory ,rogue waves have been extensively studied , for a sample of references , see .can homoclinic orbits or peregrine wave solutions be responsible for rogue water waves ?this is the interesting question asked by many researchers .peregrine wave solutions look like " rogue water waves .they share the spatial and temporal locality of rogue waves . in infinite spatial and temporal ( both positive and negative ) limits , they approach the uniform stokes waves , and their main humps also have tall enough heights to mimic rogue waves .one of the simplest deep water weakly nonlinear amplitude model equations is the integrable 1d cubic focusing nonlinear schrdinger equation a simple peregrine wave solution to ( [ nls ] ) is e^{-i2 t } .\label{rs}\ ] ] the peregrine wave solution can be obtained by taking the infinite spatial period limit to the spatially periodic and temporally homoclinic solutions to be discussed below . from now on ,we will focus our attention on the peregrine wave s approximations given by large spatial period homoclinic solutions .therefore , we pose the spatial periodic boundary condition to ( [ nls ] ) .the nls ( [ nls ] ) with the periodic boundary condition ( [ bc ] ) defines a dynamical system in the infinite dimensional phase space } ] .specifically , the norm of is given by } } = \int_0^l ( |q|^2 + |q_x|^2 ) dx .\ ] ] one way to visualize dynamics in the infinite dimensional phase space } ] , and the neighborhood is where we will focus our attention on .linearize the nls ( [ nls ] ) at ( [ sw ] ) in the form one gets the linearized equation set where , and are complex parameters , and is a real parameter given by to satisfy the boundary condition ( [ bc ] ) .one gets -i { \omega}\right ) a + 2 a^2 \bar{b } = 0 , \\ & & 2 a^2 a + \left ( [ 2a^2-k^2]+i { \omega}\right ) \bar{b } = 0 , \end{aligned}\ ] ] and the relation ^2 } .\ ] ] when there is the so - called modulational instability .for any , when , the instability appears .that is , no matter how small is , as long as is large enough , the instability appears . 
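The two facts just used — the three-fold peak of the Peregrine wave over its background, and the appearance of unstable side-bands once the spatial period is large enough — can be checked numerically. The sketch below uses the standard normalization i q_t + q_xx + 2|q|^2 q = 0 and the textbook instability threshold k^2 < 4 a^2; both may differ from the elided formulas in the text by sign/phase conventions, so treat them as assumptions.

```python
# Peregrine peak amplitude and counting of modulationally unstable side-bands.
import numpy as np

def peregrine(x, t):
    """Standard Peregrine solution on a unit background (normalization assumed)."""
    return (1.0 - 4.0 * (1.0 + 4.0j * t) / (1.0 + 4.0 * x**2 + 16.0 * t**2)) \
           * np.exp(2.0j * t)

print(abs(peregrine(0.0, 0.0)))               # 3.0: the peak is 3x the background

def unstable_modes(a, L):
    """Indices n of Fourier modes k_n = 2*pi*n/L that are linearly unstable."""
    n = np.arange(1, 200)
    k = 2.0 * np.pi * n / L
    return n[k**2 < 4.0 * a**2]

print(unstable_modes(a=1.0, L=2.0 * np.pi))   # [1]: one unstable mode for this period
```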
for fixed and ,the unstable modes are given by those s satisfying ( [ mi ] ) .let be the number of such unstable modes .then the unstable subspace of the periodic orbit ( [ sw ] ) has dimension , the stable subspace of the periodic orbit ( [ sw ] ) has dimension , and the center subspace of the periodic orbit ( [ sw ] ) has codimension .the product of the unstable subspace and the center subspace is the codimension center - unstable subspace , and the product of the stable subspace and the center subspace is the codimension center - stable subspace .these subspaces can be exponentiated into invariant submanifolds under the nls ( [ nls ] ) dynamics via darboux transformations .we have under the nls ( [ nls ] ) dynamics , the periodic orbit ( [ sw ] ) on the invariant plane has a codimension center manifold , a codimension center - unstable manifold , and a codimension center - stable manifold .moreover , and .explicit formulae for certain homoclinic orbits inside can be found in the appendix .the neighborhood of the periodic orbit ( stokes wave ( [ sw ] ) ) is divided by and into different regions .dynamics in the neighborhood of the periodic orbit follows the following inclination lemma .[ inclination lemma ] all orbits starting from initial points in the neighborhood of the periodic orbit approach the center - unstable manifold in forward time .see figure [ il1 ] for an illustration .notice that the center manifold is a measure zero subset of the neighborhood of the periodic orbit , and it is also a measure zero subset of .orbits starting from points inside of course stay inside .orbits starting from points inside but not in have the same homoclinic feature as those explicitly calculated in the appendix . in principle , all such orbits in can be constructed via darboux transformations as shown in the appendix .one can view all such orbits as rooted to the center manifold .in fact , each point in the center manifold is a fenichel fiber base point , and the fenichel fibers capture the global features of these homoclinic orbits . since the center manifold lies inside the neighborhood of the periodic orbit , those homoclinic orbits rooted to the invariant plane are good approximations of all such homoclinic orbits in which in general may have small amplitude oscillating tails in space and time .these homoclinic orbits in are generic orbits in in the sense that is a measure zero subset of . in view of the inclination lemma , generic orbits starting from initial points in the neighborhood of the periodic orbit approach those homoclinic orbits in which can be approximated by those homoclinic orbits rooted to the invariant plane .the infinite spatial period limits of the homoclinic orbits rooted to the invariant plane are the peregrine waves . in conclusion ,generic orbits starting from initial points in the neighborhood of the periodic orbit ( stokes wave ) have the homoclinic feature and peregrine wave feature ( when the spatial period approaches infinity ) .therefore , such homoclinic orbits and peregrine waves should be the most observable ( common ) waves in the deep ocean according to the nonlinear schrdinger model. 
they should not be the rarely observed rogue waves .when the nonlinear schrdinger equation ( [ nls ] ) is under perturbations ( for example by keeping higher order terms in the nls model of deep water ( [ pnls ] ) ) , the center - unstable manifold , center - stable manifold and center manifold persist , but and do not coincide anymore .orbits inside have a near homoclinic nature .the above conclusion that homoclinic orbits and peregrine waves should be the most observable common waves rather than rogue waves , still holds .based upon the above rigorous mathematical analysis on the infinite dimensional phase space where the nonlinear schrdinger equation ( [ nls ] ) defines a dynamical system , we conclude that peregrine waves and homoclinic orbits are the waves most commonly observable in deep ocean rather than rogue water waves .next we discuss two other possibilities for the mechanism of rogue waters .the solution operator of high reynolds number navier - stokes equations has rough dependence on initial data .temporal amplification of certain perturbations to the initial data can potentially reach where is a constant and is the reynolds number .when the reynolds number is large , such amplification can reach substantial amount in very short time .this feature of the solution operator may explain the ( no apparent reason ) sudden amplification of one wave among many into a rogue wave in the deep ocean .that particular wave may receive just the right perturbation which amplifies superfast like the above estimate , and very quickly develops into a rogue wave . in this sense , the choice of the particular wave is random , the right perturbation is random , and the temporal and spatial locations of the event are also random .all these factors may manifest into a sudden appearance of a rogue wave .high reynolds number navier - stokes equations are good models of water waves since real fluids ( water or air ) always have viscosity ( no matter how slight it may be ) . on the other hand , for simplicity ,most mathematical models of water waves are derived from euler equations , and the solution operator of the euler equations is nowhere differentiable in its initial data ( formally one can set to infinity in the above estimate ( [ est ] ) ) .a great open problem is whether or not water wave equations have finite time blowup solutions .a hint of finite time blowup solutions comes from simple nonlinear wave equations , for example , the one dimensional nonlinear schrdinger equation where is a complex - valued function of ( ) . for the initial condition of the form where is a real - valued function , when the initial energy is non - positive and , the solution blows up in finite time is , there is a finite time , such that such a finite time blowup solution resembles very much a rogue wave in terms of spatially and temporally local nature .one should only take such a finite time blowup solution as a hint rather than a clear indication for a possible finite time blowup solution to the water wave equations .there are a lot of simple models of water wave equations , for example , the davey - stewartson equations . for the davey - stewartson equations with coefficients in the water wave regime , a finite time blowup solution has not been found .for the davey - stewartson equations with coefficients outside the water wave regime , finite time blowup solutions have been found . 
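A split-step Fourier simulation makes the "most observable waves" argument above concrete: a Stokes wave perturbed in its single unstable mode grows into a localized peak and then recedes back toward the plane wave, i.e. it executes a near-homoclinic excursion rather than blowing up. The normalization i q_t + q_xx + 2|q|^2 q = 0, the grid sizes and the perturbation amplitude below are illustrative assumptions.

```python
# Split-step Fourier sketch: perturbed Stokes wave with one unstable mode.
import numpy as np

L, n, a = 2.0 * np.pi * np.sqrt(2.0), 256, 0.5   # chosen so that only k_1 is unstable
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

q = a * (1.0 + 1e-3 * np.cos(2.0 * np.pi * x / L))   # Stokes wave + small perturbation
dt, steps = 1e-3, 40000
peaks = []
for s in range(steps):
    q = np.fft.ifft(np.exp(-1j * k**2 * dt / 2.0) * np.fft.fft(q))  # dispersion half-step
    q = q * np.exp(2j * np.abs(q)**2 * dt)                          # nonlinear step
    q = np.fft.ifft(np.exp(-1j * k**2 * dt / 2.0) * np.fft.fft(q))  # dispersion half-step
    if s % 400 == 0:
        peaks.append(np.max(np.abs(q)))

# the maximum amplitude rises well above the background a and then recedes
print(min(peaks), max(peaks))
```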
in the deep water limit, the davey - stewartson equations reduce to the following equation where is complex - valued and this equation has two conserved quantities dxdy .\ ] ] since the two conserved quantities do not bound norm , this equation may have finite time blowup solutions. when the operator is replaced by there are indeed finite time blowup solutions .linearize equation ( [ hnls ] ) at where is the amplitude and is the phase , in the form one gets the linearized equation set where , and are complex parameters , and ( ) are real parameters , one gets -i { \omega}\right ) a + 2 a^2 \bar{b } = 0 , \\ & & 2 a^2 a + \left ( [ ( k_2 ^ 2-k_1 ^ 2)+2a^2]+i { \omega}\right ) \bar{b } = 0 , \end{aligned}\ ] ] and the relation (k_1 ^ 2-k_2 ^ 2 ) } .\ ] ] when there is a modulational instability .in one spatial dimension , equation ( [ hnls ] ) reduces to the integrable cubic nonlinear schrdinger equation ( [ nls ] ) . by keeping higher order terms ,the one spatial dimension deep water wave model can be written as where represents the higher order terms which may involve a variety of terms like higher order derivatives and higher order nonlinearities . with the higher order terms in , equation ( [ pnls ] ) may have finite time blowup solutions . invoking possible finite time blowup solutions to models of water wave equationsis paradoxical in the search for finite time blowup solutions to the full water wave equations .most of these models are derived under the assumption of weak nonlinearity , while finite time blowup is a strongly nonlinear phenomenon .let . when the stokes wave ( [ sw ] ) has one linearly unstable mode , and when the stokes wave ( [ sw ] ) has two linearly unstable modes , etc. the homoclinic orbits asymptotic to the stokes wave ( [ sw ] ) are the nonlinear amplifications of the linearly unstable modes . when , the homoclinic orbit is given by ^{-1 } \cdot \bigg [ \cos 2{\vartheta}_0 - i \sin 2{\vartheta}_0 \tanh \tau \nonumber \\ & & - \sin { \vartheta}_0 \ \mbox{sech } \tau \cos y \bigg ] \ , \label{sne}\end{aligned}\ ] ] where where , , and are real parameters .as , thus is asymptotic to up to phase shifts as .we say is a homoclinic orbit asymptotic to the periodic orbit given by . for a fixed amplitude of , the phase of and the bcklund parameters and parametrize a -dimensional submanifold with a figure eight structure . for an illustration ,see figure [ snef ] .if one restricts the bcklund parameter by , or , one gets to be even in , ^{-1 } \nonumber \\ & & \cdot \bigg [ \cos 2{\vartheta}_0 - i \sin 2{\vartheta}_0 \tanh \tau \mp \sin { \vartheta}_0 \ \mbox{sech } \tau \cos x \bigg ] \ , \label{se}\end{aligned}\ ] ] where the upper sign corresponds to .then for a fixed amplitude of , the phase of and the bcklund parameter parametrize a -dimensional submanifold with a figure eight structure . for an illustration ,see figure [ sef ] . 
when , the homoclinic orbit is given by where is given in ( [ sne ] ) , \\ & & \cdot ( 1 + \sin { \hat{\vartheta}}_0 \ \mbox{sech } { \hat{\tau}}\cos { \hat{y } } ) \\ & - & \frac { 1}{2 } \sin 2{\vartheta}_0 \sin 2{\hat{\vartheta}}_0 \ \mbox{sech } \tau \ \mbox{sech } { \hat{\tau}}(1+\sin { \vartheta}_0 \ \mbox{sech } \tau \cosy ) \sin y \sin { \hat{y}}\\ & + & ( \sin { \vartheta}_0)^2 \bigg [ 1 + 2 \sin { \vartheta}_0 \ \mbox{sech } \tau \cos y + [ ( \cos y)^2 - ( \cos { \vartheta}_0)^2](\mbox{sech } \tau)^2 \bigg ] \\ & & \cdot ( 1 + \sin { \hat{\vartheta}}_0 \ \mbox{sech } { \hat{\tau}}\cos { \hat{y } } ) \\ & - & 2\sin { \hat{\vartheta}}_0 \sin { \vartheta}_0 \bigg [ \cos { \hat{\vartheta}}_0 \cos { \vartheta}_0 \tanh { \hat{\tau}}\tanh \tau + ( \sin { \vartheta}_0 + \ \mbox{sech } \tau \cos y)\\ & & \cdot ( \sin { \hat{\vartheta}}_0 + \ \mbox{sech } { \hat{\tau}}\cos { \hat{y } } ) \bigg ] ( 1 + \sin { \vartheta}_0 \ \mbox{sech } \tau \cos y ) \ , \end{aligned}\ ] ] \\ & & \cdot ( \sin { \hat{\vartheta}}_0 + \ \mbox{sech } { \hat{\tau}}\cos { \hat{y}}+ i \cos { \hat{\vartheta}}_0 \tanh { \hat{\tau } } ) \\ & + & 2 ( \sin { \vartheta}_0)^2(-\cos { \vartheta}_0 \tanh \tau +i \sin { \vartheta}_0 + i\ \mbox{sech } \tau \cos y)^2 \\ & & \cdot ( \sin { \hat{\vartheta}}_0 + \ \mbox{sech } { \hat{\tau}}\cos { \hat{y}}- i \cos { \hat{\vartheta}}_0 \tanh { \hat{\tau } } ) \\ & + & 2 \sin { \vartheta}_0 ( \sin { \vartheta}_0 + \ \mbox{sech } \tau \cos y + i \cos { \vartheta}_0 \tanh \tau ) \\ & & \cdot \bigg [ 2 \sin { \hat{\vartheta}}_0 ( 1 + \sin { \vartheta}_0 \ \mbox{sech } \tau \cos y ) ( 1 + \sin { \hat{\vartheta}}_0 \ \mbox{sech } { \hat{\tau}}\cos { \hat{y}})\\ & & - \sin 2{\vartheta}_0 \cos { \hat{\vartheta}}_0 \ \mbox{sech } \tau \ \mbox{sech } { \hat{\tau}}\sin y \sin { \hat{y}}\bigg ] \ , \end{aligned}\ ] ] where some of the notations are given in ( [ sne ] ) , and and , , and are real parameters .the asymptotic phase of is as follows , as , thus is asymptotic to up to phase shifts as . for a fixed amplitude of , the phase of and the bcklund parameters , , , and parametrize a -dimensional submanifold with a figure eight structure . for an illustration , see figure [ dnef ] . if one put restrictions on the bcklund parameters and , s.t . then is even in .thus for a fixed amplitude of , the phase of and the bcklund parameters and parametrize a -dimensional submanifold with a figure eight structure .for an illustration , see figure [ def ] .a. slunyaev , e. pelinovsky , a. sergeeva , a. chabchoub , n. hoffmann , m. onorato , n. akhmediev , super - rogue waves in simulations based on weakly nonlinear and fully nonlinear hydrodynamic equations , _ phys .e _ * 88 * ( 2013 ) , 012909 .
|
the mechanism of a rogue water wave is still unknown . one popular conjecture is that the peregrine wave solution of the nonlinear schrödinger equation ( nls ) provides a mechanism . a peregrine wave solution can be obtained by taking the infinite spatial period limit of the homoclinic solutions . in this article , from the perspective of the phase space structure of these homoclinic orbits in the infinite dimensional phase space where the nls defines a dynamical system , we examine the observability of these homoclinic orbits ( and their approximations ) . our conclusion is that these approximate homoclinic orbits are the most observable solutions , and that they should correspond to the most common deep ocean waves rather than the rare rogue waves . we also discuss other possibilities for the mechanism of a rogue wave : rough dependence on initial data or finite time blowup .
|
space - time ( st ) coding is a bandwidth - efficient transmission technique that can improve the reliability of data transmission in mimo wireless systems .orthogonal space - time block coding ( ostbc ) is one of the most attractive st coding approaches because the special structure of orthogonality guarantees a full diversity and a simple ( linear ) maximum - likelihood ( ml ) decoding .the first ostbc design was proposed by alamouti in for two transmit antennas and was then extended by tarokh _ et ._ in for any number of transmit antennas . a class of ostbc from complex design with the code rate of was also given by tarokh _ et_ in .later , systematic constructions of complex ostbc of rates for or transmit antennas for any positive integer were proposed in .however , the ostbc has a low code rate not more than for more than two transmit antennas . to enhance the transmission rate of the stbc ,various stbc design approaches were proposed such as quasi - ostbc and algebraic number theory based stbc .the quasi - ostbc increases the code rate by relaxing the orthogonality condition on the code matrix , which was originally proposed in , , and , independently .due to the group orthogonality , the ml decoding is performed pair - wise or group - wise with an increased complexity compared to the single - symbol decoding . in , quasi - ostbcwas studied in the sense of minimum decoding complexity , i.e. , a real pair - wise symbols decoding . in ,the pair - wise decoding was generalized to a general group - wise decoding .the decoding for these codes is the ml decoding and their rates are basically limited by that of ostbc .the algebraic number theory based stbc are designed mainly based on the ml decoding that may have high complexity and even though some near - ml decoder , such as sphere decoder can be used , the expected decoding complexity is still dominated by polynomial terms of a number of symbols which are jointly detected . to reduce the large decoding complexity of the high rate stbc aforementioned , several fast - decodable stbcwere recently proposed .the stbc proposed in achieves a high rate and a reduced decoding complexity at the cost of loss of full diversity .the fast - decodable stbc in can obtain full rate , full diversity and the reduced ml decoding complexity , but the code design is limited to and mimo transmissions only .another new perspective of reducing the decoding complexity was recently considered in and to resort to conventional linear receivers such as zero - forcing ( zf ) receiver or minimum mean square error ( mmse ) receiver instead of the ml receiver to collect the full diversity .the outage and diversity of linear receivers in flat - fading mimo channels were studied in , but no explicit code design was given to achieve the full diversity when the linear receivers are used .based on the new stbc design criterion for mimo systems with linear receivers , toeplitz stbc and overlapped - alamouti codes were proposed and shown to achieve the full diversity with the linear receivers . recently , some other new designs of stbc with linear receivers were proposed . 
however , the code rate of stbc achieving full diversity with linear receivers is upper bounded by one .later , guo and xia proposed a partial interference cancellation ( pic ) group decoding scheme which can be viewed as an intermediate decoding approach between the ml receiver and the zf receiver by trading a simple single - symbol decoding complexity for a high code rate larger than one symbol per channel use .moreover , in an stbc design criterion was given to achieve full diversity when the pic group decoding is applied at the receiver .the proposed pic group decoding in was also connected with the successive interference cancellation ( sic ) strategy to aid the decoding process , referred to as pic - sic group decoding .a few code design examples were presented in , but a general design of stbc achieving full diversity with the pic group decoding remains an open problem . in this paper , we propose two designs of stbc which can achieve full diversity with the pic group decoding for any number of transmit antennas .the first proposed stbc have a structure of multiple diagonal layers and for each diagonal layer there are exactly coded symbols embedded , being equal to the number of transmit antennas , which are obtained from a cyclotomic lattice design .indeed , each diagonal layer of the coded symbols can be viewed as the conventional rate - one diagonal stbc .the code rate of the proposed stbc can be from one to symbols per channel use by adjusting the codeword length , i.e. , embedding different number of layers in the code matrix . with the pic group decoding the code rate of the first proposed full - diversity stbc can be only up to symbols per channel use , i.e. , for two layers . for more than two layers embedded in the codeword ,the code rate is increased at the cost of losing full diversity with the pic group decoding .however , with the pic - sic group decoding , the proposed stbc with arbitrary number of layers can obtain full diversity and the code rate can be up to .the second proposed stbc is designed with three layers of information symbols embedded in the codeword and the pic group decoding can be performed in three separate groups accordingly . without loss of decoding complexity compared to the first proposed stbc, the second proposed stbc can achieve full diversity and a code rate larger than .note that the code rate for the first proposed full - diversity stbc with pic group decoding can not be above . in the pic group decoding of the proposed stbc ,every neighboring columns of the equivalent channel matrix are clustered into one group .this paper is organized as follows . a system model of st transmission over mimo channels with the pic group decodingis introduced in section [ sec : system ] . in section[ sec : new ] , a design of high rate stbc with the pic group decoding is proposed , which contains multiple diagonal layers of coded symbols . for a particular code design with two diagonal layers , the full diversity with the pic group decoding is proved . for the code with pic - sic group decoding , the full diversity is shown for any number of diagonal layers .several full - diversity code design examples are given in section [ sec : example ] . in section [ sec : xu ] , another design of high rate stbc with the pic group decoding is proposed , which can achieve full diversity with three layers .simulation results are presented in section [ sec : sim ] .finally , in section [ sec : conclusion ] , we draw our conclusions. 
_ notations _ : column vectors ( matrices ) are denoted by boldface lower ( upper ) case letters .superscripts and stand for transpose and conjugate transpose , respectively . denotes the field of complex numbers . denotes the identity matrix , and denotes the matrix whose elements are all . is the vectorization of matrix by stacking the columns of on top each other .in this section , we first briefly describe the system model and then describe the pic group decoding proposed in .we consider a mimo transmission with transmit antennas and receive antennas over block fading channels .the received signal matrix is where is the codword matrix , transmitted over time slots , is a noise matrix with independent and identically distributed ( i.i.d . ) entries being circularly symmetric complex gaussian distributed , is the channel matrix whose entries are also i.i.d . with the distribution , denotes the average signal - to - noise ratio ( snr ) per receive antenna and is the normalization factor to ensure that the average energy of the coded symbols transmitting from all antennas during one symbol period is .the realization of is assumed to be known at the receiver , but not known at the transmitter .therefore , the signal power is allocated uniformly across the transmit antennas .let be the number of independent information symbols per codeword , selected from a complex constellation .the code rate of the stbc is defined as symbols per channel use . if , the stbc is said to have full rate , i.e. , symbols per channel use . in this paper, we consider that information symbols are coded by linear dispersion stbc as where is the linear stbc matrix . to decode the transmitted sequence at the receiver , we need to extract from .this can be done by as follows .by substituting ( [ eqn : lstbc ] ) into ( [ eqn : y ] ) , we get then , by taking vectorization of the matrix we have where , , ^t ] denotes the -th row of . for an ml receiver ,the estimate of that achieves the minimum of the squared frobenius norm is given by in the ml decoding , computations of squared frobenius norms for all possible codewords are needed and therefore result in prohibitively huge computational complexity when the length of the information symbols vector to be decoded is large . in the following , we give a metric to evaluate the computational complexity of the ml decoding , which is the same as the one shown in ( * ? ? ?* definition 2 ) . the decoding complexity is defined as the number of squared frobenius norms that should be computed in the decoding process . with the above definition, we have the following two remarks .the decoding complexity of the zf detection is , i.e. , times of the cardinality of the signal constellation .it is equivalent to the single - symbol decoding complexity .the decoding complexity of the ml detection is , i.e. 
, the complexity of the _ full _ exhaustive search of all information symbols drawn from the constellation .we next describe the pic group decoding studied in .define index set as where is the number of information symbols in .we then partition into groups : with where is the cardinality of the subset .we call a grouping scheme .for such a grouping scheme , we have define ^t,\,\,\,p=1 , \cdots , p.\\ \mathbf{g}_{p}&=&\left[\begin{array}{cccc } \mathbf{g}_{i_{p,1 } } & \mathbf{g}_{i_{p,2 } } & \cdots & \mathbf{g}_{i_{p , l_p } } \end{array } \right],\,\,\,p=1 , \cdots , p.\end{aligned}\ ] ] with these notations , ( [ eqn : y2 ] ) can be written as suppose we want to decode the symbols embedded in the group .the pic group decoding first implements linear interference cancellation with a suitable choice of matrix in order to completely eliminate the interferences from other groups , i.e. , , and . then , we have where the interference cancellation matrix can be chosen as follows , in case , \end{aligned}\ ] ]has full column rank .if does not have full column rank , then we need to pick a maximal linear independent vector group from and in this case a projection matrix can be found too .afterwards , the symbols in the group are decoded with the ml decoding algorithm as follows , the above pic group decoding is connected to some of the known decodings as in the following remarks .for one special case of , the grouping scheme is with . from ( [ eqn : qp ] ) , we have .then , the pic group decoding is equivalent to the ml decoding where all information symbols are jointly decoded . for the special case of ,the grouping scheme is i.e. , every single symbol is regarded as one group .then , the pic group decoding is equivalent to the zf decoding where every single symbol is separated from all the other symbols and then decoded .the pic group decoding with can be viewed as an intermediate decoding approach between the ml decoding and the zf decoding .alternatively , the ml decoding and the zf decoding can both be regarded as the special cases of the pic group decoding corresponding to and , respectively . for the pic group decoding ,the following two steps are needed : the group zero - forcing to cancel the interferences coming from all the other groups as shown in ( [ eqn : gzf ] ) and the group ml decoding to jointly decode the symbols in one group as shown in ( [ eqn : gml ] ) .therefore , the decoding complexity of the pic group decoding should reside in the above two steps .note that the interference cancellation process shown in ( [ eqn : gzf ] ) mainly involves with linear matrix computations , whose computational complexity is small compared to the ml decoding for an exhaustive search of all candidate symbols .therefore , to evaluate the decoding complexity of the pic group decoding , we mainly focus on the computational complexity of the ml decoding within the pic group decoding algorithm . according to _ definition 2 _ ,the ml decoding complexity in the pic group decoding algorithm is .it can be seen that the pic group decoding provides a flexible decoding complexity which can be from the zf decoding complexity to the ml decoding complexity . 
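as a concrete illustration of the two steps just described (group zero-forcing followed by a group-wise ml search), the following sketch applies pic group decoding to a generic equivalent channel. the dimensions, the qpsk constellation and the grouping are arbitrary choices made for the demonstration and are not tied to any particular code from the text.

```python
import itertools
import numpy as np

# Illustrative PIC group decoding on an equivalent channel y = G s + w:
# for each group, project out the columns of the other groups, then do an
# exhaustive ML search over that group's symbols only.

rng = np.random.default_rng(0)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK (demo choice)

n_rows, n_sym = 8, 4
groups = [[0, 1], [2, 3]]                       # index sets I_1, I_2 (demo choice)
G = (rng.standard_normal((n_rows, n_sym)) +
     1j * rng.standard_normal((n_rows, n_sym))) / np.sqrt(2)
s_true = rng.choice(const, n_sym)
w = 0.05 * (rng.standard_normal(n_rows) + 1j * rng.standard_normal(n_rows))
y = G @ s_true + w

s_hat = np.zeros(n_sym, dtype=complex)
for idx in groups:
    other = [i for i in range(n_sym) if i not in idx]
    Gc = G[:, other]
    # projector onto the orthogonal complement of the interfering groups
    Q = np.eye(n_rows) - Gc @ np.linalg.pinv(Gc)
    z, Gp = Q @ y, Q @ G[:, idx]
    # group-wise exhaustive ML search
    best = min(itertools.product(const, repeat=len(idx)),
               key=lambda c: np.linalg.norm(z - Gp @ np.array(c)) ** 2)
    s_hat[idx] = best

print("decoded correctly:", np.allclose(s_hat, s_true))
```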
in , an sic - aided pic group decoding algorithm , namely pic - sic group decoding was proposed .similar to the blast detection algorithm , the pic - sic group decoding is performed after removing the already - decoded symbol set from the received signals to reduce the interference .if each group has only one symbol , then the pic - sic group will be equivalent to the blast detection .the performance of a decoding algorithm for a wireless communication system is related to the diversity order .if the average probability of a detection error for communication over a fading channel usually behaves as : where is a constant and is called the _ diversity order _ of the system . for an mimo communication system ,the maximum diversity order is , i.e. , the product of the number of transmit antennas and the number of receiver antennas . in order to optimize the reception performance of the mimo system , a full diversity is usually pursued which can be achieved by a proper signal transmission scheme or data format ( e.g. , stbc ) . in , the `` rank - and - determinant criterion '' of stbc designwas proposed to maximize both the diversity gain and the coding gain of the mimo system with an ml decoding .recently , in an stbc design criterion was derived to achieve full diversity when the pic group decoding is used at the receiver . in the following ,we cite the main result of the stbc design criterion proposed in .[ prop1 ] ( * ? ? ? * theorem 1 ) [ _ full - diversity criterion under pic group decoding _ ] for an stbc with the pic group decoding , the full diversity is achieved when 1 .the code satisfies the full rank criterion , i.e. , it achieves full diversity when the ml receiver is used ; _ and _ 2 . are linearly independent vector groups for any . in ,the stbc achieving full diversity with pic group decoding were proposed for and transmit antennas . however, a systematic code design of the full - diversity stbc with pic group decoding remains an open problem .[ prop2 ] [ _ full - diversity criterion under pic - sic group decoding _ ] for an stbc with the pic - sic group decoding , the full diversity is achieved when 1 .the code satisfies the full rank criterion , i.e. , it achieves full diversity when the ml receiver is used ; _ and _ 2 . at each decoding stage , , which corresponds to the current to - be decoded symbol group , the remaining groups corresponding to yet uncoded symbol groups are linearly independent vector groups for any .in this section , we first propose a systematic design of high - rate stbc which has a rate up to symbols per channel use and achieves full diversity with the ml decoding .the systematic design of the stbc is structured with multiple diagonal layers .then , we prove that the proposed stbc with two diagonal layers can obtain full diversity with the pic group decoding and the code rate can be up to symbols per channel use . finally , we prove that the proposed stbc with any number of diagonal layers can obtain full diversity with pic - sic group decoding and the code rate can be up to symbols per channel use .our proposed space - time code , i.e. , in ( [ eqn : y ] ) , is of size ( for any given , and ) and will be transmitted from antennas over time slots .let . the symbol stream ( composed of complex symbols chosen from qam constellation and then scaled by } ] is given by and the information symbol vector is given by ^t , \end{aligned}\ ] ] for .the proposed stbc in ( [ eqn : code ] ) has asymptotically full rate when the block length is sufficiently large . 
in the codeword in ( [ eqn : code ] ) ,a total number of independent information symbols are encoded into the codeword , which is then transmitted from antennas over time slots .the code rate of transmission is therefore for a very large block length , it can be seen that the rate of the proposed st coding scheme approaches symbols per channel use , i.e. the full rate . in ,the rotation matrix was designed for diagonal stbc to achieve the full diversity gain and the optimal diversity product . with the optimal cyclotomic lattices design for transmit antennas , from ( * ? ? ?* table i ) we can get a set of integers and let .then , the optimal lattice is given by ( * ? ? ?* eq . ( 16 ) ) .\end{aligned}\ ] ] where with and are distinct integers such that and are co - prime for any ._ example 1 _ : for transmit antennas we can choose and according to ( * ? ? ?* table i ) .then , in order to ensure that and are co - prime for any we can obtain , , .when , the signal constellation is located on the equal literal triangular lattice . when , can be and can be , and in this case the signal constellation is located on the square lattice ._ example 2 _ : for transmit antennas we can select and . then , , , , .the cyclotomic design of the matrix is vital for the design of the algebraic stbc . in the following ,we show some properties of the matrix that will be used later for our design . the diagonal cyclotomic st code defined by \right\} ] and is given by ( [ eqn : cyclo ] ) .every entry of the matrix in ( [ eqn : cyclo ] ) is non - zero .this property is obvious from ( [ eqn : cyclo ] ) .we show the main result of the proposed stbc when an ml decoding is used at the receiver , as follows .consider a mimo transmission with transmit antennas and receive antennas over block fading channels .the stbc as described in ( [ eqn : code ] ) achieves full diversity under the ml decoding . in order to prove that the st code in ( [ eqn : code ] ) can obtain full diversity under ml decoding , it is sufficient to prove that achieves full rank for any distinct pair of st codewords and . for any pair of distinct codewords and , there exists at least one index ( ) such that , where and are related to and from ( [ eqn : rm ] ) , respectively .let denote the minimum index of vectors satisfying .then , for any index with , it must have .define as the difference between symbols and . then, from ( [ eqn : code ] ) can be expressed as , \end{aligned}\ ] ] where for .this is because for , it exists . due to the suitably chosen constellation rotation matrix in ( [ eqn : cyclo ] ) , must have nonzero entries for any . then, the matrix has full rank .the full rankness of can be examined similar to that for the toeplitz code ( or delay diversity code ) by checking if the columns of are linearly independent .specifically , we establish with ^t ] and the group is linearly independent from the groups for any and where is given by ( [ eqn : g ] ) , according to _ proposition 2 _ the full diversity can be easily proved .the detailed proof is omitted .in this section , we show a few code design examples .we denote the code constructed by ( [ eqn : code ] ) for given parameters : the number of transmit antennas , the block length of the code , and the number of groups to be decoded in the pic group decoding . 
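the diagonal-layer structure described above can be sketched in a few lines: under the reading that each of the p layers carries m rotated symbols placed on its own diagonal of an (m + p - 1) x m codeword, the rate is mp/(m + p - 1) symbols per channel use, which matches the two-layer and asymptotic rates quoted earlier. this layout, and the rotation used below (a generic vandermonde matrix built from roots of unity), are illustrative stand-ins for the exact cyclotomic lattice design cited in the text.

```python
import numpy as np

# Assumed diagonal-layer codeword layout: rows = time slots, columns = antennas.
# Layer p contributes x_p = Theta @ s_p along the p-th (shifted) diagonal.

def placeholder_rotation(m):
    # generic unit-magnitude Vandermonde rotation; NOT the cited optimal design
    zeta = np.exp(2j * np.pi / (4 * m))
    return np.array([[zeta ** ((4 * l + 1) * k) for l in range(m)]
                     for k in range(m)]) / np.sqrt(m)

def diagonal_layer_codeword(layers, m):
    """layers: list of P length-m information symbol vectors."""
    P = len(layers)
    Theta = placeholder_rotation(m)
    T = np.zeros((m + P - 1, m), dtype=complex)
    for p, s in enumerate(layers):
        x = Theta @ np.asarray(s, dtype=complex)
        for t in range(m):            # antenna index = column
            T[t + p, t] = x[t]        # p-th diagonal, shifted down by p rows
    return T

rng = np.random.default_rng(1)
m, P = 4, 2
qam = np.array([re + 1j * im for re in (-1, 1) for im in (-1, 1)])
layers = [rng.choice(qam, m) for _ in range(P)]
T = diagonal_layer_codeword(layers, m)
print(T.shape, "rate =", m * P / (m + P - 1), "symbols per channel use")
```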
for notational brevity, we only show the equivalent channel of the proposed codes for miso systems .consider a code for 2 transmit antennas with 3 time slots .according to the code structure ( [ eqn : code ] ) , we have ,\end{aligned}\ ] ] where ^t=\mathbf{\theta}[\begin{array}{cc } s_1 & s_2 \end{array } ] ^t ] .the constellation rotation matrix can be chosen as ,\ ] ] where and with .the code rate of the code is .in fact , this code is equivalent to the one proposed in ( * ? ? ?* section vi - _ example 1 _ ) . the equivalent channel of the code is given by .\end{aligned}\ ] ] the grouping scheme for the pic group decoding is and .it can be seen that and are linearly independent . then, the code can obtain full diversity with the pic group decoding . for given ,the code achieving full diversity with the pic group decoding can be designed as follows , .\end{aligned}\ ] ] this code has a code rate of and two groups to be decoded . the equivalent channel of the code is & \left[\begin{array}{c } \mathbf{0}_{1\times 4 } \\ \mathbf{b } \end{array } \right ] \end{array } \right ] , \end{aligned}\ ] ] where is given by \end{aligned}\ ] ] with being the ( )-th entry of the matrix for .the grouping scheme for the pic group decoding is and .it can be seen that the groups and are linearly independent to each other . then, the code can obtain full diversity with the pic group decoding .consider time slots .we can get .\end{aligned}\ ] ] the code rate of the code is which has the same rate as the one proposed in ( * ? ? ?* section vi - _ example 2 _ ) .the equivalent channel of the code is & \left[\begin{array}{c } \mathbf{0}_{2\times 4 } \\\mathbf{b } \end{array } \right ] \end{array } \right ] , \end{aligned}\ ] ] where is given by ( [ eqn : b ] ) .because the groups and are linearly independent to each other .then , the code can obtain full diversity with the pic group decoding .moreover , we can also design the code for with layers ( i.e. , ) as follows , .\end{aligned}\ ] ] the code rate of the code is and the equivalent channel is given by , \end{aligned}\ ] ] where , & \mathbf{g}_2=\left[\begin{array}{c } \mathbf{0}_{1\times 4}\\ \mathbf{b } \\ \mathbf{0}_{1\times 4 } \end{array } \right ] , & \mathbf{g}_3=\left[\begin{array}{c } \mathbf{0}_{1\times 4 } \\ \mathbf{0}_{1\times 4}\\ \mathbf{b } \\ \end{array } \right ] .\end{array}\nonumber \end{aligned}\ ] ] it can be proved that the groups , , and are not linearly independent groups .therefore , according to _ proposition 1 _ , the code can not achieve the full diversity with the pic group decoding .however , the code can obtain full diversity with pic - sic group decoding .this is because is linearly independent from and , and is linearly independent from .according to _ proposition 2 _ , with pic - sic group decoding and a proper decoding order or , the code can achieve the full diversity . for given and , the code is designed as follows , .\end{aligned}\ ] ] the code rate of the code is .the equivalent channel is & \left[\begin{array}{c } \mathbf{0}_{1\times 5 } \\\mathrm{diag}(\mathbf{h})\mathbf{\theta}_5 \end{array } \right ] \end{array } \right ] , \end{aligned}\ ] ] where is the rotation matrix of size .the grouping scheme for the pic group decoding of is and .it can be seen that the groups and are linearly independent to each other . 
then, the code can obtain full diversity with the pic group decoding .notice that the code design in ( [ eqn : code ] ) can only achieve the full diversity with pic group decoding for two diagonal layers ( r.f ._ theorem 2 _ ) and the code rate is not larger than symbols per channel use . with ( ) diagonal layers in the code ( [ eqn : code ] ) , the rate can be increased but the independence among channel groups in ( [ eqn : h0 ] ) is not satisfied , thereby may lose the full diversity gain . in this section ,we propose a new code design which can achieve full diversity with pic group decoding and a rate above . for ( is an integer ) ,our proposed stbc for transmit antennas is given by ,\ ] ] where the symbol vector ^t ] is the information symbol vector . for , our proposed stbc given by .\ ] ] for , our proposed stbc is given by .\end{aligned}\ ] ] the proposed stbc has the rate as follows : for , in the codeword ( [ eqn : codeword ] ) information symbols are sent over time slots .thus , the code rate is .similarly , it is easy to prove the code rate for cases of and .[ asymptotic rate ] it is obvious that the code rate ( [ eqn : rate2 ] ) approaches to when a large number of transmit antennas are used .its full diversity property will be proved in the next subsection .however , the full diversity code proposed by the first design in ( [ eqn : code ] ) can not achieve a rate more than 2 , which was shown in _ theorem 2_. [ rate comparison ] note that the code design in ( [ eqn : code ] ) can achieve full diversity with the pic group decoding for only and the rate is . in tablei , the comparison of the code rate between the first code design in ( [ eqn : code ] ) and the second design in ( [ eqn : codeword])-([eqn : codeword3 ] ) is given .c + + + + + + + + + & c code ( [ eqn : code ] ) + .comparison in code rate ( symbols per channel use ) [ cols="^",options="header " , ] + + [ decoding complexity ] the decoding complexity of the proposed stbc with the pic group decoding is equivalent to the ml decoding of independent information symbols jointly .according to _ definition 2 _ , the ml decoding complexity in the pic group decoding algorithm is .next , we show that the proposed stbc in ( [ eqn : codeword ] ) achieves full diversity when a pic group decoding is used at the receiver. let the stbc as described in ( [ eqn : codeword ] ) be used at the transmitter .there are transmit antennas and receive antenna .if the received signal is decoded using the pic group decoding with the grouping scheme , where for , then the code achieves the full diversity . in order to prove the _ theorem 4_ , let us first introduce the following lemma .consider the system as described in _ theorem 4 _ with , and the channel matrix ^t ] .firstly , we show that the proposed stbc in ( [ eqn : codeword ] ) achieves the full diversity with ml decoding , i.e. , achieves full rank for any distinct pair of codewords and .since and are distinct , at least one pair of the information symbol vectors and are different .suppose only one pair of the information symbol vectors are distinct , say and . by the _ property 1 _ , for all . 
in this case ,the matrix is exactly a diagonal matrix with the rows rearranged .thus , it achieves full rank .if all three pairs of the information symbol vectors are distinct , then for all and .it is easy to see from the codeword structure that is exactly a lower triangular matrix with nonzero diagonal entries .thus , it achieves full rank .if two pairs of the information symbol vectors are distinct and the other one pair of the information symbol vectors are the same , say . after replacing the ( )th row by the ( )th row for ,the matrix becomes a lower triangular matrix with nonzero diagonal entries .thus , it achieves full rank .as shown above , has full rank for all \neq[\begin{array}{ccc } \mathbf{s}'_1 & \mathbf{s}'_2 & \mathbf{s}'_3 \end{array } ] ] is given by ( [ eqn : rm ] ) .then , ( [ eqn : code ] ) can be written as for miso systems , we have ^t ] .[ def3] .let be groups of vectors .the vector groups are said to be linearly independent if for , is independent of the remaining vector groups . using ( [ eqn : g ] ) , we can express and as , respectively , ,\,\,\,\,\,\,\ , \mathbf{g}_2= \left[\begin{array}{c } \mathbf{0}_{1\times m } \\\mathrm{diag}(\mathbf{h})\mathbf{\theta } \end{array } \right ] .\end{aligned}\ ] ] let be the -th ( ) column of the matrix . in order to prove that is independent of , from _ definition [ def2 ] _ we see it is equivalent to prove that the vector is independent of for all .further , using _ definition [ def1 ] _ it is equivalent to prove that where denotes an vector .equivalently , ( [ eqn : g1 ] ) can be expressed as where is a constant and . in order to prove ( [ eqn : g1b ] ) , we can use proof by contradiction .that is , we assume that and . then , we examine the -th equation ( from top to bottom ) of ( [ eqn : g1b ] ) and get for , where denotes the -th entry of the matrix .this is because all top rows of are all zeros when as seen from ( [ eqn : g ] ) . again from ( [ eqn : g ] ) , we have where denotes the -th entry of the matrix . for given and for all ,we get .this contradicts with the assumption .therefore , ( [ eqn : g1b ] ) holds and _ step 1 _ is proved .let denote the -th ( ) column of .in order to prove that is independent of , from _ definition [ def2 ] _ it is equivalent to prove that the vector is independent of for all .further , using _ definition [ def1 ] _ it is equivalent to prove that where denotes an vector .equivalently , ( [ eqn : gq ] ) can be expressed as where is a constant and . in order to prove ( [ eqn : gqb ] ), we can use proof by contradiction .we assume that and .note the first row of is all - zero .then , we examine the -th equation(from top to bottom ) of ( [ eqn : gqb ] ) and get for , where denotes the -th entry of the matrix .this is because the -th row of is all - zero when as seen from ( [ eqn : g ] ) . againfrom ( [ eqn : g ] ) , we have where denotes the -th entry of the matrix . for given and for all , we then get .this contradicts with the assumption .therefore , ( [ eqn : gqb ] ) holds and _ step 2 _ is proved .s. m. alamouti , `` a simple transmit diversity technique for wireless communication , '' _ ieee j. sel .areas commun .14511458 , oct .v. tarokh , h. jafarkhani , and a. r. calderbank , `` space - time block codes from orthogonal designs , '' _ ieee trans .inf . theory _14561467 , july 1999 .x. liang , `` orthogonal designs with maximal rates , '' _ ieee trans .inf . theory _49 , pp . 24682503 , oct .2003 .k. lu , s. 
fu , and x .-xia , `` closed form designs of complex orthogonal space - time block codes of rates for or transmit antennas , '' _ ieee trans .inf . theory _ ,43404347 , dec . 2005 .h. wang and x .-g xia , `` upper bounds of rates of complex orthogonal space - time block codes , '' _ ieee trans .inf . theory _ , vol 49 , pp .27882796 , oct .2003 .o. tirkkonen , a. boariu , and a. hottinen , `` minimal non - orthogonality rate space - time block code for tx antennas , '' in _ proc .ieee 6th int . symp . on spread - spectrum tech . and appl .( isssta 2000 ) _ , sep . 2000 ,. 429432 .n. sharma and c. b. papadias , `` full - rate full - diversity linear quasi - orthogonal space - time codes for any number of transmit antennas , '' _eurasip j. applied signal processing _, vol . 9 , pp .12461256 , aug .2004 .p. elia , k. r. kumar , s. a. pawar , p. v. kumar , and h .- f .lu , `` explicit space - time codes achieving the diversity - multiplexing gain tradeoff , '' _ ieee trans .38693884 , sep .2006 .g. wang , h. liao , h. wang , and x .-xia , `` systematic and optimal cyclotomic lattices and diagonal space - time block code designs , '' _ ieee trans .inf . theory _ , vol .3348 - 3360 , dec . 2004 .g. j. foschini , `` layered space - time architecture for wireless communication in a fading environment when using multi - element antennas , '' _ bell labs tech .vol . 1 , no. 2 , pp . 4159 , 1996 .
|
a partial interference cancellation ( pic ) group decoding based space - time block code ( stbc ) design criterion was recently proposed by guo and xia , in which the trade - off between decoding complexity and code rate is addressed while full diversity is achieved . in this paper , two designs of stbc are proposed for any number of transmit antennas that can obtain full diversity when a pic group decoding ( with a particular grouping scheme ) is applied at the receiver . with the pic group decoding and an appropriate grouping scheme for the decoding , the proposed stbc are shown to obtain the same diversity gain as the ml decoding , but with a low decoding complexity . the first proposed stbc is designed with multiple diagonal layers ; with the pic group decoding it can obtain full diversity for the two - layer design and the rate is up to symbols per channel use , while with the pic - sic group decoding it can obtain full diversity for any number of layers and the rate can be full . the second proposed stbc can obtain full diversity and a rate up to with the pic group decoding . some code design examples are given , and simulation results show that the newly proposed stbc address the rate - performance - complexity tradeoff of mimo systems well . _ index terms _ : diversity techniques , space - time block codes , linear receiver , partial interference cancellation .
|
state complexity is a type of descriptional complexity based on _ deterministic finite automaton _ ( dfa ) model .the state complexity of an operation on regular languages is the number of states that are necessary and sufficient in the worst case for the minimal , complete dfa that accepts the resulting language of the operation .while many results on the state complexities of individual operations , such as union , intersection , catenation , star , reversal , shuffle , orthogonal catenation , proportional removal , and cyclic shift , have been obtained in the past 15 years , the research of state complexities of combined operations , which was initiated by a. salomaa , k. salomaa , and s. yu in 2007 , is attracting more attention .this is because , in practice , a combination of several individual operations , rather than only one individual operation , is often performed in a certain order .for example , in order to obtain a precise regular expression , a combination of basic operations is usually required . in recent publications , it has been shown that the state complexity of a combined operation is not always a simple mathematical composition of the state complexities of its component operations .this is sometimes due to the structural properties of the dfa accepting the resulting language obtained from a prior operation of a combined operation .for example , the languages that are obtained from performing reversal and reach the upper bound of the state complexity of this operation are accepted by dfas such that half of their states are final ; and the initial state of the dfa accepting a language obtained after performing star is always a final state . as a result , the resulting language obtained from a prior operation may not be among the worst cases of the subsequent operation .since such issues are not concerned by the study of the state complexity of individual operations , they are certainly important in the research of the state complexity of combined operations .although the number of combined operations is unlimited and it is impossible to study the state complexities of all of them , the study on combinations of two individual operations is clearly necessary . in this paper, we study the state complexities of reversal combined with catenation , i.e. , , and star combined with catenation , i.e. , , for minimal complete dfas and of sizes , respectively . for , we will show that the general upper bound , which is close to the composition of the state complexities of reversal and catenation , is reachable when , and it can be lower to and when and and when and , respectively . for , we will show that , if has only one final state and it is also the initial state , i.e. , , the state complexity of catenation ( also ) is , which is lower than that of catenation . in the other cases , that is when contains some final states that are not the initial state , the state complexity of is instead of , the composition of the state complexities of star and catenation . 
in the next section, we introduce the basic definitions and notations used in the paper .then , we prove our results on reversal combined with catenation and star combined with catenation in sections [ sec : rev - cat ] and [ sec : star - cat ] , respectively .we conclude the paper in section [ sec : conclusion ] .a dfa is denoted by a 5-tuple , where is the finite set of states , is the finite input alphabet , is the state transition function , is the initial state , and is the set of final states .a dfa is said to be complete if is defined for all and .all the dfas we mention in this paper are assumed to be complete .we extend to in the usual way . a _ non - deterministic finite automaton _ ( nfa )is denoted by a 5-tuple , where the definitions of , , , and are the same to those of dfas , but the state transition function is defined as , where denotes the power set of , i.e. the set of all subsets of . in this paper , the state transition function is often extended to . the function is defined by , for and .we just write instead of if there is no confusion .a word is accepted by a finite automaton if .two states in a finite automaton are said to be _ equivalent _ if and only if for every word , if is started in either state with as input , it either accepts in both cases or rejects in both cases .it is well - known that a language which is accepted by an nfa can be accepted by a dfa , and such a language is said to be _ regular_. the language accepted by a dfa is denoted by .the reader may refer to for more details about regular languages and finite automata .the _ state complexity _ of a regular language , denoted by , is the number of states of the minimal complete dfa that accepts .the state complexity of a class of regular languages , denoted by , is the supremum among all , .the state complexity of an operation on regular languages is the state complexity of the resulting languages from the operation as a function of the state complexity of the operand languages .thus , in a certain sense , the state complexity of an operation is a worst - case complexity .in this section , we study the state complexity of for an -state dfa language and an -state dfa language .we first show that the state complexity of is upper bounded by in general ( theorem [ l_1^r l_2 upper bound ] ) .then we prove that this upper bound can be reached when ( theorem [ l_1^r l_2 lower bound ] ) .next , we investigate the case when and and prove the state complexity can be lower to in such a case ( theorem [ l_1^r l_2 state complexity m=1 n>=1 ] ) .finally , we show that the state complexity of is when and ( theorem [ l_1^r l_2 state complexity m>=2 n=1 ] ) .now , we start with a general upper bound of state complexity of for any integers .[ l_1^r l_2 upper bound ] for two integers , let and be two regular languages accepted by an -state dfa and an -state dfa , respectively .then there exists a dfa of at most states that accepts .let be a dfa of states , final states and .let be another dfa of states and .let be an nfa with initial states . where and .clearly , by performing subset construction on nfa , we can get an equivalent , -state dfa such that . 
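since most of the upper-bound arguments that follow count the reachable states produced by the subset construction, a minimal implementation of that construction is sketched here. states are arbitrary hashable labels, and the example nfa is a toy one, unrelated to the witness automata of the later sections.

```python
# Textbook subset construction (NFA -> DFA determinization).
# delta maps (state, letter) -> set of successor states; missing keys mean no move.

def subset_construction(alphabet, delta, initial, finals):
    start = frozenset(initial)
    dfa_delta, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_finals = {S for S in seen if S & set(finals)}
    return seen, dfa_delta, start, dfa_finals

# tiny example: NFA over {a, b} accepting strings ending in "ab"
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
states, d, s0, F = subset_construction('ab', delta, {0}, {2})
print(len(states), "reachable subset states")
```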
since has only one final state , we know that .thus , has final states in total .now we construct a dfa accepting the language , where from the above construction , we can see that all the states in starting with must end with such that .there are in total states which do nt meet this .thus , the number of states of the minimal dfa accepting is no more than this result gives an upper bound for the state complexity of .next we show that this bound is reachable when .[ l_1^r l_2 lower bound ] given two integers , there exists a dfa of states and a dfa of states such that any dfa accepting needs at least states .let be a dfa , shown in figure [ dfam - rev - cat ] , where , , and the transitions are given as : * * * + if , * of theorem [ l_1^r l_2 lower bound ] showing that the upper bound in theorem [ l_1^r l_2 upper bound ] is reachable when let be a dfa , shown in figure [ dfan - rev - cat ] , where , , and the transitions are given as : * * * * of theorem [ l_1^r l_2 lower bound ] showing that the upper bound in theorem [ l_1^r l_2 upper bound ] is reachable when now we design a dfa , where , , , and the transitions are defined as : it is easy to see that is a dfa that accepts .we prove that is minimal before using it .\(i ) we first show that every state , is reachable from .there are three cases .* . if and only if . * .let , . * .let , , . , where \(ii ) any two different states and in are distinguishable . without loss of generality, we may assume that .let .then a string can distinguish these two states because due to ( i ) and ( ii ) , is a minimal dfa with states which accepts .now let be another dfa , where and for each state and each letter as we mentioned in last proof , all the states starting with must end with such that .clearly , accepts the language and it has states .now we show that is a minimal dfa .\(i ) every state is reachable .we consider the following five cases : * , . is the sink state of .* , .let , , .note that , because guarantees . , where please note that when .* , . in this case , let , , . , where * , , . let , , and , , .we can find a string such that , where * , , , .let , , and , , .since is in , according to the definition of , has to be in as well .there exists a string such that , where * , , .let , , and , , .in this case , we have where states and have been proved to be reachable in case 5 .\(ii ) we then show that any two different states and in are distinguishable .* . without loss of generality, we may assume that . let .a string can distinguish them because * , . without loss of generality , we assume that .let .then there always exists a string such that since all the states in are reachable and pairwise distinguishable , dfa is minimal .thus , any dfa accepting needs at least states .this result gives a lower bound for the state complexity of when .it coincides with the upper bound shown in theorem [ l_1^r l_2 upper bound ] exactly .thus , we obtain the state complexity of the combined operation for and .[ l_1^r l_2 state complexity ] for any integers , let be an -state dfa language and be an -state dfa language . then states are both necessary and sufficient in the worst case for a dfa to accept . in the rest of this section , we study the remaining cases when either or .we first consider the case when and . in this case , or . holds no matter is or , since and .it has been shown in that states are both sufficient and necessary in the worst case for a dfa to accept the catenation of a 1-state dfa language and an -state dfa language , . 
when and , it is also easy to see that state is sufficient and necessary in the worst case for a dfa to accept , because is either or .thus , we have the following theorem concerning the state complexity of for and . [ l_1^r l_2 state complexity m=1 n>=1 ]let be a 1-state dfa language and be an -state dfa language , .then states are both sufficient and necessary in the worst case for a dfa to accept .now , we study the state complexity of for and .let us start with the following upper bound .[ l_1^r l_2 upper bound m>=2 n=1 ] for any integer , let and be two regular languages accepted by an -state dfa and a -state dfa , respectively .then there exists a dfa of at most states that accepts .let be a dfa of states , , final states and .let be another dfa of state and .since is a complete dfa , as we mentioned before , is either or .clearly , .thus , we need to consider only the case .we construct an nfa with initial states which is similar to the proof of theorem [ l_1^r l_2 upper bound ] . if where and .it is easy to see that by performing subset construction on nfa , we get an equivalent , -state dfa such that . because has only one final state .thus , has final states in total .define where , , and for any and , the automaton is exactly the same as except that s final states are made to be sink states and these sink , final states are merged into one , since they are equivalent .when the computation reaches the final state , it remains there .now , it is clear that has states and .this theorem shows an upper bound for the state complexity of for and .next we prove that this upper bound is reachable .[ l_1^r l_2 lower bound m=2 or 3 n=1]given an integer or , there exists an -state dfa and a -state dfa such that any dfa accepting needs at least states .when and .we can construct the following witness dfas .let be a dfa , where , and the transitions are given as : * * let be the dfa accepting .then the resulting dfa for is where * * when and .the witness dfas are as follows .let be a dfa , where , and the transitions are : * * * let be the dfa accepting .the resulting dfa for is where * * * the above result shows that the bound is reachable when is equal to 2 or 3 and .the last case is and .[ l_1^r l_2 lower bound m>=4 n=1 ] given an integer , there exists a dfa of states and a dfa of state such that any dfa accepting needs at least states .let be a dfa , shown in figure [ dfam - rev - cat - n=1 ] , where , , , and the transitions are given as : * * * * of theorem [ l_1^r l_2 lower bound m>=4 n=1 ] showing that the upper bound in theorem [ l_1^r l_2 upper bound m>=2 n=1 ] is reachable when and ] let be the dfa accepting .then .now we design a dfa similar to the proof of theorem [ l_1^r l_2 lower bound ] , where , , , and the transitions are defined as : it is easy to see that is a dfa that accepts .since the transitions of on letters , , and are exactly the same as those of dfa in the proof of theorem [ l_1^r l_2 lower bound ] , we can say that is minimal and it has states , among which states are final .define where , , and for any and , dfa is the same as except that s final states are changed into sink states and merged to one sink , final state , as we did in the proof of theorem [ l_1^r l_2 upper bound m>=2 n=1 ] .clearly , has states and .next we show that is a minimal dfa .\(i ) every state is reachable from .the proof is similar to that of theorem [ l_1^r l_2 lower bound ] .we consider the following four cases : * . * . 
* .assume that , .note that because all the final states in have been merged into . in this case , * .assume that , , . , where \(ii ) any two different states and in are distinguishable . since is the only final state in , it is inequivalent to any other state .thus , we consider the case when neither of and is . without loss of generality, we may assume that . let . is always greater than because all the states which include have been merged into .then a string can distinguish these two states because since all the states in are reachable and pairwise distinguishable , is a minimal dfa .thus , any dfa accepting needs at least states . after summarizing theorem [ l_1^r l_2 upper bound m>=2 n=1 ] , theorem [ l_1^r l_2lower bound m>=4 n=1 ] and lemma [ l_1^r l_2 lower bound m=2 or 3 n=1 ] , we obtain the state complexity of the combined operation for and .[ l_1^r l_2 state complexity m>=2 n=1 ] for any integer ,let be an -state dfa language and be a -state dfa language .then states are both sufficient and necessary in the worst case for a dfa to accept .in this section , we investigate the state complexity of for two dfas and of sizes , respectively .we first notice that , when , the state complexity of is 1 for any .this is because is complete ( is either or ) , and we have either or .thus , is always accepted by a 1 state dfa .next , we consider the case where has only one final state and it is also the initial state . in such a case , is also accepted by , and hence the state complexity of is equal to that of .we will show that , for any of size in this form and any of size , the state complexity of ( also ) is ( theorems [ thm : star - cat - upper - special ] and [ thm : star - cat - lower - special ] ) , which is lower than the state complexity of catenation in the general case .lastly , we consider the state complexity of in the remaining case , that is when has at least a final state that is not the initial state and . we will show that its upper bound ( theorem [ thm : star - cat - upper ] ) coincides with its lower bound ( theorem [ thm : star - cat - lower ] ) , and the state complexity is .now , we consider the case where dfa has only one final state and it is also the initial state , and first obtain the following upper bound of the state complexity of ( ) , for any dfa of size .[ thm : star - cat - upper - special ] for integers and , let and be two dfas with and states , respectively , where has only one final state and it is also the initial state .then , there exists a dfa of at most states that accepts , which is equal to .let and .we construct a dfa such that intuitively , contains the pairs whose first component is a state of and second component is a subset of . since is the final state of , without reading any letter , we can enter the initial state of .thus , states such that can never be reached in , because is complete . moreover, does not contain those states whose first component is and second component does not contain . clearly , has states , and we can verify that .next , we show that this upper bound can be reached by some witness dfas in the specific form . for theorem [ thm : star - cat - lower - special ] when ] for theorem [ thm : star - cat - lower - special ] when ] [ thm : star - cat - lower - special ] for any integers and , there exist a dfa of states and a dfa of states , where has only one final state and it is also the initial state , such that any dfa accepting the language , which is equal to , needs at least states . 
when , the witness dfas used in the proof of theorem 1 in can be used to show that the upper bound proposed in theorem [ thm : star - cat - upper - special ] can be reachednext , we consider the case when .we provide witness dfas and , depicted in figures [ fig : dfaa - star - cat - special ] and [ fig : dfab - star - cat - special ] , respectively , over the three letter alphabet . is defined as where , and the transitions are given as * , for , * , for , where . is defined as where , where the transitions are given as * , for , * , for , * , , for . following the construction described in the proof of theorem [ thm : star - cat - upper - special ] ,we construct a dfa that accepts ( also ) . to prove that is minimal , we show that ( i ) all the states in are reachable from , and ( ii ) any two different states in are not equivalent . for ( i ), we show that all the state in are reachable by induction on the size of .the basis clearly holds , since , for any , state is reachable from by reading string , and state can be reached from state on string , for any and . in the induction steps, we assume that all the states such that are reachable .then , we consider the states where .let such that .we consider the following three cases : 1 . and . for any state , state can be reached as where is of size .2 . and . for any state , state can be reached from state by reading string .3 . . in such a case, the first component of state can not be .thus , for any state , state can be reached from state by reading string .next , we show that any two distinct states and in are not equivalent .we consider the following two cases : 1 . . without loss of generality , we assume . then , string can distinguish the two states , since and .2 . and . without loss of generality, we assume that . then , there exists a state .it is clear that , when , string can distinguish the two states , and when , string can distinguish the two states since can not be . due to( i ) and ( ii ) , dfa needs at least states and is minimal . in the rest of this section , we focus on the case where dfa contains at least one final state that is not the initial statethus , this dfa is of size at least 2 .we first obtain the following upper bound for the state complexity .[ thm : star - cat - upper ] let be a dfa such that and , and be a dfa such that .then , there exists a dfa of at most states that accepts .we denote by .then , .we construct a dfa for the language , where and are the languages accepted by dfas and , respectively .let , where the initial state is .the set of final states is defined to be .the transition relation is defined as follows : where , , , and .intuitively , is equivalent to the nfa obtained by first constructing an nfa that accepts , then catenating this new nfa with dfa by -transitions .note that , in the construction of , we need to add a new initial and final state . however , this new state does not appear in the first component of any of the states in .the reason is as follows .first , note that this new state does not have any incoming transitions .thus , from the initial state of , after reading a nonempty word , we will never return to this state . 
as a result ,states such that , , and is never reached in dfa except for the state .then , we note that , in the construction of , states and should reach the same state on any letter in .thus , we can say that states and are equivalent , because either of them is final if , and they are both final states otherwise .hence , we merge this two states and let be the initial state of . also , we notice that states such that can never be reached in , because is complete .moreover , does not contain those states whose first component contains a final state of and whose second component does not contain the initial state of .therefore , we can verify that dfa indeed accepts , and it is clear that the size of is then , we show that this upper bound is reachable by some witness dfas . for theorem [ thm : star - cat - lower ] ] for theorem [ thm : star - cat - lower ] ] [ thm : star - cat - lower ] for any integers , there exist a dfa of states and a dfa of states such that any dfa accepting needs at least states .we define the following two automata over a four letter alphabet .let , shown in figure [ fig : dfaa - star - cat ] , where , and the transitions are defined as * , for , * , , for , * , for , .let , shown in figure [ fig : dfab - star - cat ] , where , and the transitions are defined as * , for , , * , for , * , for .let be the dfa accepting the language which is constructed from and exactly as described in the proof of theorem [ thm : star - cat - upper ] .now , we prove that the size of is minimal by showing that ( i ) any state in can be reached from the initial state , and ( ii ) no two different states in are equivalent .we first prove ( i ) by induction on the size of the second component of the states in .* basis : * for any , state can be reached from the initial state on string .then , by the proof of theorem 5 in , it is clear that state of , where and , is reachable from state on strings over letters and . *induction step : * assume that all the states in such that and are reachable .then , we consider the states in where and .let such that .note that states such that and are reachable as follows : then , states such that and can be reached as follows : once again , by using the proof of theorem 5 in , states in , where and , can be reached from the state on strings over letters and .next , we show that any two states in are not equivalent .let and be two different states in .we consider the following two cases : 1 . . without loss of generality , we assume .then , there exists a state .it is clear that string is accepted by starting from state , but it is not accepted starting from state .2 . and .we may assume that and let .then , state reaches a final state on string , but state does not on the same string .note that , when , we can say that .due to ( i ) and( ii ) , dfa has at least reachable states , and any two of them are not equivalent .in this paper , we have studied the state complexities of two combined operations : reversal combined with catenation and star combined with catenation .we showed that , due to the structural properties of dfas obtained from reversal and star , the state complexities of these two combined operations are not equal but close to the mathematical compositions of the state complexities of their individual participating operations .9 c. campeanu , k. culik , k. salomaa , s. 
yu : state complexity of basic operations on finite language , in : _ proceedings of the fourth international workshop on implementing automata viii _ 1 - 11 , lncs 2214 , 1999 , 60 - 70 m. holzer , m. kutrib : state complexity of basic operations on nondeterministic finite automata , in : _ proceedings of international conference on implementation and application of automata _ 2002 , lncs 2608 , 2002 , 148 - 157
|
in this paper , we show that , due to the structural properties of the automaton resulting from a prior operation , the state complexity of a combined operation may not be equal to , yet can remain close to , the mathematical composition of the state complexities of its component operations . in particular , we provide two witness combined operations : reversal combined with catenation and star combined with catenation .
|
the image formation process is an inverse problem that can be modeled as the following linear system where is the non - negative observed data , represents an ideal , undistorted image to be recovered , is a typically ill - conditioned matrix describing the blurring effect , is a known non - negative background radiation and is the noise corrupting the data .a typical assumption for the matrix is that it has non - negative elements and each row and column has at least one positive entry . because of the ill - conditioning affecting the problem and the presence of noise on the measured data , a trivial approach that seeks the solution of is in general not successful ; thus , alternative strategies must be exploited .variational approaches to image restoration suggest to recover the unknown object through iterative schemes suited for the following constrained minimization problem where is a continuously differentiable convex function measuring the difference between the model and the data .the definition of the function depends on the noise type introduced by the acquisition system .particularly , in the case of additive white gaussian noise the cost function is characterized by a least squares distance of the form while , when the data are affected by poisson noise , the so - called kullback - leibler ( kl ) divergence is used : where we assume that and , . in both cases , taking into account also the assumptions on , we may observe that is non - negative , convex and coercive on the non - negative orthant , which means that problem has global solutions . moreover , if the equation has only the solution , then is strictly convex , while the same conclusion holds for if the additional condition , , is satisfied . in these settings, the strict convexity of implies that the solution of is unique .+ due to the ill - posedness of the image restoration problem , one is not interested in computing the minimum points of in or because the exact solution of does not provide a sensible estimate of the unknown image .for this reason , iterative minimization methods are usually exploited to obtain acceptable solutions by arresting the algorithm before convergence through some stopping criteria , as the classic morozov s discrepancy principle in the case of gaussian noise ( see e.g. ) or some recently proposed strategies for poisson data .+ another technique to tackle to this problem requires to exactly solve the following optimization problem where is a regularization term adding a priori information on the solution and is a positive parameter balancing the role of the two objective function components and .a frequently used function for the regularization term is a smooth approximation of the total variation , also known in the literature as _ hypersurface potential _ ( hs ) , defined as where the discrete gradient operator is set through the standard finite difference with periodic boundary conditions when and is one of the two considered cost functions , the objective function in is non - negative , strictly convex and coercive on the non - negative orthant .it follows that problem has a unique solution .+ both formulations of the imaging problem require an effective optimization method able to provide a meaningful solution in a reasonable time . 
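as a concrete illustration of the two data - fidelity terms and of the hypersurface regularizer introduced above , the following python / numpy sketch evaluates them for a generic blurring operator given as a function handle ; the names ( ls_fit , kl_fit , hs_reg ) and the periodic gaussian blur used in the example are our own choices , and the constants in the kl term follow the usual convention for which the divergence vanishes when hx + b coincides with y .

import numpy as np

def ls_fit(x, H, y, b):
    # least-squares fidelity 0.5 * || H x + b - y ||^2 (white gaussian noise)
    r = H(x) + b - y
    return 0.5 * np.sum(r * r)

def kl_fit(x, H, y, b, eps=1e-12):
    # generalized kullback-leibler divergence (poisson noise):
    # sum_i [ y_i log( y_i / (H x + b)_i ) + (H x + b)_i - y_i ]
    hx = H(x) + b
    return np.sum(y * np.log((y + eps) / (hx + eps)) + hx - y)

def hs_reg(x, delta):
    # hypersurface (smoothed total-variation) potential
    # sum_i sqrt( delta^2 + |(grad x)_i|^2 ), periodic finite differences
    dx = np.roll(x, -1, axis=0) - x
    dy = np.roll(x, -1, axis=1) - x
    return np.sum(np.sqrt(delta**2 + dx**2 + dy**2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((32, 32))
    # hypothetical blur: periodic convolution with a gaussian transfer function
    u = np.fft.fftfreq(32)
    transfer = np.exp(-20.0 * (u[:, None]**2 + u[None, :]**2))
    H = lambda v: np.real(np.fft.ifft2(np.fft.fft2(v) * transfer))
    b = 0.1
    y = H(x) + b                      # noise-free data: both fidelities should be ~ 0
    print(ls_fit(x, H, y, b), kl_fit(x, H, y, b), hs_reg(x, delta=1e-2))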
among all possible choices ,first - order methods are particularly suited to deal with this kind of problems for several reasons .first , due to the large size of the images ( which becomes a crucial issue especially in 3d applications ) , the handling of the hessian matrix is an impractical task .then , first - order methods are used to quickly achieve solutions with low / medium accuracy , which is a general requirement in imaging problems .finally , when the optimization scheme is used as iterative regularization method to minimize the cost function , an excessively fast convergence makes the automatic choice of the stopping iteration a crucial issue , since a difference of few iterations from the one providing the best reconstruction can lead to substantial differences in the final images .+ in this paper we extend to the case of a general scaled gradient projection method a steplength selection rule recently proposed by fletcher in the unconstrained optimization framework and we test its effectiveness in image deblurring problems .this rule is based on the estimate of some eigenvalues of the hessian matrix which , for quadratic problems , can be achieved by means of a lanczos process applied to a certain number of consecutive gradients . since the scheme depends only on these stored gradients , it can be generalized to nonquadratic objective functions , showing very competitive results in several benchmark problems with respect to other first - order and quasi - newton methods .the extension to scaled gradient projection methods applied to non - negatively constrained problems requires a generalization of the matrix with the last gradients accounting for the presence of both the scaling matrix multiplying the gradient and the projection on the non - negative orthant .the resulting scheme consists in the storage of a set of scaled gradients ( instead of the usual ones ) in which some components of the gradients themselves are put equal to zero .our numerical experiments on the non - negative minimization of the ls distance and the kl divergence show that the proposed approach is able to compete with standard gradient methods and other recently proposed schemes , providing in some cases good reconstructions with a significantly lower number of iterations .+ the plan of the paper is the following : in section [ sec2 ] we recall the features of a scaled gradient projection method and , in particular , of the scaling matrix multiplying the gradient . in section [ sec3 ]we focus the analysis on the choice of the steplength parameter and we describe state - of - the - art strategies and our proposed rule . 
in section [ sec4 ] some numerical experiments on small quadratic programming ( qp ) andimage deblurring least - squares problems are presented , while in section [ sec5 ] we address the image deblurring problem with data perturbed with poisson noise also by adding an edge - preserving regularization term in the objective function .some ideas on a possible generalization of the proposed rule to different constraints are provided in section [ sec6 ] , together with a numerical test on the rudin - osher - fatemi model .our conclusions are given in section [ sec7 ] .a general scaled gradient projection ( sgp ) method for the solution of with differentiable function , is an iterative algorithm whose -th iteration is defined by where * ; * is the gradient of the objective function at iteration ; * ] , with ; * is a symmetric and positive definite scaling matrix with eigenvalues lying in a fixed positive interval ] , , is tridiagonal .taking into account equation and that the columns of are in the space generated by the above krylov sequence , we have , where is upper triangular and nonsingular , assuming is full - rank .it follows from that the tridiagonal matrix can be written as \gamma r^{-1}\ ] ] and , by introducing the vector , that is the vector that solves the linear system , we obtain \gamma r^{-1}.\ ] ] the eigenvalues of the tridiagonal matrix , called ritz values , are approximations of eigenvalues of and , since is the hessian matrix of the objective function , they give some second order information about problem . the steplength selection rule proposed by fletcher consists in exploiting the reciprocal of the ritz values as steplengths in the next iterations .we refer to for a detailed motivation of this steplength rule and we focus on the features crucial for the extension of the rule to nonquadratic objective functions and to constrained optimization problems .first of all we remark that allows one to obtain the matrix by simply exploiting the partially extended cholesky factorization = r^t [ r \quad { \boldsymbol{r}}],\ ] ] without the explicit use of the matrices and .this is important both for the computational point of view and for the extension to nonquadratic functions . for a general objective function, is upper hessenberg and the ritz - like values are obtained by computing the eigenvalues of a symmetric and tridiagonal approximation of defined as where and denote the diagonal and the strictly lower triangular parts of a matrix .possible negative eigenvalues of the resulting matrix are discarded before using this set of steplengths for the next iterations .several numerical experiments , for both quadratic and nonquadratic test problems , demonstrate that this new steplength selection rule is able to improve the convergence rate of steepest descent methods with respect to other , often used , possibilities for choosing the steplength .+ motivated by these promising results and taking into account that the convergence for the scaled gradient projection method is guaranteed for every choice of the steplength in a bounded interval , we tried to exploit the fletcher s steplength selection rule in the algorithms used for constrained optimization . in the extension of the original scheme to the sgp method ,the main change is the definition of a new matrix that generalizes the matrix in . in particular, we have to consider two fundamental elements : the presence of the scaling matrix multiplying the gradient direction and the projection onto the feasible set . 
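before turning to those two elements , it may help to prototype the unconstrained sweep recalled above : the last gradients are stored , the small gram matrix is factorized , the ( symmetrized ) hessenberg matrix is assembled and the reciprocals of its positive ritz - like values are reused as steplengths . the python sketch below is our own schematic version — it forms an explicit orthonormal basis with a qr factorization instead of the partially extended cholesky factorization , which gives the same ritz values but is not the memory - saving implementation described in the text .

import numpy as np

def ritz_steplengths(grads, alphas, eps=1e-12):
    # grads  : list [g_0, ..., g_m] of the last m+1 gradients, where g_{j+1} was
    #          obtained after a step of length alphas[j] along -g_j
    # alphas : the m steplengths actually used
    # returns up to m new steplengths (reciprocals of the positive ritz-like values)
    G = np.column_stack(grads[:-1])
    # (g_j - g_{j+1}) / alpha_j equals hessian * g_j exactly in the quadratic case
    AG = np.column_stack([(grads[j] - grads[j + 1]) / alphas[j]
                          for j in range(len(alphas))])
    Q, R = np.linalg.qr(G)                       # orthonormal basis of span(G)
    T = Q.T @ AG @ np.linalg.inv(R)              # upper hessenberg in exact arithmetic
    low = np.tril(T, -1)                         # keep diagonal + lower part, mirror it
    T_sym = np.diag(np.diag(T)) + low + low.T
    ritz = np.linalg.eigvalsh(T_sym)
    ritz = ritz[ritz > eps]                      # discard nonpositive values
    return np.sort(1.0 / ritz)                   # one possible ordering for the next sweep

if __name__ == "__main__":
    # toy strictly convex quadratic 0.5 x^T A x : the ritz values approximate
    # eigenvalues of A after a few plain gradient steps with a fixed steplength
    A = np.diag([1.0, 5.0, 20.0, 50.0])
    x = np.array([1.0, 1.0, 1.0, 1.0])
    grads, alphas = [A @ x], []
    for _ in range(3):
        x = x - 0.01 * grads[-1]
        grads.append(A @ x)
        alphas.append(0.01)
    print(1.0 / ritz_steplengths(grads, alphas))   # approximates a subset of diag(A)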
as concerns the former issue, we exploit the remark that each scaled gradient iteration can be viewed as a usual gradient iteration applied to a scaled objective function by means of a transformation of variables of the type , where the notation indicates the square root matrix of . this idea led us to store at each iteration the scaled gradient instead of .the non - negativity constraint is addressed by looking at the complementarity condition of the kkt optimality criteria , for which the components of the gradient related to inactive constraints in the solution have to vanish . to this aim, we emphasized the minimization over these components by storing the vectors whose -th entry is given by & { \rm{if } } \x^{(k)}_j > 0 .\end{cases}\ ] ] driven by the previous considerations , our implementation of fletcher s rule for the constrained case is based on the following choice for the matrix : .\ ] ] as concerns the computational cost of the steplength derivation , each group of iterations ( called _ sweep _ in ) requires the computation of the scaled gradients and the symmetric matrix , which can be performed with vector - vector products .since is typically a very small number ( between 3 and 5 ) , the cholesky factorization of and the solution of the linear system are straightforward .it is worth noting that the computation of either the bb1 or the bb2 steplength for iterations needs 3 vector - vector products .therefore , if we assume for example , then both the generalization of the limited memory approach and each bb steplength can be computed in products , while the computational cost grows up to for any alternating strategy of the two bb rules .+ in the next sections we present the benefits that can be gained by using the steplength selection rule based on the ritz values adapted to the constrained optimization in the image reconstruction framework .in this section we report the results of several numerical experiments we carried out on constrained qp problems in order to validate the efficacy of the limited memory selection rule .first we show few tests on the minimization of a quadratic function of 20 variables , with the analysis of the behaviour of three steplengths when varying some features of the optimization problem . then we present realistic experiments of imaging problems with a comparison of several scaled and nonscaled gradient projection methods .all the numerical experiments have been performed by means of routines implemented by ourselves in matlab r2010a and run on a pc equipped with a 1.60 ghz intel core i7 in a windows 7 environment .the aim of this section is to investigate possible dependencies of the results provided by a ( s)gp method with different steplengths on the features of the quadratic problem to be addressed , as the distribution of the eigenvalues of the hessian matrix , the number of active constraints and the condition number .therefore , we built up some ad hoc tests to evaluate different selection rules for different choices of these parameters of the problem .in particular , we consider the minimization problem where : * we chose a vector and we defined the matrix as , where is an orthogonal matrix obtained by a qr factorization of a random matrix ; * we defined randomly the set of active constraints ; * we defined the vector of lagrange multipliers by setting if and if . 
in a similar way , we defined the solution of the problem by setting if and random in if ; * we defined the vector .the generalization of the limited memory ( ritz ) steplength to the constrained case has been compared to the abb and bb1 values , where in the former case we used the generalized adaptive alternation rule proposed in .for all the three algorithms we exploited both a monotone and a nonmonotone linesearch to determine the parameter . in the latter case ,the sufficient decrease at each iteration is evaluated with respect to the maximum of the objective function on the last iterations . in the limited memory rule ,the number of back stored gradient has been set equal to 3 .following , we started by considering and we investigated possible choices of the scaling matrix for the minimization problem .the number of active constraints has been set equal to 8 .we remark that , since in our tests has also negative entries , the scaling matrix provided by the splitting of in section 2 is not applicable .possible scaling matrices are given by : * the inverse of the diagonal of : , which for the quadratic case is equivalent to apply a nonscaled gradient projection method to a preconditioned version of the minimization problem ; * the scaling matrix proposed by coleman and li for interior trust region approaches applied to nonlinear minimization problems subject to box constraints : , where if and if ; * the current iteration : .the diagonal entries of all the scaling matrices have been projected in the range $ ] to guarantee the convergence of the schemes . in order to avoid the dependency of the analysis on the stopping criterion used , in table [ tabg1 ] we reported the number of iterations required by the different algorithms to reach a relative reconstruction error ( rre ) lower than prefixed thresholds ( e.g. , , , ) .the performances with the trivial scaling matrix are also reported .+ .numbers of iterations required by sgp equipped with the limited memory ( ritz ) , abb and bb1 steplengths to reach rres lower than , and for different scaling matrices ( see text ) .the results obtained with a monotone ( ) and nonmonotone ( ) linesearch are reported .the asterisk denotes the maximum number of iterations allowed .[ cols="^,^,^,^,^,^,^,^ " , ] and the minimum value provided by the different methods for the test problem g. 
] the results presented both in table [ tab4 ] and in figure [ f_fmin2 ] confirm the goodness of the suggested limited memory steplength selection scheme for a gradient projection method with respect to standard approaches also for a constrained optimization problem where the feasible set is different from the simple non - negative orthant .in this paper we considered a first - order method for the minimization of non - negatively constrained optimization problems arising in the image reconstruction field , and we introduced a new strategy for the steplength selection which generalizes a rule recently proposed in the unconstrained optimization framework .the steplength value is based on the storage of a limited number of consecutive objective function gradients and we showed how it can be extended to account for the presence of both a scaling matrix multiplying the gradient of the objective function and a non - negative constraint on the pixels of the unknown image .we first tested our rule in the minimization of a quadratic function with different features , and we showed that the limited memory steplength is extremely competitive with respect to state - of - the - art bb - like choices .similar conclusions can be drawn by the numerical experiments we carried out on image reconstruction problems where the measured images are affected by either gaussian or poisson noise .a final test on the rof model showed the potentiality of the proposed rule also in optimization problems with different constraints .+ thanks to the significant reduction of the iterations achievable by the proposed steplength , in our future work we will consider the application of our new scheme to real - world imaging problems , as the reconstruction of x - ray images of solar flares starting from the emitted radiation and the deblurring of conventional stimulated emission depletion ( sted ) microscopy images of sub - cellular structures in fixed cells .moreover , the proposed rule will be tested also within a sgp method where the sequence of scaling matrices converges to the identity , since in this case strong convergence results have been recently proved under mild convexity assumptions .this work has been partially supported by the italian spinner 2013 phd project `` high - complexity inverse problems in biomedical applications and social systems '' and by miur ( italian ministry for university and research ) , under the projects firb - futuro in ricerca 2012 , contract rbfr12m3ac , and prin 2012 , contract 2012mte38n .the italian gncs - indam ( gruppo nazionale per il calcolo scientifico - istituto nazionale di alta matematica ) is also acknowledged .40 acar , r. , vogel , c.r . :analysis of bounded variation penalty methods for ill - posed problems .inverse probl .10(6 ) , 12171229 ( 2004 ) bardsley , j.m . ,goldes , j. : regularization parameter selection methods for ill - posed poisson maximum likelihood estimation .inverse probl .25(9 ) , 095005 ( 2009 ) barzilai , j. , borwein , j.m . :two - point step size gradient methods .i m a j. numer .8(1 ) , 141148 ( 1988 ) bertero , m. , boccacci , p. , talenti , g. , zanella , r. , zanni , l. : a discrepancy principle for poisson data .inverse probl .26(10 ) , 105004 ( 2010 ) bertero , m. , lantri , h. , zanni , l. : iterative image reconstruction : a point of view . in : censor , y. , jiang , m. , louis , a.k .mathematical methods in biomedical imaging and intensity - modulated radiation therapy , pp .edizioni della normale , pisa ( 2008 ) bertsekas , d. 
: nonlinear programming .athena scientific , belmont ( 1999 ) bertsekas , d. : convex optimization theory .supplementary chapter 6 on convex optimization algorithms , 2 december 2013 edn .athena scientific , belmont ( 2009 ) birgin , e.g. , martinez , j.m ., raydan , m. : inexact spectral projected gradient methods on convex sets .i m a j. numer .23(4 ) , 539559 ( 2003 ) bonettini , s. , landi , g. , loli piccolomini , e. , zanni , l. : scaling techniques for gradient projection - type methods in astronomical image deblurring .j. comput .90(1 ) , 929 ( 2013 ) bonettini , s. , prato , m. : nonnegative image reconstruction from sparse fourier data : a new deconvolution algorithm .inverse probl .26(9 ) , 095001 ( 2010 ) bonettini , s. , prato , m. : accelerated gradient methods for the x - ray imaging of solar flares .inverse probl .30(5 ) , 055004 ( 2014 ) bonettini , s. , prato , m. : a new general framework for gradient projection methods .arxiv e - prints , 1406.6601 ( 2014 ) bonettini , s. , ruggiero , v. : an alternating extragradient method for total variation based image restoration from poisson data .inverse probl .27(9 ) , 095001 ( 2011 ) bonettini , s. , ruggiero , v. : on the convergence of primal - dual hybrid gradient algorithms for total variation image restoration . j. math .imaging vis .44(3 ) , 236253 ( 2012 ) bonettini , s. , zanella , r. , zanni , l. : a scaled gradient projection method for constrained image deblurring .inverse probl .25(1 ) , 015002 ( 2009 ) carlavan m. , blanc - fraud l. : regularizing parameter estimation for poisson noisy image restoration . international icst workshop on new computational methods for inverse problems , may 2011 , paris , france .chambolle , a. : an algorithm for total variation minimization and applications .imaging vis .20(12 ) , 8997 ( 2004 ) chambolle , a. , pock , t. : a first - order primal - dual algorithm for convex problems with applications to imaging . j. math .imaging vis .40(1 ) , 120145 ( 2011 ) coleman , t.f . , li , y. : an interior trust region approach for nonlinear minimization subject to bounds .siam j. optim .6(2 ) , 418445 ( 1996 ) cornelio , a. , porta , f. , prato , m. , zanni , l. : on the filtering effect of iterative regularization algorithms for discrete inverse problems .inverse probl .29(12 ) , 125013 ( 2013 ) dai , y.h . , yuan , y.x . : alternate minimization gradient method .i m a j. numer .23(3 ) , 377393 ( 2003 ) daube - witherspoon , m.e . ,muehllener , g. : an iterative image space reconstruction algorithm suitable for volume ect .ieee t. med .imaging 5(2 ) , 6166 ( 1986 ) de asmundis , r. , di serafino , d. , riccio , f. , toraldo , g. : on spectral properties of steepest descent methods .i m a j. numer .33(4 ) , 14161435 ( 2013 ) de asmundis , r. , di serafino , d. , hager , w.w . , toraldo , g. , zhang , h. : an efficient gradient method using the yuan steplength .59(3 ) , 541563 ( 2014 ) fletcher , r. : a limited memory steepest descent method . math . program .135(12 ) , 413436 ( 2012 ) frassoldati , g. , zanghirati , g. , zanni , l. : new adaptive stepsize selections in gradient methods . j. ind . manage4(2 ) , 299312 ( 2008 ) golub , g.h . , van loan , c.f . : matrix computations , 3rd edn .john hopkins university press , baltimore ( 1996 ) grippo , l. , lampariello , f. , lucidi , s. : a nonmonotone line search technique for newton s method .siam j. numer .23(4 ) , 707716 ( 1986 ) hansen , p.c . 
: rank - deficient and discrete ill - posed problems .siam , philadelphia ( 1997 ) hansen , p.c . ,nagy , j.g . ,oleary , d.p . : deblurring images : matrices , spectra and filtering .siam , philadelphia ( 2006 ) harmany , z.t ., marcia , r.f . , willett , r.m .: this is spiral - tap : sparse poisson intensity reconstruction algorithms theory and practice . ieee t. image process .3(21 ) , 10841096 ( 2012 ) lantri , h. , roche , m. , aime , c. : penalized maximum likelihood image restoration with positivity constraints : multiplicative algorithms .inverse probl .18(5 ) , 13971419 ( 2002 ) lantri , h. , roche , m. , cuevas , o. , aime , c. : a general method to devise maximum likelihood signal restoration multiplicative algorithms with non - negativity constraints . signal process .81(5 ) , 945974 ( 2001 ) lucy , l. : an iterative technique for the rectification of observed distributions .j. 79(6 ) , 745754 ( 1974 ) nocedal , j. , wright , s.j . : numerical optimization , 2nd edn .springer , new york ( 2006 ) porta , f. , zanella , r. , zanghirati , g. , zanni , l. : limited - memory scaled gradient projection methods for real - time image deconvolution in microscopy . commun .nonlinear sci .21 , 112127 ( 2015 ) prato , m. , cavicchioli , r. , zanni , l. , boccacci , p. , bertero , m. : efficient deconvolution methods for astronomical imaging : algorithms and idl - gpu codes .539 , a133 ( 2012 ) prato , m. , la camera , a. , bonettini , s. , bertero , m. : a convergent blind deconvolution method for post - adaptive - optics astronomical imaging .inverse probl .29(6 ) , 065017 ( 2013 ) richardson , w.h . :bayesian based iterative method of image restoration .62(1 ) , 5559 ( 1972 ) rudin , l. , osher , s. , fatemi , e. : nonlinear total variation based noise removal algorithms .physica d 60(14 ) , 259268 ( 1992 ) ruggiero , v. , zanni , l. : a modified projection algorithm for large strictly - convex quadratic programs .j. optimiz .theory app .104(2 ) , 281299 ( 2000 ) setzer , s. , steidl , g. , teuber , t. : deblurring poissonian images by split bregman techniques. j. vis . commun .image r. 21(3 ) , 193199 ( 2010 ) vogel , c.r .: computational methods for inverse problems .siam , philadelphia ( 2002 ) yuan , y. : a new stepsize for the steepest descent method . j. comp .24 , 149156 ( 2006 ) zanella , r. , boccacci , p. , zanni , l. , bertero , m. : efficient gradient projection methods for edge - preserving removal of poisson noise .inverse probl .25(4 ) , 045010 ( 2009 ) zanella , r. , zanghirati , g. , cavicchioli , r. , zanni , l. , boccacci , p. , bertero , m. , vicidomini , g. : towards real - time image deconvolution : application to confocal and sted microscopy .rep . 3 , 2523 ( 2013 ) zhou , b. , gao , l. , dai , y.h .: gradient methods with adaptive step - sizes .35(1 ) , 6986 ( 2006 ) zhu , m. , wright , s.j . , chan , t.f . : duality - based algorithms for total - variation - regularized image restoration . comput47(3 ) , 377400 ( 2008 )
|
gradient methods are frequently used in large scale image deblurring problems since they avoid the onerous computation of the hessian matrix of the objective function . second order information is typically sought by a clever choice of the steplength parameter defining the descent direction , as in the case of the well - known barzilai and borwein rules . in a recent paper , a strategy for the steplength selection approximating the inverse of some eigenvalues of the hessian matrix has been proposed for gradient methods applied to unconstrained minimization problems . in the quadratic case , this approach is based on a lanczos process applied every iterations to the matrix of the gradients computed in the previous iterations , but the idea can be extended to a general objective function . in this paper we extend this rule to the case of scaled gradient projection methods applied to constrained minimization problems , and we test the effectiveness of the proposed strategy in image deblurring problems in both the presence and the absence of an explicit edge - preserving regularization term .
|
the theory of the phase - ordering dynamics following a rapid cooling down ( or quenching ) through the critical temperature from a homogeneous or disordered phase into an inhomogeneous or ordered state has been studied for decades .this phenomenon is known as spinodal decomposition .part of the fascination of the field is because the ordering does not occur instantaneously .instead , a network of domains of the equilibrium phases develops , and the typical length scale associated with these domains increases with the time . in order words , the length scale of the ordered regions growth with time as the different ( broken - symmetry ) phases compete in order to achieve the equilibrium state .one of the leading models devised for the theoretical study of this phenomenon is based on the cahn - hilliard formulation .the cahn - hilliard ( ch ) theory was originally proposed to model the quenching of binary alloys through the critical temperature but it has subsequently been adopted to model many other physical systems which go through a similar phase separation .recently it has been discovered that high critical temperature superconductors have an intrinsic inhomogeneous phase which exhibit patternings which involve nanoscale regions of phase separations , often referred as stripes as revealed by neutron diffraction studies and exafs .similar findings were measured by very fine scanning tunneling microscopy / spectroscopy ( stm / s ) data which have shown the local charge and the superconducting gap spatial variation through the differential conductance .the ubiquity of such intrinsic inhomogeneities , as well as the importance of the materials that exhibit them has motivated intense experimental and theoretical research into the details of the phenomena .recently a theory of the critical field based on a distribution of superconducting regions with different critical temperatures due to these intrinsic inhomogeneities has explained some nonconventional features of current data on some cuprates .the manganites which exhibit a colossal magnetoresistance have also some properties which may be linked with clusters formation upon cooling or an intrinsic phase separation .researches on these materials are currently been investigated by a sizable fraction the condensed matter community .therefore , studies based on the ch equation may be useful to understand the puzzle of the origins of such intrinsic phase separation in these materials and how it affects their physical properties . herewe want to study specifically the cahn - hilliard equation and we will deal with the problem of phase separation in high- superconductors in another publication . in an alloy system composed by a binary mixturewe can define the local phase variable as the difference in concentration between the two ( incompressible ) components or simply , the concentration of one of the components over the domain ( ) which represents the system .it is clear that this type of phase variable or order parameter is always conserved for an isolated system and we will use below this property as the main guide for our numerical method . 
in the ch theory , the time variation of the order parameter given in terms of the functional derivative of a time - dependent free - energy functional leading to an equation of motion to diffusive transport of the order parameter , namely where is the free - energy density and the ( constant ) mobility will hereafter be absorbed into the time scale although there are cases in which the mobility can be a function of the position .the free - energy functional is assumed , to most of the physical applications , to follow the ginzburg - landau ( gl ) form : where the potential may have a double - well structure like , for instance , as many authors have used and , for the sake of comparison with these previous works , we will adopt it below .there are other possibilities like which is more convenient for physical applications since , usually the gl free energy is a power expansion in with coefficients ( , , ... ) which depend on the temperature , the applied field , and on other physical properties .it is easy to see that the above double well potential will favor two phases with densities .if one uses a potential with three minima , it will appear three major phases and so on .bray pointed out that one can explore the fact that the order parameter is conserved and the ch equation can be written in the form of a continuity equation , , with the current . therefore we may write the ch equation as following , normally , because the fourth derivative term requires a large stencil and it is related with the size of the interface between regions of two different phase . to accurately resolve these interfacesa fine space discretization is necessary .the linear term is responsible for the interesting dynamics including the instability of constant solutions near and the nonlinear term is the one which mainly stabilizes the flow .as it has already been pointed out , both the and the nonlinear term make the ch equation very stiff and it is difficult to solve it numerically . the nonlinear term in principle , forbids the use of common fast fourier transform ( fft ) methods and brings the additional problem that the usual stability analysis like von newmann criteria can not be used .these difficulties make most of the finite difference schemes to use time steps of many order of magnitude smaller than and consequently , it is numerical expensive to reach the time scales where the interesting dynamics occur .this is the reason why numerical simulations based on runge - kutta schemes had to be performed in large supercomputers . in order to deal with such restrictions ,eyre developed a semi - implicit method which resolves the problems associated with both the stiffness and solvability .furthermore eyre proved that his algorithms are unconditionally gradient stable for the ch and also for the allen - cahn equation , which means that the free energy does not increase with time . 
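to fix the notation used in the rest of the paper , the short python / numpy sketch below evaluates the chemical potential and the right - hand side of the cahn - hilliard equation in 1d for the quartic double well ; the normalisation f(u) = (1 - u^2)^2 / 4 , for which f'(u) = u^3 - u and the two favoured phases sit at u = ±1 , and the conservative zero - flux discretisation are our own choices and may differ from the original by constant factors .

import numpy as np

def lap1d(u, h):
    # conservative zero-flux laplacian: the fluxes at the two walls are set to zero,
    # so the discrete integral of lap1d(u) vanishes identically
    flux = np.diff(u) / h
    return np.concatenate(([flux[0]], np.diff(flux), [-flux[-1]])) / h

def ch_rhs(u, h, gamma):
    # u_t = lap( mu ),   mu = f'(u) - gamma * lap(u),   f'(u) = u^3 - u
    mu = u**3 - u - gamma * lap1d(u, h)
    return lap1d(mu, h)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = 0.05 * rng.standard_normal(64)          # small fluctuations around u = 0
    rhs = ch_rhs(u, h=1.0, gamma=1.0)
    print(np.sum(rhs))                          # ~ 0: the dynamics conserves the mass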
both gradient stability and the conservation of massprovide us with a simple and rational form to establish the stability criterion for the ch equation that replaced the von neumann stability criteria .furthermore , mostly either finite difference calculations or monte carlo simulations uses periodic boundary conditions as a tentative to mimic a large system and we show here that , as concerns phase separation , that free boundary conditions are less stiff and more faster achieved .the goal of this paper is to make a combinations of a systematic study of the ch equation in 1d , 2d and 3d using a simplification of the eyre s method and free boundary conditions . to give a better perspective to the approach ,we make also calculations with a crank - nicholson like ( cn ) implicitly scheme , which is unconditionally convergent in 1d and , using the concept of alternating direction interaction ( adi ) method , in 2d .these calculations demonstrate the advantage of the eyre s method over the cn scheme .therefore we apply eyre s method in 3d and study the phenomenon of spinodal decomposition in three dimensions .the application to the high superconductors and manganites with the study of the relevant parameters to their phase diagrams is under current investigation and will be discussed in a future work .in order to solve numerically eq.([eqch ] ) by means of a finite difference scheme , we need an initial condition over the entire domain , , which usually is a random function or some small fluctuations over a specific average and also some type of boundary conditions ( bcs ) . during our simulationswe have found that these initial conditions , the bcs and the size of the system greatly influence the solution .the general convenient and flux - conserving boundary conditions are where is the outward normal vector on the boundary of the domain which we represent by .these two equations together are equivalent to these bcs lead to the two very important properties which will be our guide to know whether our ch numerical solutions are convergent , namely , the conservation of the total mass of the system ; {\vec x\in\partial\omega}=0 \label{eqcm}\end{aligned}\ ] ] and the dissipation or decrease of the total energy ; ^ 2d\vec x\le 0 \label{eqde}\end{aligned}\ ] ] this last equation shows that appropriate solutions of the ch equation must dissipate energy and this is called gradient flow . therefore a time stepping finite difference scheme is defined to be _ gradient stable _ only if the free energy does not increase with the time , i.e. , obeys eq.([eqde ] ) .since it is not convenient to use the von neumann stability analysis , gradient stability is regarded as the best stability criterion for finite difference numerical solutions of the ch equation .furthermore , unconditional gradient stability means that the conditions for gradient stability is satisfied for any size of time step and this will be our guide through the simulations .this way to examine the stability of a central difference scheme by following the energy was already proposed to nonlinear problems a long time ago by park . 
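in practice these are the two quantities followed along every run ; a minimal python / numpy version of the monitors ( with the same quartic well as above and one - sided face differences for the gradient term , both our own implementation choices ) is given below — a gradient stable run must keep the first output constant and never let the second one increase from one step to the next .

import numpy as np

def total_mass(u, h):
    # discrete version of the integral of u over the domain
    return h * np.sum(u)

def free_energy(u, h, gamma):
    # discrete ginzburg-landau functional: sum_i [ f(u_i) + (gamma/2) |grad u|_i^2 ] h,
    # with f(u) = (1 - u^2)^2 / 4 and one-sided face differences for the gradient
    f = 0.25 * (1.0 - u**2) ** 2
    grad = np.diff(u) / h
    return h * np.sum(f) + 0.5 * gamma * h * np.sum(grad**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = 0.05 * rng.standard_normal(128)
    print(total_mass(u, h=1.0), free_energy(u, h=1.0, gamma=1.0))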
we should mention that a similar analysis , based also on the eyre s approach , performing numerical tests of stability and with a very complete classification scheme for the stable values of for the ch and allen - cahn equation in 2d was recently developed .eyre has proposed a semi - implicit method that is unconditional gradient stable if the is the usual two minima potential used in a typical gl free energy and can be divided in two parts : where is called contractive and is called expansive .he showed that it is possible to achieve unconditional gradient stability if one treats the contractive part implicitly and the expansive part explicitly . in our case , since we are using , we have : to implement eyre s scheme we define to be the approximation to at location , , and , where , and . is the linear size of the system , assuming to be cubic , for simplicity . with these definitions , we can write the method for the ch equation as in this equation the standard centered difference approximation of the 3d laplacian operator is which is second order in the spatial step .the eq.([eqche ] ) represents a large coupled set of nonlinear equations due to the cubic term .the way to go around this problem is to splitting or to linearize it at every time step .consequently the term is transformed into leading the eq.([eqche ] ) into a set of linear equations in the step of time .it has been argued that this nonlinear splitting has the smallest local truncation error and therefore it will be the scheme adopted in this work .then we finally obtain the proposed finite difference scheme for the ch equation which is linear ( in the above sense ) , namely , or , separating in different times to see the semi - implicit character of the approach , as noted by furihata et al , the discrete associated boundary conditions equivalent to eq.([eqbc ] ) , second order in the spatial variable , become and similar equations for the boundary conditions over the second and third indices representing the and -directions , respectively . noticethat , differently than most numerical simulations applied to physical systems , these boundary conditions are not periodical .imposing periodical bcs which is common in physical applications and which is largely used to artificially simulate a larger system , would bring additional constraint to the solutions and , as we show below , the solutions will be more stiff .therefore the above boundary conditions minimize the finite size effects and should be preferably used as shown below .the eq.([eqchet ] ) with the above boundary conditions define the finite difference scheme which we use in this present work . 
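putting the pieces together , one marching step of the linearised semi - implicit scheme can be prototyped as follows . the quartic double well and the standard linearisation (u^{k+1})^3 ≈ 3 (u^k)^2 u^{k+1} - 2 (u^k)^3 are assumed here ; the particular conservative zero - flux laplacian matrix and the dense direct solve ( only practical for small 1d grids ) are our own simplifications and are not taken verbatim from the scheme above .

import numpy as np

def neumann_laplacian(n, h):
    # conservative zero-flux 1d laplacian matrix: zero row and column sums,
    # so the implicit update below preserves the total mass to round-off
    L = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            L[i, i - 1] = 1.0
            L[i, i] -= 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
            L[i, i] -= 1.0
    return L / h**2

def eyre_step(u, L, dt, gamma):
    # one linearised eyre step for u_t = lap( u^3 - u - gamma lap u ):
    # the contractive term u^3 (linearised around u^k) and the biharmonic term
    # are treated implicitly, the expansive term -u explicitly
    n = u.size
    A = np.eye(n) - dt * L @ np.diag(3.0 * u**2) + dt * gamma * (L @ L)
    rhs = u + dt * L @ (-u - 2.0 * u**3)
    return np.linalg.solve(A, rhs)

if __name__ == "__main__":
    n, h, gamma, dt = 100, 1.0, 1.0, 1.0        # dt far beyond any explicit-euler limit
    L = neumann_laplacian(n, h)
    rng = np.random.default_rng(1)
    u = 0.05 * rng.standard_normal(n)
    m0 = h * u.sum()
    for _ in range(200):
        u = eyre_step(u, L, dt, gamma)
    print(m0, h * u.sum())                      # total mass is unchanged to round-off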
in the following section, we analyze the numerical results and compare with different cn semi - implicit approaches for several dimensions with the same double - well potential .we start with the study of the ch equation in 1d because it is faster then 2d and 3d and there are many results which can be used to compare with our simulations .the usual semi - implicit and explicit euler s schemes for the 1d ch equation are not gradient stable and require very short time intervals , while the implicit euler s scheme is gradient stable with .the cn - like scheme that we will use for comparison also suffers from this solvability restriction , and requires a minimum time interval .we performed calculations with linear chains of and sites and with , and .for all these cases we used which is clearly several order of magnitude larger than typical euler s schemes and , despite this large time step , the gradient stability is observed at all times . in fig.1we show the results for the total mass and the free energy in arbitrary units as function of the running time .in fact , using shorter times as shown in fig.1 , we observe that there is an initial transient period in the time evolution before gradient stability is achieved .this can be easily seen at at most small values of in fig.1 for either eyre s or cn - like methods .this transient before the stability takes place is connected to the finite size of the system , the initial conditions and the bcs .as already mentioned , in order to compare our calculations based on eyre s method with other schemes , we have also performed similar simulations with a widely used method , a semi implicit cn - like scheme .briefly , the cn method consists in an average of the right hand side of eq.([eqch ] ) at different times : at and at what clearly results in a semi - implicit scheme in time . according to the above formula , for , this scheme will converge for .indeed we can see in fig.1 that the cn simulations agree very well with the the eyre s results for the case with and a similar transient period before the stability is achieved is observed . to compare with other studies which use the same parameters for the ch equation, we used an initial condition for the linear chain similar to one used by furihata et al , namely , for and otherwise ( at the borders ) . this function is shown in fig.([figut1d ] ) .time steps ( abscissa ) by eyre s and crank - nicholson s scheme at different time steps for comparison.,height=264 ] now that the parameters which gradient stability is established in our system , we study the process of spinodal decomposition in 1d starting with the initial condition given by eq.([equ0 ] ) . using the long time step , we see for the different time profile in fig.2 , that at very few time evolution ( ) the system shows a tendency to decompose in one high and other low density phases . at difference in densities increases and the system separates in regions which almost reach the limit values . at and the system is almost entirely separated in the two phases . 
at limit configuration is reached and the spinodal decomposition is total with the low density phase at left and the high phase at right as seen in fig.2 .furihata et al have found a similar result starting from the same initial condition using a different method .larger systems behave in a similar fashion with the same sort of decomposition seen in fig.([figut1d ] ) .notice that the system separates in two regions with different density ( -1 at left near and + 1 at the other and at ) .if we had imposed periodic boundary conditions , it would impose an additional constraint and the solutions would be different than that two regions of fig.([figut1d ] ) , namely , three phase regions with one at the center and two of the same kind at the borders .this more complex final configuration takes more time steps to be realized as we checked , performing also calculations with periodic boundary conditions .thus , we demonstrate that the used `` free '' boundary conditions are more `` natural '' and faster than the periodic ones and this is _ an important finding which will be used in the 2d and 3d studies_. for different times starting at the initial condition ( ) till the total spinodal decomposition is attained at . the size of the system is .,height=264 ] now , let s use what we learned above and turn our attention to the 2d problem .we worked with systems of sizes of and and again with or .we adopt the eyre s scheme in the form of an alternating dimension implicit ( adi ) problem , that is , we used a half marching time step in one direction , say , in the -direction and another half time step in the -direction . since most finite difference methods for partial differential equations like the diffusion or heat equation in 2d uses the crank - nicholson scheme , we again made simulations with this scheme which is appropriate to be used in connection with the adi .in fig.3 we plot the results for the 2d mass and energy in arbitrary units .we used which is slightly bigger than the 1d value in order to have larger phase domains and we kept the other parameters equal .the initial conditions were chosen to be small variation around the average value although variation around zero give the same type of result .thus the initial condition is for toward the middle and with toward the boundaries of our square systems . the choice of a different initial condition than is to brake the symmetry and , in this way , better identify the formation of the two density phases ( see fig.([fig2d2000 ] ) below ) .studying the stability conditions , we see that the 2d system requires smaller values of than the 1d system .this is because the boundaries are much larger in 2d and the possibility to mass and energy flow is greatly enhanced and the instability during the initial transient period is larger .for instance , the initial instabilities in the is even bigger than in the system .the cn scheme requires at least which is around four times the minimum 1d value and the eyre s requires at least . in the simulations with the cn scheme ,the mass oscillates wildly up to steps and the energy has also a small increase near this number of marching time steps .the eyre s results for oscillate in the beginning transient of the simulations but become gradient stable after 50000 steps when the mass and the energy stabilize . 
on the other handthe simulations are gradient stable and does not display any transient oscillation since the beginning for but there is a large loss of average mass , as shown in fig.([fig4 ] ) .system by the crank - nicholson and eyre s scheme for and different time marching steps as specified in the legends.,height=245 ] as mentioned above the value of must be higher than that used in 1d in order to observe the phase separation phenomenon , otherwise the phase domains become very small . in order to study the phase separation through the 2d eyre s method ( with ) , we started with the initial condition of eq.([eq2d0bc ] ) . as expected from the above above analysis , around the time step the system start to separate into the density phases . at the time step the beginning of the spinodal separation is clear as shown in fig.([fig4 ] ) below . ) of the spinodal decomposition with the formation of islands with the values of -1 density in white in a background which tends to + 1 density ( in black ) because the initial conditions with broken symmetry . at the panelbelow , we see the system at a later time ( ) when it is already in equilibrium .the initial configuration is that given by eq.([eq2d0bc]).,title="fig:",height=340 ] ) of the spinodal decomposition with the formation of islands with the values of -1 density in white in a background which tends to + 1 density ( in black ) because the initial conditions with broken symmetry . at the panel below , we see the system at a later time ( ) when it is already in equilibrium .the initial configuration is that given by eq.([eq2d0bc]).,title="fig:",height=340 ] around the time step the the size of the low density islands reach an equilibrium size and the spinodal separation is complete but , the boundaries effects are large .we see a concentration of the low density phase at the borders and this effect prevents the system to reach a total phase separation as we have seen in 1d ( see fig.(2 ) ) .this shows how the boundary effects are very strong and more significant in 2d . after this timestep the system achieves a stable configuration without appreciable changes up to time steps .below we show a snap shot of this equilibrium configuration at .similar results were obtained with the system but in this case there is a proportional increase of the islands size and the equilibrium situation is similar to five times blow up of fig.([fig2d2000 ] ) .simulations with a larger value of the non - linear coefficient ( see discussion after eq.([eq2 ] ) ) enhances the mass flow inside the system and it is possible to achieve complete phase separation at an earlier time , exactly as seen in 1d , but with two phases with smaller values than . in fig.([fig2dlargeb ] ) we plot a situation with and . in this casethere is an easy mass flow through the system and an state of complete phase separation is reached around .notice that the system reaches the equilibrium with nonperiodical bcs . )but for coefficients and ( instead of and , which produces two easily separated phases with ) after the equilibrium is achieved ( ).,height=340 ] we turn now to analyze the 3d system .as we have already found in the 2d case , the problem of mass / energy flow is largely enhanced through the boundaries . 
to deal with such effects, we have to use a small ( compared with those used in the 2d case ) marching time step of with for the system to minimize errors in the derivatives near the boundaries .notice that this time step is much smaller than the one in 1d and one - two orders of magnitude smaller than in 2d .this is small but still feasible to reach the interesting dynamics even in a typical pc of 1ghz .since eyre s method is very efficient and faster than the majority of other methods used in 3d , this is the only approach that we used in 3d .the generalization of basic euler or runge - kutta methods to the ch equation in 3d are much more slower , and we can see why there are very few studies of the ch equation in 3d . indeedthe values of used in our 3d simulations are of the same order of magnitude of typical euler s method used in 1d simulation as we discussed in the beginning of this section .the adopted initial condition for the 3d system is : we used an average small initial mass of 0.01 just to minimize the mass flow through the boundaries and indeed the mass remains very stable for up to time steps , when it starts a slightly increase .system by the eyre s scheme for .,height=245 ] as concerns the energy , it remains stable for the first step of the simulations but the boundaries effects are manifested near the time step 4000 and increases up to step 13000 . during this transient period of time , although the mass is stable , the simulations are not , as one can conclude by the energy increase in this interval as shown in fig.([fig3dmass ] ) . after this time interval ( )the system stabilizes itself and remains gradient stable up to the rest of the calculations ( up to ) as can be seen by the decrease of the energy in fig.([fig3dmass ] ) .comparison with the 1d and 2d systems reveals clearly that the stability is more difficult to be established as the dimensionality increases , as it is natural when one uses any finite difference scheme . ) ) , we show at the top panel the beginning ( ) of the spinodal decomposition with the formation of small islands with the values because the initial condition with broken symmetry . at the down panel , the system at a later time ( ) during the process of phase separation.,title="fig:",height=340 ] ) ) , we show at the top panel the beginning ( ) of the spinodal decomposition with the formation of small islands with the values because the initial condition with broken symmetry . at the down panel , the system at a later time ( ) during the process of phase separation.,title="fig:",height=340 ] as fig.([fig3dmass ] ) shows , the system relax from the initial conditions and becomes uniform around . around spinodal decomposition is happening and the oscillation in the densities forms an almost uniform pattern as shown in the top panel of fig.([fig3da ] ) .this figure shows the configuration of the middle plane in the z - direction ( ) of the . at the down panel of fig.([fig3da ] ) we show a snap - shot of which shows the beginning of of the phase separation process as the time flows . above two phases start to segregate and this segregation process is smooth and unconditionally gradient stable as seen in fig.([fig3db ] ) . ) for many time steps later . on the top panelwe plot the case for and on the down panel the .,title="fig:",height=340 ] ) for many time steps later . on the top panelwe plot the case for and on the down panel the .,title="fig:",height=340 ] at much later time like the phase separation phenomenon is on the way . 
at phase separation is almost total in the plane passing by the middle of the system , as it is shown in the down panel of fig.([fig3db ] ) .we have shown in this work that the semi - implicit method due to eyre , which divides the potential in two parts and treats the contractive part implicitly and the expansive explicitly combined with free boundary conditions achieves unconditional gradient stability in 1d , 2d and 3d .the method also allows the use of very long marching time steps , compared with usual explicit or implicit euler s schemes , what is convenient because it captures very easily the short and long dynamics , characteristic of the cahn - hilliard equation .our simulations have demonstrated that gradient stability and spinodal decomposition is achieved faster than the normal euler and crank - nicholson methods what is highly deseirable specially in 3d where normal methods fail to capture the long dynamics due to the required very short time steps .the use of `` nonflow '' free bcs are more appropriated and also converges faster to a final equilibrium configuration than the widely used periodic bcs .we believe that the present systematic study may be pertinent to the several branchs of physics .the fast scheme developed here are suitable to the study of the spinodal dynamics and also to high correlated electron systems with phase separation .we expect therefore , to perform these works in the near future .we gratefully acknowledge partial financial aid from brazilian agencies cnpq and faperj .lifshitz and v.v.slyozov , j. phys .solids * 19 * , 35 ( 1961 ) .hohenberg and b.i halperin , rev .phys . * 49 * , 435 ( 1977 ) .gunton , m. san miguel , and p.s .sahni , in `` phase transitions and critical phenomena '' vol . 8 , edited by c. domb and j.l .lebowitz ( new york , academic press , 1983 ) .bray , adv . phys . * 43 * , 347 ( 1994 ) .cahn and j.e .hilliard , j. chem .phys , * 28 * , 258 ( 1958 ) .j.m.traquada , b.j .sternlieb , j.d , axe , y. nakamura , and s. uchida , nature ( london),*375 * , 561 ( 1995 ) .a. bianconi n. l saini , a. lanzara , m. missori , t. rossetti , h. oyanagi , h. yamaguchi , k. oka , t.ito , phys .rev . lett . *76 * , 3412 ( 1996 ) . c. howald , p. fournier , and a. kapitulnik , phys. rev . * b64 * , 100504 ( 2001 ). s. h. pan , j. p. oneal , r. l. badzey , c. chamon , h. ding , j. r. engelbrecht , z. wang , h. eisaki , s. uchida , a.k .gupta , k. w. ng , e. w. hudson , k. m. lang , j. c. davis nature , 413 , 282 - 285 ( 2001 ) and cond - mat/0107347 .ovchinnikov , s.a .wolf , v.z .kresin , phys . rev . *b63 * , 064524 , ( 2001 ) , and physica * c341 - 348 * , 103 , ( 2000 ) .de mello , m.t.d .orlando , e.s .caixeiro , j.l .gonzlez , and e. baggio - saitovich , phys . rev .* b66 * , 092505 ( 2002 ) .d. mihailovic , v.v .kabanov , k.a .mller , europhys . lett . * 57 * , 254 ( 2002 ) .de mello , e.s .caixeiro , and j.l .gonzlez , phys . rev . *b67 * , 024502 ( 2003 ) .see several articles in `` proceedings of the workshop on intrinsic multiscale structure and dynamics of complex eletronic oxides '' , i.c.t.p .bishop , s.r .shenoy , and s. sridhar , eds . ,world scientific , new jersey ( 2002 ) .caixeiro , j.l .gonzlez , and e.v.l .phys . rev . * b69 * , 024521 ( 2004 ) .e. dagotto , t. hotta and , a. moreo , phys . rep . * 344 * , 1 ( 2001 ) .e. dagotto , j. burgy , a. moreo , sol .. comm . * 126 * , 9 , ( 2003 ) .j.s kim , k. kang and j.s .lowengrub , `` conservative multigrid methods for cahn - hilliard fluids '' , j. comput . phys . 
in press d.j .eyre , `` unconditionally gradient stable time marching the cahn - hilliard equation '' preprint , ( 1998 ) , and `` an unconditional stable one - step scheme for gradient systems '' ( http://www.math.utah.edu/ eyre / research / methods / stable.ps ) ( 1998 ) .d. furihata and t. matuso , rims preprint no.1271 , kyoto university , ( 2000 ) .elliot and d.a .french , i m a j. appl .math , * 38 * , 97 ( 1987 ) b.p .vollmayr - lee and a.d .rutenberg , cond - mat/0308174 , ( 2003 ) .a. chakrabarti , r. toral , j.d .gunton , and m. muthukumar , phys .lett . * 63 * , 2072 ( 2001 ) .r. toral , a. chakrabarti and j.d .gunton , phys .rev.*a45 * , r2147 ( 1992 ) .a. milchev , d.w .heermann , and k. binder , acta metal 36 , 377 ( 1988 ) .ames , `` numerical methods for partial differential equations '' , academic press , new york ( 1977 ) . j. douglas jr . , r. duran and p. pietra , in `` numerical approximation of partial differential equations '' , e. ortiz , ed . , north - holland , amsterdam ( 1987 ) . k.c . park in `` computer and structure '' , vol.7 , pp 343 - 353 , pergamon press , london ( 1977 ) . j. wang , d.y .xing , j. dong and p.h .hor , phys . rev . *b62 * , 9827 ( 2000 ) .mathews , `` numerical methods for mathemathics , science , and engineering '' , prentice hall , englewood cliffs , n.j .elliot in `` mathematical models for phase change problems '' , j.f .rodriguez ed ., birkhuser verlag , basel ( 1989 ) .
|
the cahn - hilliard equation is related to a number of interesting physical phenomena such as spinodal decomposition , phase separation and phase - ordering dynamics . on the other hand this equation is very stiff , and the difficulty of solving it numerically increases with the dimensionality ; therefore , there are several published numerical studies in one dimension ( 1d ) , dealing with different approaches , but much fewer in two dimensions ( 2d ) . in three dimensions ( 3d ) there are very few publications , usually concentrated on some specific result without the details of the numerical scheme used . we present here a stable and fast conservative finite difference scheme to solve the cahn - hilliard equation with two improvements : a splitting of the potential into implicit and explicit in time parts , and the use of free boundary conditions . we show that gradient stability is achieved in one , two and three dimensions with larger time marching steps than in standard methods .
|
the fundamental properties and possible applications of non - classical light have been extensively investigated starting from the mid-70s . in the past decade such novel fields of application of non - classical light have emerged as quantum information and quantum imaging . quantum imaging uses spatially multimode non - classical states of light with quantum fluctuations suppressed not only in time , but also in space . the promising idea is to introduce the optical parallelism inherent in quantum imaging into various protocols of quantum information , such as quantum teleportation , quantum dense coding , quantum cryptography , etc . , thus increasing their information possibilities . the continuous variables quantum teleportation protocol proposed in and experimentally realized in has recently been extended to the teleportation of optical images in . quantum dense coding was first proposed and experimentally realized for discrete variables , _ qubits _ , and later generalized to continuous variables in . in this paper we propose a continuous variables quantum dense coding protocol for optical images . our scheme extends the protocol to an optical communication channel that is essentially multimode in space and time . this generalization exploits the inherent parallelism of optical communication and allows for simultaneous parallel dense coding of an input image with elements . in the case of a single spatial mode considered in one has . we calculate the shannon mutual information for a stream of classical input images in a coherent state . in this paper we assume arbitrarily large transverse dimensions of the propagating light beams and unlimited spatial resolution of the photodetection scheme . that is , we actually find an upper bound on the spatio - temporal _ density _ of the information stream in bits per . this density depends on the degree of squeezing and entanglement in the non - classical illuminating light . two sets of spatio - temporal parameters play an important role in our protocol : i ) the coherence length and the coherence time of spatially - multimode squeezing and entanglement , and ii ) the spatio - temporal parameters of the stream of input images . in our analysis we assume that the sender ( alice ) produces a uniform ensemble of images with gaussian statistics , characterized by a certain resolution in space and time ( alice 's grain ) . we demonstrate that the essentially multimode quantum communication channel provides much higher channel capacity than a single - mode quantum channel due to its intrinsic parallel nature . the density of the information stream is in particular limited by diffraction . we find that the role of diffraction can be partially compensated by lenses properly inserted in the scheme . an important difference between the classical communication channel ( i.e. with vacuum fluctuations at the input of the scheme instead of multimode entangled light ) and its quantum counterpart is that in the quantum case there exists an optimum spatial density of the signal image elements , which should be matched with the spatial frequency band of entanglement . in sec . ii we describe in detail the scheme of the dense coding protocol for optical images . the channel capacity of our communication scheme is evaluated in sec . iii . we make our conclusions in sec . iv . the properties of spatially - multimode squeezing relevant to our analysis are given in the appendix . the optical scheme implementing the protocol is shown in fig .
1 .compared to the generic continuous variables dense coding scheme , here the light fields are assumed to be spatially - multimode . [ schematic ] at the input , the spatially - multimode squeezed light beams with the slow field amplitudes and in the heisenberg representation , are mixed at the symmetrical beamsplitter . for properly chosen orientation of the squeezing ellipses of the input fields the scattered fields and are in the entangled state with correlated field quadrature components , as illustrated in fig . 1 . the classical signal image field is created by alice in the first beam by means , e. g. , of the controlled ( with given resolution in space - time ) mixing device with almost perfect transmission for the non - classical field . the receiver ( bob ) detects the entangled state of two beams by means of optical mixing on the symmetrical output beamsplitter and the homodyne detection of quadrature components of the output fields and .this allows for measurement of both quadrature components of the image field with effective quantum noise reduction . one can give a more straightforward explanation of the sub - shot - noise detection of the signal in the scheme shown in fig . 1 .for the symmetrical scattering matrix of the beamsplitters \{ r_nm } = ( rcr 1 & & 1 + 1 & & -1 + ) , [ beamsplitter ] and equal optical paths of two beams , the effective mach - zehnder interferometer directs the input squeezed field onto the detector , and similar for , thus allowing for sub - shot - noise detection of the squeezed quadrature components in both beams .the fields at the inputs of the homodyne detectors and are b_n(,t ) = s_n(,t ) + a(,t ) , [ output_field ] where . in the paraxial approximation ,the slow amplitude of light field is related to the creation and annihilation operators and for the plane waves with the transverse component of the wave vector and frequency by b_n(,t)= _ , b_n ( , ) e^i(-t ) .[ fourier_discrete ] in the case of large quantization volume with the transverse and longitudinal dimensions and , the summation is performed over the following values of and : , and with and taking the values .the free - field commutation relations are given by & = & _ n , n(-)(t - t ) , + & = & _ n , n_,_,. [ commutators ] the value of the irradiance ( in photons per ) is equal to , and the number of photons in the field mode ( ) , localized in the quantization volume , is .the observed photocurrent densities i_1(,t ) & = & b_0 , + i_2(,t ) & = & b_0 , [ currents ] have the following fourier amplitudes i_1 ( , ) & = & b_0 , + i_2 ( , ) & = & b_0 , [ currents_fourier ] where ( taken as real ) and are the local oscillator amplitudes used in the homodyne detection ( see the discussion in subsection [ subsection_shannon ] ) . 
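the sub - shot - noise readout described above can be illustrated with a single - mode toy model . the sketch below ignores the spatially multimode structure and all propagation factors ; quadrature variances are normalized so that the vacuum ( shot - noise ) level is 1 , and the squeezing parameter r and the message values are illustrative choices .

```python
import numpy as np

rng = np.random.default_rng(0)
r, n_trials = 1.0, 200_000            # squeezing parameter, number of samples (illustrative)

# two squeezed vacua with orthogonal squeezing ellipses (vacuum variance = 1)
x1 = rng.normal(0, np.exp(-r), n_trials); p1 = rng.normal(0, np.exp(+r), n_trials)
x2 = rng.normal(0, np.exp(+r), n_trials); p2 = rng.normal(0, np.exp(-r), n_trials)

# first 50/50 beamsplitter -> epr-entangled beams b1, b2
xb1, pb1 = (x1 + x2) / np.sqrt(2), (p1 + p2) / np.sqrt(2)
xb2, pb2 = (x1 - x2) / np.sqrt(2), (p1 - p2) / np.sqrt(2)

# alice displaces beam 1 by a classical message (ax, ap)
ax, ap = 3.0, -2.0
xb1, pb1 = xb1 + ax, pb1 + ap

# bob's 50/50 beamsplitter, then homodyne detection of x on output 1 and p on output 2
xc1 = (xb1 + xb2) / np.sqrt(2)        # equals x1 + ax/sqrt(2): squeezed noise
pc2 = (pb1 - pb2) / np.sqrt(2)        # equals p2 + ap/sqrt(2): squeezed noise

print("noise var of x readout:", xc1.var(), " (vacuum = 1, e^-2r =", np.exp(-2 * r), ")")
print("noise var of p readout:", pc2.var())
print("recovered message:", xc1.mean() * np.sqrt(2), pc2.mean() * np.sqrt(2))
```

both quadratures of the classical message are recovered with noise variance close to e^-2r rather than the vacuum unit , which is the single - mode analogue of the sub - shot - noise detection in both detection channels described above .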
here and in what follows we denote the fourier amplitudes of the fields and the photocurrent densities by the lower - case symbols .the squeezing transformation performed by the optical parametric amplifiers ( opas ) , illuminating the inputs of the scheme , can be written as follows : s_n ( , ) = u_n ( , ) c_n ( , ) + v_n ( , ) c_n^(-,- ) , [ squeezing ] where the coefficients and depend on the pump - field amplitudes of the opas , their nonlinear susceptibilities and the phase - matching conditions ( see appendix for definitions of the squeezing parameters ) .the input fields of the opas are assumed to be in vacuum state .after some calculation we obtain for the fourier amplitudes of the photocurrent densities : i_n ( , ) = b_0\{f_n(,)+a_n ( , ) } , [ currents_final ] where e^{-i\phi_1(\q,\omega)}c_1(\q,\omega)\,+\\\ ] ] , [ noise_1 ] and e^{-i\phi_2(\q,\omega)}c_2(\q,\omega)\,+\\\ ] ] , [ noise_2 ] represent the quantum fluctuations of the fields at both photodetectors , and a_1(,)&= & , + a_2(,)&= & , [ signals ] are the detected by bob components of the alice s signal image . here is the fourier transform of classical field , defined in analogy to ( [ fourier_discrete ] ) .in order to estimate the channel capacity one has to define the degrees of freedom of the noise and the signal in our spatially - multimode scheme .we shall assume that all elements of the scheme : opas non - linear crystals , beamsplitters , modulator and ccd matrices of detectors , have large transverse dimensions .the squeezed light fields are the stationary in time and uniform in the cross - section of the beams random variables .that is , all correlation functions of these fields are translationally invariant in the space . for the observed photocurrent densitiesthis implies that any pair of the fourier amplitudes ( [ noise_1 ] ) and ( [ noise_2 ] ) for given ( ) and ( ) result from squeezing of the input fields and and therefore is independent of any other pair . on the other hand ,since the observed photocurrent densities are real the fourier amplitudes and are not independent , while i_n ( , ) = i_n^(-,- ) .[ current_conjugate ] for this reason we consider as independent random variables only the noise terms in fourier amplitudes for .the real and imaginary parts of the complex amplitudes for are related to the amplitudes of the real photocurrent noise harmonics and , directly recovered by bob from his measurements .the fourier amplitudes of the photocurrent densities ( [ currents_final ] ) satisfy the relation ( [ current_conjugate ] ) and therefore it is sufficient to take into account only .the random signal sent by alice is assumed to be stationary and uniform in the cross - section of the beams .the amplitudes for , , are taken as independent complex gaussian variables with variance depending on .since the transformation ( [ signals ] ) is unitary , the fourier classical amplitudes for any ( ) are also statistically independent , and the quantity ^a(,)=|a(,)|^2 , [ signal_variance ] is the mean energy of alice s signal wave in the quantization volume , where . herethe statistical averaging is performed with the gaussian complex weight function ^a_,(a(,))= \{-}. 
[ gaussian ] in what follows we assume gaussian spectral profile of width for the ensemble of input images in spatial frequency domain , ^a ( , ) = ( 2)^3 ( - ) ( ) , ( ) = \ { 1/_a & ||_a/2 , + 0 & ||>_a/2 , .[ spectrum ] and , for the sake of simplicity , the narrow rectangular spectral profile of width and height in the temporal frequency domain . since _ , _a(,)= l^2tp , [ photon_density ] the total average density of photon flux in the image field per is .the variances of the observables are finally found in the form \{i_n ( , ) , i_n^(,)}_+= b_0 ^ 2 , [ variances ] where denotes the anticommutator .the quantum noise variances in both detection channels are given by _\{f_n ( , ) , f_n^(,)}_+ , [ noise_variances ] _1^ba(,)= e^2r_1(,)^2_1 ( , ) + e^-2r_1(,)^2_1 ( , ) , [ noise_variance_1 ] _ 2^ba(,)= e^-2r_2(,)^2_2 ( , ) + e^2r_2(,)^2_2 ( , ) .[ noise_variance_2 ] using these results we can evaluate the shannon mutual information for our dense coding scheme .it is well known that in the case of single - mode squeezed light field the statistics of its quadrature amplitudes are gaussian and can be characterized , e. g. , by a gaussian weight function in the wigner representation . in the homodyne detection of squeezed light ,the statistics of the photocounts are also gaussian due to the linear relation between the field amplitude and the photocurrent density .the discussion of the homodyne detection in terms of the characteristic function can be found in .some considerations for the homodyne detection of spatially multimode fields are presented in . in our quantum dense coding schemethe statistically independent degrees of freedom of the noise and the signal are labeled by the frequencies for .one can consider our quantum channel as a collection of the statistically independent parallel gaussian communication channels in the fourier domain .the mutual information between alice and bob for given detector and frequencies is defined as i^s_n(,)= h_n^b(,)-^a .[ information_def ] here is the entropy of bob s observable , and is the averaged over the ensemble of alice s signals entropy of noise , introduced by the channel . for a gaussian channel the mutual information is given by i^s_n(,)= ( 1 + ) .[ information_gaussian ] the quantum noise suppression within the frequency range of effective squeezing and entanglement increases the signal - to - noise ratio at the right side of ( [ information_gaussian ] ) .the total mutual information , associated with the large area and the large observation time , is defined as a sum over all degrees of freedom and is related to the _ density of the information stream in bits per _ : i^s=_n,,>0 i^s_n ( , ) = l^2tj , [ information_result ] where j = d_>0 d_ n=1,2 i^s_n ( , ) .[ information_density ] for qualitative and numerical analysis it is natural to associate such quantities as the density of the information stream and of the photon flux with the physical parameters present in our quantum dense coding scheme . squeezing and entanglement produced by the type - i optical parametric amplifiers ( opa s ) , are characterized by the effective spectral widths and in the spatial and temporal frequency domain . the coherence area in the cross - section of the beams and the coherence time are introduced as and . 
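for orientation , the per - mode mutual information of such a gaussian channel can be evaluated numerically . the sketch below assumes a noise variance of the form exp(2r)*sin(psi)**2 + exp(-2r)*cos(psi)**2 , so that the squeezed quadrature is aligned with the detected one for psi = 0 and the vacuum limit is recovered for r = 0 ; the exact assignment of the trigonometric factors , the log - base convention and the numerical values of r and of the signal power are illustrative assumptions , not taken from the paper .

```python
import numpy as np

def per_mode_information(signal_power, r, psi):
    """mutual information (bits) of one gaussian channel mode with squeezed noise.

    signal_power : mean signal power of the mode, relative to the shot-noise level 1
    r            : squeezing parameter of the illuminating light
    psi          : angle between the squeezed quadrature and the detected quadrature
    """
    noise = np.exp(2 * r) * np.sin(psi) ** 2 + np.exp(-2 * r) * np.cos(psi) ** 2
    return np.log2(1.0 + signal_power / noise)

p = 4.0                               # illustrative signal power per mode
print("vacuum (r = 0):        ", per_mode_information(p, 0.0, 0.0))
print("squeezed, aligned:     ", per_mode_information(p, 1.0, 0.0))
print("squeezed, anti-aligned:", per_mode_information(p, 1.0, np.pi / 2))
```

with aligned squeezing the per - mode information exceeds the vacuum value , while a misaligned ( anti - squeezed ) quadrature makes it worse than vacuum , which is why the orientation of the squeezing ellipses matters in what follows .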
for simplicitywe assume that both opa s have the same coherence area and coherence time .the correlation area and the correlation time of non - stationary images sent by alice , are related to the spectral widths of the signal and by and .we consider the broadband degenerate collinear phase matching in the traveling - wave type - i opa s .the coherence time of the spontaneous downconversion will be typically short compared to the time duration of the alice s movie frame .the dimensionless information stream and the dimensionless input photon flux are defined by , .that is , we relate both quantities to the time duration of the alice s movie frame and the coherence area of squeezing and entanglement .the optimum entanglement conditions in the opa s are given by r_1 ( , ) & = & r_2 ( , ) r ( , ) , + _ 1 ( , ) & = & _ 2 ( , ) /2 ( , ) , + ( 0,0)&=&/2 . [ orthogonal_ellipses ]we find the dimensionless information stream in the following form : = d \{1+p ( - ) } , [ information_final ] where ^ba(,0 ) = e^2r(,0)^2(,0 ) + e^-2r(,0)^2(,0 ) , and dimensionless spatial frequency is defined as .the relative spectral width of the alice s signal can be interpreted as the number of image elements per coherence length , i. e. the relative linear density of image elements . in what followswe assume a simple estimate , related to the diffraction spread of parametric downconversion light inside the opa crystal , where is the wavenumber and is the crystal length .quantum noise in the dense coding scheme is effectively reduced for optimum phase matching of squeezed beams . as shown in ,an important factor is the spatial - frequency dispersion of squeezing , that is , the -dependence of the phase of the squeezed quadrature component .this dependence is due to the diffraction inside the opa .a thin lens properly inserted into the light beam can effectively correct the -dependent orientation of squeezing ellipses , as illustrated in fig . 2 .= 3 ] without ( 2 ) and with ( 3 )phase correction.,title="fig:",width=302 ] [ inverse_variance ] in our plots for the mutual information density we keep constant the coherence area , the degree of squeezing , and the density of signal photons flux .the dependence of mutual information density on the relative linear density of the image elements is shown in fig .4 . without ( 2 ) andwith ( 3 ) phase correction .the density of signal photons is ( fig.4a ) , ( fig .4b).,title="fig:",width=302 ] without ( 2 ) and with ( 3 ) phase correction .the density of signal photons is ( fig.4a ) , ( fig .4b).,title="fig:",width=302 ] [ mutual_information ] for ( large image elements , ) , the mutual information density increases linearly with , since this means improvement of spatial resolution in the input signal .multimode quantum entanglement between two channels of the scheme results in much higher channel capacity compared to the classical limit ( vacuum noise at the input of the scheme ) . 
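the effect of the diffraction - induced rotation of the squeezing ellipses , and of its correction by a lens , can be mimicked by integrating the per - mode information over the dimensionless spatial frequency . in the sketch below the uncorrected orientation angle is taken as proportional to kappa**2 ( a quadratic phase being what free - space diffraction imprints ) , the corrected case has zero angle , and the squeezing roll - off , the flat signal spectrum and all numerical constants are illustrative assumptions rather than the actual expressions of the paper .

```python
import numpy as np

def info_density(b, r0=1.0, p=4.0, corrected=False, nk=2000):
    """approximate information per coherence area, integrated over 0 < kappa < b."""
    kappa = np.linspace(0.0, b, nk)
    r = r0 * np.exp(-kappa ** 2)                          # squeezing rolls off at high kappa (assumed)
    psi = np.zeros_like(kappa) if corrected else 0.5 * np.pi * kappa ** 2
    noise = np.exp(2 * r) * np.sin(psi) ** 2 + np.exp(-2 * r) * np.cos(psi) ** 2
    dk = kappa[1] - kappa[0]
    return np.sum(np.log2(1.0 + p / noise)) * dk

for b in (0.5, 1.0, 2.0, 4.0):        # b plays the role of image elements per coherence length
    print(f"b = {b:3.1f}   uncorrected: {info_density(b):5.2f}"
          f"   corrected: {info_density(b, corrected=True):5.2f}"
          f"   vacuum: {info_density(b, r0=0.0):5.2f}")
```

with these assumptions the phase - corrected curve stays above the vacuum limit , while the uncorrected one falls back towards ( or below ) it at moderate element densities , qualitatively reproducing the behaviour discussed around fig . 4 .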
on the other hand , for ( image elements much smaller than the coherence length ) , the effect of entanglement on the channel capacity is washed out and goes down to the vacuum limit .this is due to the fact that in the limit almost all spatial frequencies of the signal are outside the spatial - frequency band of the effective noise suppression , and the channel capacity is finally limited by vacuum noise .the phase correction of squeezing and entanglement significantly improves the channel capacity , since it brings the spatial frequency band of the effective noise suppression to its optimum value .it eliminates the destructive effect of the amplified ( stretched ) quadrature of the noise field at the higher spatial frequencies , as seen e. g. from fig .in this paper we have extended the continuous variables dense coding protocol proposed in onto the optical images and calculated the _ spatio - temporal density _ of the shannon mutual information .our multimode quantum communication channel provides much higher channel capacity due to its intrinsic parallel nature .we have considered the role of diffraction in our protocol and have found how to optimize its performance by means of a lens properly inserted into the scheme .we have shown that , by contrast to the classical communication channel , there exists an optimum spatial density of image elements , matched to the spatial - frequency band of squeezing and entanglement .the authors thank l. a. lugiato and c. fabre for valuable discussions .this work was supported by the network quantim ( ist-2000 - 26019 ) of the european union , by the intas under project 2001 - 2097 , and by the russian foundation for basic research under project 03 - 02 - 16035 .the research was performed within the framework of gdre `` lasers et techniques optiques de linformation '' .the main results for spatially - multimode squeezing are summarized in .the coefficients of the squeezing transformation ( [ squeezing ] ) satisfy the conditions & & |u_n(,)|^2 - |v_n(,)|^2 = 1 , [ conditions ] + & & u_n(,)v_n(-,- ) = u_n(-,-)v_n ( , ) , which are necessary and sufficient for preservation of the free - field commutation relations ( [ commutators ] ) .the spatial and temporal parameters of squeezed and entangled light fields essentially depend on the orientation angle of the major axes of the squeezing ellipses , _ n ( , ) = \{u_n ( , ) v_n(-,- ) } , [ psi ] and on the degree of squeezing , e^r_n ( , ) = |u_n(,)| is given by _n ( , ) = - \{u_n ( , ) v_n^*(-,-)}. [ phi ] in analogy to the single - mode epr beams , the multimode epr beams are created if squeezing in both channels is effective , and the squeezing ellipses are oriented in the orthogonal directions . 
for the type - i phase - matched traveling - wave opas ,the coefficients and are given by \big\ } \left[\cosh \gamma(\q,\omega ) + \frac{i \delta(\q,\omega)}{2 \gamma(\q,\omega ) } \sinh \gamma(\q,\omega)\right],\ ] ] v ( , ) = \{i } ( , ) .[ uandv ] here is the length of the nonlinear crystal , is the longitudinal component of the wave vector for the wave with frequency and transverse component .the dimensionless mismatch function is given by ( , ) = ( k_z(,)+k_z(-,-)-k_p)l ( 2 k - k_p)l+k_l ^2 - q^2 l / k , [ mismatch ] where is the wave number of the pump wave , in the degenerate case .we have assumed the paraxial approximation .the parameter is defined as ( , ) = , [ gamma ] where is the dimensionless coupling strength of nonlinear interaction , taken as real for simplicity .it is proportional to the nonlinear susceptibility , the length of the crystal , and the amplitude of the pump field .
|
we propose a quantum dense coding protocol for optical images . this protocol extends the earlier proposed dense coding scheme for continuous variables [ s. l. braunstein and h. j. kimble , phys . rev . a * 61 * , 042302 ( 2000 ) ] to an optical quantum communication channel that is essentially multimode in space and time . this new scheme allows , in particular , for parallel dense coding of non - stationary optical images . similar to some other quantum dense coding protocols , our scheme exploits the possibility of sending a classical message through only one of the two entangled spatially - multimode beams , using the other one as a reference system . we evaluate the shannon mutual information for our protocol and find that it is superior to the standard quantum limit . finally , we show how to optimize the performance of our scheme as a function of the spatio - temporal parameters of the multimode entangled light and of the input images . quant - ph/0501068
|
molecular communications ( mc ) via diffusion is one of the most promising approaches for communications among nanonetworks . in this approach , information is conveyed to a nano - receiver by the nano - transmitter 's choice of the concentration , type , or release time of the molecules diffused into the medium . the motion of the released molecules is described by a brownian motion process , which can be with or without drift . a molecule released from a transmitter can follow different trajectories before reaching a receiver . thus , diffusion - based communication suffers from intersymbol interference ( isi ) due to molecules from previous transmissions that follow longer trajectories before hitting the receiver . the effect of isi in diffusion - based mc has been studied extensively in the literature , with the general conclusion that isi reduces the performance in a communication setup consisting of one transmitter and one receiver . the authors in have proposed two modulation schemes , _ concentration shift keying ( csk ) _ and _ molecular shift keying ( mosk ) _ , where both suffer from the isi caused by molecules from previous transmissions . authors in have proposed the _ molecular concentration shift keying ( mcsk ) _ modulation scheme , where the main idea is to use distinct molecule types for consecutive time slots at the transmitter , thus effectively suppressing the isi . in , authors take advantage of a limited memory in the transmitter to propose an adaptive transmission rate scheme that reduces isi . an on - off mosk modulation scheme has been proposed in , which increases the transmission rate compared to mosk but still suffers from the isi . in , a modulation called run - length aware hybrid has been presented , which considers runs ( the same value occurring in several consecutive bits ) for encoding : the run - value is encoded in the molecule type and the run - length is encoded by csk . though suffering from the isi , this scheme is stated to improve the transmission rate in comparison with csk and mosk , considering an ideal channel with no transmission errors . authors in propose a pre - equalization method to mitigate isi , where the signal at the receiver is taken as the difference between the numbers of received molecules of each type , while two types of molecules are used at the transmitter . further , several channel coding schemes have been proposed to mitigate isi in diffusion - based mc , where additional bits are appended to the intended input codewords in order to compensate for errors from the isi or the noisy channel . _ motivation for isi - free modulations : _ given the generally negative effect of isi , it is of interest to design modulation or channel coding schemes that would prevent or mitigate the isi . to mitigate the isi , the transmitter needs to make sure that molecules of the same type are released at time instances that are sufficiently far apart . therefore , the transmissions will not have any superposition . this simplifies the receiver , who monitors the presence and intensity of various molecule types in the environment . furthermore , isi - free modulation schemes are more robust when the receiver knows little about the channel state information ( csi ) . a high receiver molecule intensity can be due either to a high transmission amplitude or to a very conductive channel .
in the lack of csi and the potential presence of isi , observing a certain intensity of a molecule type , the receiver has to meet the challenge of deciding whether the received molecule density is due to the superposition of various successive transmissions or due to temporary channel conductiveness . in this paper , we study isi - free modulation schemes for a time - slotted transmission model . the problem of finding the maximum achievable rate is converted into the design of a codebook consisting of strings of symbols , with each symbol indicating the type and concentration of a molecule transmission . the isi - free condition implies that symbols of the same type should not be used close to each other . this problem can be expressed in terms of the constrained coding problem . constrained coding has found applications in storage and communication systems ; its basic idea is to avoid transmission of input sequences that are more prone to error ( e.g. see chapter 1 of ) . similarly , in our setup , we would like to avoid input sequences that cause isi . achieving the constrained coding capacity requires blocklengths tending to infinity and complicated coding schemes . on the other hand , due to the limited resources available at the nano - level , only _ simple _ schemes are eligible for being implemented in practice . further , we desire short blocklengths ( limited delay ) and modest memory resources . our main contribution in this paper is to construct simple isi - free modulation schemes for a given maximum transmission delay . it is shown ( numerically ) that these simple schemes are near optimal , _ i.e. , _ they become close to the constrained coding capacity . the rest of this paper is organized as follows . in section [ secii ] , isi - free modulation is formally defined and modeled as a constraint graph . a summary of the notation used in this paper is given in table [ notation ] at the beginning of this section . section [ sec3 ] introduces the isi - free modulation capacity under limited delay ( blocklength ) . in section [ sec4 ] a class of modulation strategies based on a modified version of the constraint graph is proposed as a lower bound on the capacity . the numerical results and conclusions are provided in sections [ sec5 ] and [ sec6 ] , respectively . ( table [ notation ] : summary of the notation . ) we considered the problem of efficient - rate isi - free modulation in molecular communication under limited delay constraints . the isi - free condition was expressed by a constraint graph , and the maximum achievable rate of modulations constructed over this graph , i.e. , the isi - free capacity , is well known in the constrained coding literature . motivated by the limited resources in nanomachines , we then defined the isi - free capacity under delay constraints . afterwards , a suboptimal approach to determine the encoder and decoder functions ( modulation scheme ) that gives a lower bound on this capacity was proposed . the results show that these simple schemes are near optimal , i.e. , they become close to the constrained coding capacity . t. nakano , m.j . moore , f. wei , a.v . vasilakos , j. shuai , molecular communication and networking : opportunities and challenges , " __ieee transactions on nanobioscience__ , pp . 135 - 148 , 2012 . m.j . moore , t. suda , and k. oiwa , molecular communication : modeling noise effects on information rate " , _ ieee transactions on nanobioscience _ , vol . 8 , no . 2 , 2009 . b. atakan , s.
galmes , and ob .akan , nanoscale communication with molecular arrays in nanonetworks . " __ieee transactions on nanobioscience__ , vol .149 - 160 , 2012 .m. pierobon and i. f. akyildiz , a statistical - physical model of interference in diffusion - based molecular nanonetworks ." __ieee transaction on communications__ , vol .62 , no . 6 , 2014 .g. aminian , h. arjmandi , a. gohari , m. nasiri kenari u. mitra , capacity of diffusion based molecular communication networks in the lti - poisson model , " _ ieee international conference on communications _ , june 2015 .a. einolghozati , m. sardari , f. fekri , design and analysis of wireless communication systems using diffusion - based molecular communication among bacteria , " , _ ieee transactions on wireless communications _ , vol.12 , no.12 , pp.6096,6105 , december 2013 .h. arjmandi , a. gohari , m. nasiri - kenari and farshid bateni , diffusion based nanonetworking : a new modulation technique and performance analysis , " __ieee communications letters__ , vol .645 - 648 , 2013 .m. movahednasab , m. soleimanifar , a. gohari , m. nasiri kenari u. mitra , adaptive molecule transmission rate for diffusion based molecular communication , " _ ieee international conference on communications _ , june 2015 .pudasaini , subodh , seokjoo shin , and kyung sup kwak . run - length aware hybrid modulation scheme for diffusion - based molecular communication , " _14th international symposium on communications and information technologies ( iscit ) _ , 2014 .m. s. leeson , d. h. matthew , forward error correction for molecular communications , " __nano communication networks__, vol .161 - 167 , 2012 .p. y. ko , et al . a new paradigm for channel coding in diffusion - based molecular communications : molecular coding distance function , " __global communications conference ( globecom)__ , 2012 .shih , po - jen , chia - han lee , and ping - cheng yeh . channel codes for mitigating intersymbol interference in diffusion - based molecular communications , " __global communications conference ( globecom)__ , 2012 . c. e. shannon , a mathematical theory of communication , " _ the bell system technical journal _g. bocherer and r. mathar , matching dyadic distributions to channels , " _ ieee data compression conference ( dcc ) _ , 2011 .
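the constrained coding capacity invoked in this paper is , for any finite constraint graph , the base-2 logarithm of the largest eigenvalue of its adjacency matrix ( perron - frobenius theory ) . the toy sketch below computes it for an illustrative isi - free constraint -- two molecule types plus a silence symbol , with the rule that the same type may not be reused until a given number of slots has passed ; the alphabet and the gap value are examples only , not the exact system of the paper .

```python
import itertools
import numpy as np

def isi_free_capacity(n_types=2, gap=3):
    """bits per slot for: symbols = {silence, type 1..n_types};
    type i may be released only if at least `gap` slots passed since its last release."""
    # state = tuple of 'slots since last release' per type, capped at `gap`
    states = list(itertools.product(range(1, gap + 1), repeat=n_types))
    index = {s: i for i, s in enumerate(states)}
    adj = np.zeros((len(states), len(states)))

    def age(t):
        return min(t + 1, gap)

    for s in states:
        # silence: every counter just ages
        adj[index[s], index[tuple(age(t) for t in s)]] += 1
        # release type i: allowed only if its counter has reached `gap`
        for i, ti in enumerate(s):
            if ti >= gap:
                nxt = tuple(1 if j == i else age(t) for j, t in enumerate(s))
                adj[index[s], index[nxt]] += 1

    lam = max(abs(np.linalg.eigvals(adj)))
    return np.log2(lam)

for g in (1, 2, 3, 4):
    print(f"gap = {g}: capacity = {isi_free_capacity(gap=g):.3f} bits/slot"
          f"  (unconstrained log2(3) = {np.log2(3):.3f})")
```

for gap = 1 the constraint is vacuous and the capacity equals log2(3) ; as the enforced reuse gap grows , the achievable rate per slot shrinks , which is the trade - off that the delay - limited modulations of the paper try to approach with short blocklengths .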
|
a diffusion molecular channel is a channel with memory , as molecules released into the medium hit the receptors after a random delay . coding over the diffusion channel is performed by choosing the type , intensity , or release time of the molecules diffused into the environment over time . to avoid intersymbol interference ( isi ) , molecules of the same type should be released at time instances that are sufficiently far apart . this ensures that molecules of a previous transmission have faded in the environment before molecules of the same type are reused for signaling . in this paper , we consider isi - free time - slotted modulation schemes . the maximum reliable transmission rate for these modulations is given by the constrained coding capacity of the graph that represents the permissible transmission sequences . however , achieving the constrained coding capacity requires long blocklengths and delays at the decoder , making it impractical for simple nanomachines . the main contribution of this paper is to consider modulations with small delay ( short blocklength ) and to show that they get very close to the constrained coding capacity .
|
in real world perception , we are often confronted with the problem of selectively attending to objects whose features are intermingled with one another in the incoming sensory signal . in computer vision , the problem of scene analysis is to partition an image or video into regions attributed to the visible objects present in the scene . in audio there is a corresponding problem known as auditory scene analysis , which seeks to identify the components of audio signals corresponding to individual sound sources in a mixture signal . both of these problems can be approached as _ segmentation _ problems , where we formulate a set of `` _ _ elements _ _ '' in the signal via an indexed set of features , each of which carries ( typically multi - dimensional ) information about part of the signal . for images , these elements are typically defined spatially in terms of pixels , whereas for audio signals they may be defined in terms of time - frequency coordinates . the segmentation problem is then solved by segmenting elements into groups or partitions , for example by assigning a group label to each element . note that although _ clustering _ methods can be applied to segmentation problems , the segmentation problem is technically different in that clustering is classically formulated as a domain - independent problem based on simple objective functions defined on pairwise point relations , whereas partitioning may depend on complex processing of the whole input , and the task objective may be arbitrarily defined via training examples with given segment labels . segmentation problems can be broadly categorized into _ class - based _ segmentation problems , where the goal is to learn from training class labels to label known object classes , versus more general _ partition - based _ segmentation problems , where the task is to learn from labels of partitions , without requiring object class labels , to segment the input . solving the partition - based problem has the advantage that unknown objects could then be segmented . in this paper , we propose a new partition - based approach which learns _ embeddings _ for each input element , such that the correct labeling can be determined by simple clustering methods . we focus on the single - channel audio domain , although our methods are applicable to other domains such as images and multi - channel audio . the motivation for segmenting in this domain , as we shall describe later , is that using the segmentation as a mask , we can extract parts of the target signals that are not corrupted by other signals . since class - based approaches are relatively straightforward , and have been tremendously successful at their task , we first briefly mention this general approach . in class - based vision models , such as , a hierarchical classification scheme is trained to estimate the class label of each pixel or super - pixel region . in the audio domain , single - channel speech separation methods , for example , segment the time - frequency elements of the spectrogram into regions dominated by a target speaker , based either on classifiers or on generative models .
in recent years, the success of deep neural networks for classification problems has naturally inspired their use in class - based segmentation problems , where they have proven very successful .however class - based approaches have some important limitations .first , of course , the assumed task of labeling known classes fundamentally does not address the general problem in real world signals that there may be a large number of possible classes , and many objects may not have a well - defined class .it is also not clear how to directly apply current class - based approaches to the more general problem .class - based deep network models for separating sources require explicitly representing output classes and object instances in the output nodes , which leads to complexities in the general case .although generative model - based methods can in theory be flexible with respect to the number of model types and instances after training , inference typically can not scale computationally to the potentially larger problem posed by more general segmentation tasks .in contrast , humans seem to solve the partition - based problem , since they can apparently segment well even with novel objects and sounds .this observation is the basis of gestalt theories of perception , which attempt to explain perceptual grouping in terms of features such as proximity and similarity .the partition - based segmentation task is closely related , and follows from a tradition of work in image segmentation and audio separation .application of the perceptual grouping theory to audio segmentation is generally known as computational auditory scene analysis ( casa ) ._ spectral clustering _ is an active area of machine learning research with application to both image and audio segmentation .it uses local affinity measures between features of elements of the signal , and optimizes various objective functions using spectral decomposition of the normalized affinity matrix .in contrast to conventional _ central clustering _algorithms such as -means , spectral clustering has the advantage that it does not require points to be tightly clustered around a central prototype , and can find clusters of arbitrary topology , provided that they form a connected sub - graph . because of the local form of the pairwise kernel functions used , in difficult spectral clustering problems the affinity matrix has a sparse block - diagonal structure that is not directly amenable to central clustering , which works well when the block diagonal affinity structure is dense . the powerful but computationally expensive eigenspace transformation step of spectral clustering addresses this , in effect , by `` fattening '' the block structure , so that connected components become dense blocks , prior to central clustering .although affinity - based methods were originally unsupervised inference methods , multiple - kernel learning methods such as were later introduced to train weights used to combine separate affinity measures .this allows us to consider using them for partition - based segmentation tasks in which partition labels are available , but without requiring specific class labels . 
in ,this was applied to speech separation by including a variety of complex features developed to implement various auditory scene analysis grouping principles , such as similarity of onset / offset , pitch , spectral envelope , and so on , as affinities between time - frequency regions of the spectrogram .the input features included a dual pitch - tracking model in order to improve upon the relative simplicity of kernel - based features , at the expense of generality . rather than using specially designed features and relying on the strength of the spectral clustering framework to find clusters , we propose to use deep learning to derive embedding features that make the segmentation problem amenable to simple and computationally efficient clustering algorithms such as -means , using the partition - based training approach .learned feature transformations known as _embeddings _ have recently been gaining significant interest in many fields .unsupervised embeddings obtained by auto - associative deep networks , used with relatively simple clustering algorithms , have recently been shown to outperform spectral clustering methods in some cases .embeddings trained using pairwise metric learning , such as word2vec using neighborhood - based partition labels , have also been shown to have interesting invariance properties .we present below an objective function that minimizes the distances between embeddings of elements within a partition , while maximizing the distances between embeddings for elements in different partitions .this appears to be an appropriate criterion for central clustering methods .the proposed embedding approach has the attractive property that all partitions and their permutations can be represented implicitly using the fixed - dimensional output of the network .the experiments described below show that the proposed method can separate speech using a speaker - independent model with an open set of speakers at test time . as in , we derive partition labels by mixing signals together and observing their spectral dominance patterns . after training on a database of mixtures of speakers trained in this way , we show that without any modification the model shows a promising ability to separate three - speaker mixtures despite training only on two - speaker mixtures .although results are preliminary , the hope is that this work leads to methods that can achieve class - independent segmentation of arbitrary sounds , with additional application to image segmentation and other domains .we define as a raw input signal , such as an image or a time - domain waveform , and as a feature vector indexed by an element . in the case of images, typically may be a superpixel index and some vector - valued features of that superpixel ; in the case of audio signals , may be a time - frequency index , where indexes frames of the signal and frequencies , and the value of the complex spectrogram at the corresponding time - frequency bin .we assume that there exists a reasonable partition of the elements into regions , which we would like to find , for example to further process the features separately for each region . in the case of audio source separation ,for example , these regions could be defined as the sets of time - frequency bins in which each source dominates , and estimating such a partition would enable us to build time - frequency masks to be applied to , leading to time - frequency representations that can be inverted to obtain isolated sources . 
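a toy example of this partition - and - mask idea , using two synthetic sources in place of real speech ( all signal and stft parameters below are illustrative , not the ones used later in the experiments ) :

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
src1 = np.sin(2 * np.pi * 440 * t)                       # toy "source 1": steady tone
src2 = np.sin(2 * np.pi * (100 + 800 * t) * t)           # toy "source 2": upward chirp
mix = src1 + src2

win = 256
_, _, S1 = stft(src1, fs, nperseg=win)
_, _, S2 = stft(src2, fs, nperseg=win)
_, _, X = stft(mix, fs, nperseg=win)

# partition of the time-frequency elements: each bin belongs to the dominant source
labels = (np.abs(S2) > np.abs(S1)).astype(int)           # 0 -> source 1, 1 -> source 2
mask1, mask2 = (labels == 0), (labels == 1)

# masking the mixture and inverting recovers approximately isolated sources
_, est1 = istft(X * mask1, fs, nperseg=win)
_, est2 = istft(X * mask2, fs, nperseg=win)
n = min(len(est1), len(src1))
print("source 1 mse:", np.mean((est1[:n] - src1[:n]) ** 2))
print("source 2 mse:", np.mean((est2[:n] - src2[:n]) ** 2))
```

the partition label per time - frequency bin is all that is needed to build the masks ; estimating that partition from the mixture alone is exactly the task the proposed embeddings are trained for .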
to estimate the partition , we seek a -dimensional embedding , parameterized by , such that performing some simple clustering in the embedding space will likely lead to a partition of that is close to the target one . in this work, is based on a deep neural network that is a global function of the entire input signal ( we allow for a feature extraction step to create the network input ; in general , the input features may be completely different from ) .thus our transformation can take into account global properties of the input , and the embedding can be considered a permutation- and cardinality - independent encoding of the network s estimate of the signal partition .here we consider a unit - norm embedding , so that , where and is the value of the -th dimension of the embedding for element .we omit the dependency of on to simplify notations .the partition - based training requires a reference label indicator , mapping each element to each of arbitrary partition classes , so that if element is in partition . for a training objective ,we seek embeddings that enable accurate clustering according to the partition labels . to do this , we need a convenient expression that is invariant to the number and permutations of the partition labels from one training example to the next .one such objective for minimization is where is a weighted frobenius norm , with , where is an vector of partition sizes : that is , . inthe above we use the fact that .intuitively , this objective pushes the inner product to 1 when and are in the same partition , and to when they are in different partitions . alternately , we see from ( [ eq : obj_like_kmeans ] ) that it pulls the squared distance to 0 for elements within the same partition , while preventing the embeddings from trivially collapsing into the same point .note that the first term is the objective function minimized by -means , as a function of cluster assignments , and in this context the second term is a constant .so the objective reasonably tries to lower the -means score for the labeled cluster assignments at training time .this formulation can be related to spectral clustering as follows .we can define an ideal affinity matrix , that is block diagonal up to permutation and use an inner - product kernel , so that is our affinity matrix .our objective becomes , which measures the deviation of the model s affinity matrix from the ideal affinity .note that although this function ostensibly sums over all pairs of data points , the low - rank nature of the objective leads to an efficient implementation , defining : which avoids explicitly constructing the affinity matrix . in practice, is orders of magnitude greater than , leading to a significant speedup . to optimize a deep network , we typically need to use first - order methods. fortunately derivatives of our objective function with respect to are also efficiently obtained due to the low - rank structure : this low - rank formulation also relates to spectral clustering in that the latter typically requires the nystrm low - rank approximation to the affinity matrix , for efficiency , so that the singular value decomposition ( svd ) of an matrix can be substituted for the much more expensive eigenvalue decomposition of the normalized affinity matrix . 
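a small numerical check of the low - rank form of the objective ( the cluster - size weighting is omitted here for brevity , and the matrices are random stand - ins for the network output and the label indicators ) :

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 300, 20, 3                                # elements, embedding dim, partitions

v = rng.normal(size=(n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)       # unit-norm embeddings
y = np.eye(c)[rng.integers(0, c, size=n)]           # n x c partition indicator matrix

# naive o(n^2) objective: || v v^t - y y^t ||_f^2
naive = np.linalg.norm(v @ v.T - y @ y.T, "fro") ** 2

# low-rank form that never builds the n x n affinity matrix
lowrank = (np.linalg.norm(v.T @ v, "fro") ** 2
           - 2 * np.linalg.norm(v.T @ y, "fro") ** 2
           + np.linalg.norm(y.T @ y, "fro") ** 2)

print(naive, lowrank)                               # identical up to round-off

# the gradient with respect to v is equally cheap for this unweighted objective
grad = 4 * (v @ (v.T @ v) - y @ (y.T @ v))
print(grad.shape)
```

since d and c are orders of magnitude smaller than n ( the number of time - frequency bins ) , both the loss and its gradient stay far cheaper than anything that touches the full pairwise affinity matrix .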
rather than following spectral clustering in making a low - rank approximation of a full - rank model , our method can be thought of as directly optimizing a low - rank affinity matrix so that processing is more efficient and parameters are tuned to the low - rank structure . at test time , we compute the embeddings on the test signal , and cluster the rows , for example using -means .we also alternately perform a spectral - clustering style dimensionality reduction before clustering , starting with a singular value decomposition ( svd ) , , of normalized , where , sorted by decreasing eigenvalue , and clustering the normalized rows of the matrix of principal left singular vectors , with the row given by $ ] , similar to .we evaluate the proposed model on a speech separation task : the goal is to separate each speech signal from a mixture of multiple speakers . while separating speech from non - stationary noiseis in general considered to be a difficult problem , separating speech from other speech signals is particularly challenging because all sources belong to the same class , and share similar characteristics .mixtures involving speech from same gender speakers are the most difficult since the pitch of the voice is in the same range .we here consider mixtures of two speakers and three speakers ( the latter always containing at least two speakers of the same gender ) .however , our method is not limited in the number of sources it can handle or the vocabulary and discourse style of the speakers . to investigate the effectiveness of our proposed model , we built a new dataset of speech mixtures based on the wall street journal ( wsj0 ) corpus , leading to a more challenging task than in existing datasets .existing datasets are too limited for evaluation of our model because , for example , the speech separation challenge only contains a mixture of two speakers , with a limited vocabulary and insufficient training data . the sisec challenge ( e.g. , )is limited in size and designed for the evaluation of multi - channel separation , which can be easier than single - channel separation in general .a training set consisting of 30 hours of two - speaker mixtures was generated by randomly selecting utterances by different speakers from the wsj0 training set ` si_tr_s ` , and by mixing them at various signal - to - noise ratios ( snr ) between 0 db and 5 db .we also designed the two training subsets from the above whole training set ( _ whole _ ) , one considered the balance of the mixture of the genders ( _ balanced _ , 22.5 hours ) , and the other only used the mixture of female speakers ( _ female _ , 7.5 hours ) .10 hours of cross validation set were generated similarly from the wsj0 training set , which is used to optimize some tuning parameters , and to evaluate the source separation performance of the closed speaker experiments ( * closed speaker set * ) .5 hours of evaluation data was generated similarly using utterances from sixteen speakers from the wsj0 development set ` si_dt_05 ` and evaluation set ` si_et_05 ` , which are based on the different speakers from our training and closed speaker sets ( * open speaker set * ) .note that many existing speech separation methods ( e.g. 
, ) can not handle the open speaker problem without special adaptation procedures , and generally require knowledge of the speakers in the evaluation .for the evaluation data , we also created 100 utterances of three - speaker mixtures for each closed and open speaker set as an advanced setup .all data were downsampled to 8 khz before processing to reduce computational and memory costs .the input features were the log short - time fourier spectral magnitudes of the mixture speech , computed with a 32 ms window length , 8 ms window shift , and the square root of the hann window . to ensure the local coherency, the mixture speech was segmented with the length of 100 frames , roughly the length of one word in speech , and processed separately to output embedding based on the proposed model .the ideal binary mask was used to build the target when training our network .the ideal binary mask gives ownership of a time - frequency bin to the source whose magnitude is maximum among all sources in that bin .the mask values were assigned with 1 for active and 0 otherwise ( binary ) , making as the ideal affinity matrix for the mixture . to avoid problems due to the silence regions during separation , a binary weight for each time - frequency bin was used during the training process , only retaining those bins such that each source s magnitude at that bin is greater than some ratio ( arbitrarily set to -40 db ) of the source s maximum magnitude .intuitively , this binary weight guides the neural network to ignore bins that are not important to all sources .networks in the proposed model were trained given the above input and the ideal affinity matrix .the network structure used in our experiments has two bi - directional long short - term memory ( blstm ) layers , followed with one feedforward layer .each blstm layer has 600 hidden cells and the feedforward layer corresponds with the embedding dimension ( i.e. , ) .stochastic gradient descent with momentum 0.9 and fixed learning rate was used for training . in each updating step , a gaussian noise with zero mean and 0.6 variance was added to the weight .we prepared several networks used in the speech separation experiments using different embedding dimensions from to .in addition , two different activation functions ( logistic and tanh ) were explored to form the embedding with different ranges of . for each embedding dimension, the weights for the corresponding network were initialized randomly from the scratch according to a normal distribution with zero mean and 0.1 variance with the tanh activation and _ whole _ training set . in the experiments of a different activation ( logistic ) anddifferent training subsets ( _ balanced _ and _ female _ ) , the network was initialized with the one with the tanh activation and _ whole _ training set .the implementation was based on currennt , a publicly available training software for dnn and ( b)lstm networks with gpu support ( ) . 
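the corresponding feature and target preparation can be sketched as follows . the stft numbers match those quoted above ( 8 khz , 32 ms window , 8 ms shift , square - root hann window , 100 - frame segments , -40 db silence weight ) , but the two `` sources '' are random placeholders and the exact framing used by the authors may differ in detail .

```python
import numpy as np
from scipy.signal import stft, get_window

fs = 8000
nperseg, hop = 256, 64                       # 32 ms window, 8 ms shift at 8 khz
window = np.sqrt(get_window("hann", nperseg, fftbins=True))

def stft_and_logmag(x):
    _, _, s = stft(x, fs, window=window, nperseg=nperseg, noverlap=nperseg - hop)
    return s, np.log(np.abs(s) + 1e-8)

rng = np.random.default_rng(0)
s1 = rng.standard_normal(2 * fs)             # placeholder "source" signals
s2 = rng.standard_normal(2 * fs)
mix = s1 + s2

S1, _ = stft_and_logmag(s1)
S2, _ = stft_and_logmag(s2)
_, feat = stft_and_logmag(mix)               # network input: log |stft| of the mixture

# ideal binary mask target: each bin belongs to the source with the larger magnitude
ibm = (np.abs(S1) >= np.abs(S2)).astype(np.float32)

# binary weights: ignore bins more than 40 db below each source's own maximum
w1 = np.abs(S1) > np.abs(S1).max() * 10 ** (-40 / 20)
w2 = np.abs(S2) > np.abs(S2).max() * 10 ** (-40 / 20)
weights = (w1 & w2).astype(np.float32)

# non-overlapping 100-frame training segments
segments = [feat[:, i:i + 100] for i in range(0, feat.shape[1] - 100 + 1, 100)]
print(feat.shape, ibm.shape, weights.shape, len(segments))
```

the blstm itself would be trained on these segments with the low - rank objective sketched earlier , using the ideal binary mask as the partition indicator and the weights to down - weight silent bins .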
in the test stage ,the speech separation was performed by constructing a time - domain speech signal based on time - frequency masks for each speaker .the time - frequency masks for each source speaker were obtained by clustering the row vectors of embedding , where was outputted from the proposed model for each segment ( 100 frames ) , similarly to the training stage .the number of clusters corresponds to the number of speakers in the mixture .we evaluated various types of clustering methods : -means on the whole utterance by concatenating the embeddings for all segments ; -means clustering within each segment ; spectral clustering within each segment . for the within - segment clusterings , one needs to solve a permutation problem , as clusters are not guaranteed to be consistent across segments .for those cases , we report oracle permutation results ( i.e. , permutations that minimize the distance between the masked mixture and each source s complex spectrogram ) as an upper bound on performance .one interesting property of the proposed model is that it can potentially generalize to the case of three - speaker mixtures without changing the training procedure in section [ sec : train ] . to verify this , speech separation experiments on three - speaker mixtures were conducted using the network trained with two speaker mixtures , simply changing the above clustering step from 2 to 3 clustersof course , training the network including mixtures involving more than two speakers should improve performance further , but we shall see that the method does surprisingly well even without retraining . as a standard speech separation method ,supervised sparse non - negative matrix factorization ( snmf ) was used as a baseline . while snmf may stand a chance separating speakers in male - female mixtures when using a concatenation of bases trained separately on speech by other speakers of each gender, it would not make sense to use it in the case of same - gender mixtures . to give snmf the best possible advantage, we use an oracle : at test time we give it the basis functions trained on the actual speaker in the mixture . for each speaker, 256 bases were learned on the clean training utterances of that speaker .magnitude spectra with 8 consecutive frames of left context were used as input features . at test time, the basis functions for the two speakers in the test mixture were concatenated , and their corresponding activations computed on the mixture .the estimated models for each speaker were then used to build a wiener - filter like mask applied to the mixture , and the corresponding signals reconstructed by inverse stft .for all the experiment , performance was evaluated in terms of averaged signal - to - distortion ratio ( sdr ) using the ` bss_eval ` toolbox .the initial sdr averaged over the mixtures was db for two speaker mixtures and db for three speaker mixtures ..sdr improvements ( in db ) for different clustering methods .[ cols="<,^,^ " , ]as shown in table [ tab : k40 ] , both the oracle and non - oracle clustering methods for proposed system significantly outperform the oracle nmf baseline , even though the oracle nmf is a strong model with the important advantage of knowing the speaker identity and has speaker - dependent models . for the proposed system the open speaker performance is similar to the closed speaker results , indicating that the system can generalize well to unknown speakers , without any explicit adaptation methods . 
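at test time the clustering step itself is straightforward ; a sketch with scikit - learn , using a random matrix in place of the embeddings the trained blstm would produce ( shapes and the number of clusters are illustrative ) :

```python
import numpy as np
from sklearn.cluster import KMeans

n_frames, n_freqs, d, n_speakers = 100, 129, 40, 2

# stand-in for the network output: one unit-norm embedding per time-frequency bin
rng = np.random.default_rng(0)
v = rng.normal(size=(n_frames * n_freqs, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)

labels = KMeans(n_clusters=n_speakers, n_init=10, random_state=0).fit_predict(v)
labels = labels.reshape(n_frames, n_freqs)

# one binary mask per speaker, to be applied to the mixture stft and then inverted
masks = [(labels == k).astype(np.float32) for k in range(n_speakers)]
print([m.mean() for m in masks])   # fraction of bins assigned to each speaker
```

clustering per segment requires an extra permutation alignment across segments , whereas clustering the concatenated embeddings of the whole utterance avoids it , which is the distinction between the oracle and non - oracle numbers reported below .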
for different clustering methods ,the oracle -means outperforms the oracle `` spectral clustering '' by db showing that the embedding represents centralized clusters . to be fair ,what we call spectral clustering here is using our outer product kernel instead of a local kernel function such as a gaussian , as commonly used in spectral clustering .however a gaussian kernel could not be used here due to computational complexity . also note that the oracle clustering method in our experiment resolves the permutation of two ( or three in table [ tab : three_spk ] ) speakers in each segment . in the dataset, each utterance usually contains 6 segments so the permutation search space is relatively small for each utterance .hence this problem may have an easy solution to be explored in future work .for the non - oracle experiments , the whole utterance clustering also performs relatively well compared to baseline .given the fact that the system was only trained with individual segments , the effectiveness of the whole utterance clustering suggests that the network learns features that are globally important , such us pitch , timbre etc . in table[ tab : diff_k ] , the system completely fails , either because optimization of the current network architecture fails , or the embedding fundamentally requires more dimensions .the performance of , , are similar , showing that the system can operate in a wide range of parameter values .we arbitrarily used tanh networks in most of the experiments because the tanh network has larger embedding space than logistic network .however , in table [ tab : diff_k ] , we show that in retrospect the logistic network performs slightly better than the tanh one . in table[ tab : gender ] , since the female and male mixture is an intrinsically easier segmentation problem , the performance of mixture between female and male is significantly better than the same gender mixtures for all situations . as mentioned in section [ sec : exp ] , the random selection of speaker would also be a factor for the large gap . with more balanced training data ,the system has better performance for the same gender separation with a sacrifice of its performance for different gender mixture .if we only focus on female mixtures , the performance is still better .figure [ fig : embed ] shows an example of embeddings for two different mixtures ( female - female and male - female ) , in which a few embedding dimensions are plotted for each time - frequency bin in order to show how they are sensitive to different aspects of each signal . in table[ tab : three_spk ] , the proposed system can also separate the mixture of three speakers , even though it is only trained on two - speaker mixtures . 
as discussed in previous sections , unlike many separation algorithms , deep clusteringcan naturally scale up to more sources , and thus make it suitable for many real world tasks when the number of sources is not available or fixed .figure [ fig:3speakers ] shows one example of the separation for three speaker mixture in the open speaker set case .note that we also did experiments with mixtures of three fixed speakers for the training and testing data , and the sdr improvement of the proposed system is .deep clustering has been evaluated in a variety of conditions and parameter regimes , on a challenging speech separation problem .since these are just preliminary results , we expect that further refinement of the model will lead to significant improvement .for example , by combining the clustering step into the embedding blstm network using the deep unfolding technique , the separation could be jointly trained with embedding and lead to potential better result .also in this work , the blstm network has a relatively uniform structure .alternative architectures with different time and frequency dependencies , such as deep convolutional neural networks , or hierarchical recursive embedding networks , could also be helpful in terms of learning and regularization . finally , scaling up training on databases of more disparate audio types , as well as applications to other domains such as image segmentation, would be prime candidates for future work .
|
we address the problem of acoustic source separation in a deep learning framework we call `` deep clustering '' . rather than directly estimating signals or masking functions , we train a deep network to produce spectrogram embeddings that are discriminative for partition labels given in training data . deep network approaches provide great advantages in terms of learning power and speed , but it has previously been unclear how to use them to separate signals in a class - independent way . in contrast , spectral clustering approaches are flexible with respect to the classes and number of items to be segmented , but it has been unclear how to leverage the learning power and speed of deep networks . to obtain the best of both worlds , we use an objective function to train embeddings that yield a low - rank approximation to an ideal pairwise affinity matrix , in a class - independent way . this avoids the high cost of spectral factorization and instead produces compact clusters that are amenable to simple clustering methods . the segmentations are therefore implicitly encoded in the embeddings , and can be `` decoded '' by clustering . preliminary experiments show that the proposed method can separate speech : when trained on spectrogram features containing mixtures of two speakers , and tested on mixtures of a held - out set of speakers , it can infer masking functions that improve signal quality by around 6 db . we show that the model can generalize to three - speaker mixtures despite training only on two - speaker mixtures . the framework can be used without class labels , and therefore has the potential to be trained on a diverse set of sound types , and to generalize to novel sources . we hope that future work will lead to segmentation of arbitrary sounds , with extensions to microphone array methods as well as image segmentation and other domains .
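to make the affinity - based objective concrete , the following is a worked sketch of such a loss ; it relies on the algebraic identity || v v^t - y y^t ||_f^2 = || v^t v ||_f^2 - 2 || v^t y ||_f^2 + || y^t y ||_f^2 to avoid forming the n x n affinity matrices , and the shapes and names are illustrative assumptions rather than the authors ' implementation .

```python
import numpy as np

def deep_clustering_loss(V, Y):
    """Frobenius objective ||V V^T - Y Y^T||_F^2 without forming N x N matrices.

    V : (N, D) embeddings for N time-frequency bins
    Y : (N, C) one-hot source-membership indicators from the training labels
    (shapes and names are assumptions for illustration)
    """
    return (np.linalg.norm(V.T @ V, 'fro') ** 2
            - 2 * np.linalg.norm(V.T @ Y, 'fro') ** 2
            + np.linalg.norm(Y.T @ Y, 'fro') ** 2)
```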
|
the aei 10 m prototype is an ultra - low displacement noise facility , incorporating a large ultra - high vacuum system , excellent seismic isolation and a well - stabilized high - power laser source , and is intended to host a variety of interferometry experiments . one of these experiments is planned to be a fabry - perot michelson interferometer which is intended to operate at a purely quantum noise limited sensitivity in its detection band at hundreds of hertz . at a frequency of approximately 200 hz this instrument will be capable of reaching the standard quantum limit ( sql ) of optical interferometry for 100 g mirrors . by creating quantum correlations within this interferometer , e.g. by injecting squeezed vacuum , this limit can then even be surpassed . this will allow operating the interferometer at sub - sql sensitivity , a state of operation which has to date not been reached by any interferometry experiment . a schematic drawing of the original interferometer conceptual design configuration , which in the following we will refer to as the `` target configuration '' , is shown in figure [ fig : schematicsensitivitycombined ] , along with the anticipated noise spectral densities . it is evident that in the design of an instrument to reach the sql , quantum noise must dominate over the sum of the classical contributions , which must be minimized . the employment of advanced technologies , such as monolithic all - silica suspensions and ultra - low loss optics , as well as a rigorous optimization of all relevant parameters is obligatory to reduce the individual types of thermal noise to a tolerable level . as in the case of large - scale advanced gravitational wave ( gw ) detectors , coating brownian thermal noise is identified to be the most prominent classical noise source in the noise budget of the aei 10 m sub - sql interferometer . techniques have been developed to further increase the sensitivity of future gw detectors . these techniques , which are the subject of ongoing research , include modification of the optics ( e.g. tio - doping of tantala / silica coatings , or the use of waveguides instead of dielectric mirrors ) , changes in the optical technologies ( e.g. interferometry with higher order optical modes such as the lg mode ) , and cryogenic cooling of the optics . this article reports on a stepwise approach to reducing coating thermal noise by iteratively enlarging the beam spots on the interferometer 's arm cavity optics towards the technically feasible maximum . this goes hand in hand with pushing the arm cavities towards their geometric stability boundary . typically , the radii of the laser beams on the interferometer optics are chosen much smaller than the optics radii to avoid excessive diffraction loss and to ensure stability of the optical mode . however , the larger the mirror surface area which is illuminated , the smaller the resulting coating thermal noise contribution . this is reflected in the theoretical model of coating thermal noise given in . the use of extremely large laser beam spots is a key feature in the target configuration of the aei 10 m sub - sql interferometer to reduce coating thermal noise below the quantum noise level . this instrument is planned to be operated with beam spots with an equal radius of mm on all cavity mirrors , which have a radius of mm .
in this sense we regard our proposed setup as an intermediate , simplified configuration to pave the way to eventually building and operating our target configuration described in . the attempt to operate a fabry - perot michelson interferometer with extremely large beam spots on the cavity mirrors inevitably comes at the expense of poor resonator stability . the notion of stability of an optical resonator is closely connected to the existence of low - loss cavity eigenmodes . with the aid of the formalism introduced in , one can quantify the stability of an optical resonator as a function of the mirrors ' radii of curvature and their spatial separation . this measure is commonly referred to as the cavity 's _ g - factor _ . the problem that arises from marginally stable ( i.e. ) optical resonators is that even small - scale length perturbations or mirror curvature errors can render the instrument unstable . in an unstable resonator the property of self - consistency ( i.e. periodic re - focussing of the internal beam travelling back and forth between the mirrors ) of stable cavities is violated . a substantial fraction of the light is therefore lost for the interferometric measurement , preventing the internal light field from fully building up , which would be crucial to reaching the design sensitivity of the interferometer . furthermore , contrasting the case of stable resonators , heterodyne length control signals have been found to change their characteristics in marginally stable cavities . this has implications for lock acquisition , as the transient signals present during that process are altered by spurious offsets which arise when the cavity leaves the stable regime ( e.g. if a control actuator to adjust the cavity length imposes the tiniest amount of rotation on a mirror ) . even during a long lock , a brief disturbance could lead to instability which would produce offsets in the error signals of the control loops . when this happens the control loops will command inappropriate corrections which may throw the system out of lock . our laboratory environment , basically the vacuum system , imposes space constraints on the minimum and maximum arm length of our interferometer . in this respect , for the target configuration shown in figure [ fig : schematicsensitivitycombined ] , typical arm cavity lengths are of the order of m . another boundary condition with an impact on cavity lengths and mirror radii of curvature is the requirement for beam spots with a designated radius of mm , which stems from a trade - off between low coating thermal noise and diffraction loss . obeying these boundary conditions , calculations yield an arm cavity g - factor of typically . for such a configuration , a cavity length or radius of curvature ( roc ) error of only a few mm would be sufficient to render the cavity unstable .
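for concreteness , the standard two - mirror stability parameter g = g_1 g_2 with g_i = 1 - l / r_i can be evaluated numerically ; the short sketch below uses the nominal mirror curvature and arm lengths quoted later in the text ( r = 5.7 m , l = 10.8 m and 11.3952 m ) and is only meant to illustrate how a few millimetres of length error push a near - marginal cavity over the stability boundary , not to reproduce the design calculations .

```python
def g_factor(L, R_itm, R_etm):
    """Stability parameter g = g1 * g2 of a two-mirror cavity;
    the cavity is geometrically stable for 0 <= g <= 1."""
    return (1.0 - L / R_itm) * (1.0 - L / R_etm)

R = 5.7  # mirror radius of curvature in metres (nominal value from the text)

for L in (10.8, 11.3952):                 # initial vs. near-marginal arm length
    print(f"L = {L:7.4f} m  ->  g = {g_factor(L, R, R):.4f}")

# millimetre-scale length errors around the near-marginal configuration
for dL_mm in (0.0, 2.0, 5.0):
    g = g_factor(11.3952 + dL_mm * 1e-3, R, R)
    print(f"dL = {dL_mm:3.1f} mm  ->  g = {g:.5f}  ({'stable' if g < 1 else 'unstable'})")
```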
for the purpose of comparison , typical arm cavity stabilities of large scale interferometric gw detectors and the aei 10 m sub - sql interferometer are summarized in table [ table : typicalstabilities ] :

cavity length | input mirror roc | end mirror roc | cavity g - factor
advanced ligo : 3996 m | 1934 m | 2245 m | 0.832
advanced virgo : 3000 m | 1420 m | 1683 m | 0.871
et - b : 10000 m | 5070 m | 5070 m | 0.945
sub - sql ifo ( simplified design ) , initial configuration : 10.8 m | 5.7 m | 5.7 m | 0.8
sub - sql ifo ( simplified design ) , marginally stable configuration : 11.3952 m | 5.7 m | 5.7 m | 0.998

in this article we propose a _ stepwise approach towards the final beam size _ in order to ease the commissioning of the aei 10 m sub - sql interferometer . this approach will allow us to initially learn how to operate the interferometer with relatively small beam spots on the cavity optics and therefore more comfortable arm cavity stability . after having established stable operation in the initial configuration and gathering the required experience , we can then approach the marginally stable configuration by iteratively enlarging the beam size on the main mirrors towards its design value . for this stepwise approach to be feasible , it is crucial to find a way of increasing the beam size that does not require any major hardware changes , such as for instance replacing main optics . it would for example be impractical and too cost intensive to adjust the beam size in the arm cavities by swapping the main mirrors with ones with a different roc , especially as the mirrors feature monolithic suspension systems . [ figure fig : gfactor_vs_aclength ( caption fragment ) : the right end of the plot characterizes the marginally stable configuration of the aei 10 m sub - sql interferometer which features extremely large beam spots and a g - factor close to instability ; using exactly the same mirrors , but an arm cavity length shortened to m , the g - factor can be reduced to a comfortable value of while at the same time reducing the beam size of mm to . ] however , as the aei prototype infrastructure provides sufficient space to shift the positions of the main mirrors by up to about 1 m or 10% of the arm cavity length , we have the possibility to reduce the beam size on the optics without adjusting the main mirror roc , but by initially shortening the arm cavity length . let us assume design values for the arm cavity length of m and radii of curvature of the input mirrors ( i m ) and end mirrors ( em ) high reflective - coated ( hr ) surfaces of m . such an arm cavity would have a g - factor of . as shown in figure [ fig : gfactor_vs_aclength ] we can achieve a comfortable g - factor of with exactly the same mirrors by just shortening the distance between the input and end mirror by about 0.6 m to a total arm cavity length of m . such a shortening of the arm cavity length corresponds to reducing the beam size on the main mirrors from the targeted value of mm to an initial beam size of only mm ( see lower right subplot of figure [ fig : gfactor_vs_aclength ] ) .
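the connection between arm length , stability and spot size that underlies the table and figure [ fig : gfactor_vs_aclength ] can be sketched with the standard gaussian - beam expression for a two - mirror cavity , w_1^2 = ( lambda l / pi ) sqrt ( g_2 / ( g_1 ( 1 - g_1 g_2 ) ) ) ; the wavelength of 1064 nm is an assumption ( not stated in this excerpt ) and the numbers are purely illustrative , but they reproduce the roughly factor 3.5 spot - size ratio between the two configurations mentioned below .

```python
import numpy as np

lam = 1064e-9   # assumed laser wavelength in metres (not stated in this excerpt)
R = 5.7         # radius of curvature of both cavity mirrors in metres

def spot_radius_on_mirror(L, R1, R2):
    """1/e^2 beam radius on mirror 1 of a two-mirror cavity (Kogelnik & Li)."""
    g1, g2 = 1 - L / R1, 1 - L / R2
    w_sq = (lam * L / np.pi) * np.sqrt(g2 / (g1 * (1 - g1 * g2)))
    return np.sqrt(w_sq)

for L in (10.8, 11.3952):
    w = spot_radius_on_mirror(L, R, R)
    print(f"L = {L:7.4f} m : g = {(1 - L/R)**2:.3f}, spot radius ~ {1e3*w:.1f} mm")

# the coating Brownian noise amplitude scales roughly with 1/w, so the longer,
# marginally stable cavity suppresses it by about the same factor of ~3.5
```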
[ figure fig : sensitivity_tn ( caption ) : fundamental noise limits for the m arm cavity length ; the quantum noise ( red ) is independent of the arm cavity length ; with the initial configuration we expect to be able to directly measure the coating brownian noise in the frequency range between 100 hz and 1 khz , while in the marginally stable configuration with large beam spots thermal noise contributions will be significantly below the quantum noise . ] starting with the m configuration will not only be advantageous for commissioning of the interferometer and noise hunting , but will also allow us to directly measure coating brownian noise . figure [ fig : sensitivity_tn ] shows the fundamental noise limits of the simplified aei 10 m sub - sql interferometer design for the marginally stable configuration with m arm length and the initial configuration with m long arm cavities . since the beam size on all main mirrors is different by about a factor 3.5 between the two arm cavity lengths , the coating brownian noise will scale accordingly , thus offering us the possibility to directly measure coating brownian noise at frequencies between about 100 hz and 1 khz with the initial configuration . this is an interesting opportunity to verify the coating brownian noise level at frequencies around 200 hz , which is the frequency range where coating brownian noise is most important for the advanced gw detectors , and which has so far not been accessible by direct measurement . one of the major steps for improving the sensitivity from the first to the second generation of gw detectors was to significantly increase the beam size on the main test masses , especially at the input mirrors . if larger mirror substrates become available , future upgrades to these advanced detectors might include even further increased beam sizes on the mirrors in order to reduce the influence of thermal noise contributions . this would require operating the arm cavities with g - factors even higher than the ones stated in table [ table : typicalstabilities ] . the experience we will gain from the aei 10 m sub - sql interferometer by step - wise approaching the cavity g - factor of will allow us to study destabilizing effects and stability limitations for related experiments . these results , combined with reliable simulations , can be at least partially transferred to upgrades of second generation gw detectors as well as to third generation gw detectors and may provide guidance in determining maximally feasible beam sizes for these instruments . unlike the typical scenario in which fabry - perot michelson interferometers are applied , in which the arm cavity geometry is not changed during the lifetime of the experiment , the primary goal for the aei 10 m sub - sql interferometer optical design is to identify a configuration which fulfills the requirement of tunable arm cavity stability or , synonymously , which can be operated equally well for different beam spot sizes on the cavity mirrors ( cf . section [ sec : motistep ] ) . [ figure fig : gaussschematiceigenmodecombined ( caption ) : left : the reflected beam is directed into the interferometer where it is matched to the arm cavities ' fundamental eigenmodes by means of curved arm cavity input mirror ar surfaces ; right : moving the arm cavity end mirrors alters the fundamental arm cavity eigenmode ; for the starting configuration , which features shorter arm cavities than the marginally stable design configuration , one finds a larger beam waist at a shorter distance from the input mirror as well as smaller beam spots on both cavity mirrors . ]
owing to the fact that each iteration step , at discrete arm cavity lengths , will feature distinct cavity eigenmodes , it follows that the implementation of a flexible mode matching scheme is the most elegant approach to solving this problem . the performance goals of the instrument require close to optimal mode matching of the cavities at all times . the starting point for our proposed optical configuration is the generation of a collimated laser beam with tunable radius . this forms the input beam to the interferometer and is matched into the arm cavity eigenmodes by curved rear ( antireflection coated ) surfaces on the substrates of the input mirrors . a schematic drawing of this concept is depicted in the left pane of figure [ fig : gaussschematiceigenmodecombined ] . from the technical point of view , a collimated beam can easily be prepared by including a curved mirror into the input optics chain and by choosing the mirror 's roc and its distance to the input beam waist appropriately . to minimize the astigmatism introduced by the collimating mirror , the opening angle between the incident and the reflected beam should be as small as possible . this can be achieved by increasing the distance of beam propagation of the incoming and outgoing beam , e.g. by positioning the collimating mirror near one of the interferometer arm cavity end mirrors . using a collimated input beam has a number of advantages over using a diverging beam . it is generally desired to have a high level of symmetry between the interferometer arms because this has a high impact on the intrinsic cancellation of common mode perturbations at the beam splitter . on the other hand , to provide transmission of rf control sidebands to the detection port of the interferometer , which is typically locked on or very close to a dark fringe , it is necessary to introduce a macroscopic offset in the path lengths between the two arm cavity input mirrors and the beam splitter . this offset is referred to as the _ schnupp asymmetry _ . for a non - collimated input beam the propagation over unequal path lengths would lead to beam parameters which were different on the parallel and the perpendicular arm cavity ims . if perfect mode matching were to be achieved for both arm cavities , this configuration would require either the inclusion of additional optics , or different radii of curvature for the ar surfaces on the input mirrors . a further benefit of using a collimated input beam is the reduction of astigmatism introduced at the beam splitter . [ figure fig : vismapscombined ( caption ) : the left map corresponds to the marginally stable configuration with an arm cavity length of m , while the plot on the right hand side illustrates the situation for the starting configuration with a reduced arm cavity length of m ; in both maps the mode matching efficiency is color coded as a function of the radius of the collimated interferometer input beam and the roc of the ar surface of the arm cavity input mirrors ; by holding the i m ar surface roc constant and changing the input beam radius only , a mode matching efficiency for the short arm cavity configuration can be obtained which is degraded by approximately 1% of the ( theoretically perfect ) matching efficiency of the marginally stable configuration ; all numerical investigations were carried out by means of the matrix formalism introduced in as well as the interferometer simulation software _ finesse _ . ]
a natural measure to benchmark our proposed layout , especially with respect to the flexibility of the mode matching scheme , is the theoretical mode coupling efficiency for the extreme cases , i.e. the initial configuration and the marginally stable configuration . the mode matching efficiency , which is referred to several times throughout this article , is a measure of the coupling of optical power into a fundamental cavity eigenmode . it is defined as the normalized overlap integral of the tem mode of the laser beam and the fundamental cavity eigenmode . the modes and are fully determined by the complex beam parameters and of the input beam and eigenmode at an arbitrary position along the beam axis of propagation . the calculated efficiency can be regarded as an upper bound to the practically achievable mode matching quality . we consider the marginally stable configuration as the reference , meaning that in our analysis all relevant parameters are chosen with respect to achieving perfect mode matching for this configuration . by keeping all parameters , except for the arm cavity length , constant we can quantify the degradation of the mode matching and identify possibilities for the recovery of the mode matching as well as limits to the degree by which it is recoverable . in our case , the mode matching efficiency is determined by two parameters , the radius of the collimated input beam as well as the roc of the arm cavity input mirrors ' ar surfaces . for the marginally stable configuration with an arm cavity length of m and mirror high reflectivity ( hr ) surface radii of curvature of m , we find an input beam radius of mm and an i m ar surface roc of m to result in a perfect mode matching to the arm cavities . ideally , in the real interferometer the arm cavity length is changed in each iteration step by moving the end mirrors only ; this we adopt as a further boundary condition for the stepwise cavity length tuning . if we now keep the optimal values of the marginally stable configuration for all parameters , except for the arm cavity length which we reduce to a value of m by shifting the end mirror towards the input mirror , we observe a substantial decrease of the mode matching efficiency . this scenario corresponds to setting up the initial configuration with improved stability with optics that are optimized for marginally stable operation . owing to the fact that the roc of the i m ar surfaces can not be easily changed in practice , this value is to be considered a constant for all length iteration steps . on the contrary , the radius of the collimated input beam can be tuned to recover the beam matching to the cavity eigenmodes . according to this , by tuning the radius of the collimated input beam down to mm , the matching efficiency for the configuration with shortened arms can be % , albeit with the limitation that the radius of the collimated input beam is the only parameter available for optimization . aspects of the technical realization of a tunable collimated input beam are addressed in section [ sec : op_requirements ] .
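the overlap integral mentioned above can be written compactly in terms of the complex beam parameters ; the sketch below uses the textbook power - coupling expression eta = 4 im ( q_1 ) im ( q_2 ) / | q_1^* - q_2 |^2 for axially symmetric fundamental modes evaluated at a common plane , with purely illustrative numbers -- it is not the finesse model used for the actual design studies .

```python
import numpy as np

lam = 1064e-9   # assumed wavelength

def mode_matching(q1, q2):
    """Power coupling of two axially symmetric TEM00 modes with complex beam
    parameters q1, q2 taken at the same reference plane."""
    return 4 * q1.imag * q2.imag / abs(np.conj(q1) - q2) ** 2

def q_from_waist(w0, dz):
    """Beam parameter a distance dz downstream of a waist of radius w0."""
    return dz + 1j * np.pi * w0 ** 2 / lam

q_ref = q_from_waist(2.7e-3, 0.0)          # illustrative reference eigenmode
for w0 in (2.7e-3, 2.0e-3, 1.0e-3):        # effect of a wrong waist radius
    eta = mode_matching(q_ref, q_from_waist(w0, 0.0))
    print(f"w0 = {1e3*w0:.1f} mm -> eta = {eta:.3f}")
for dz in (0.0, 0.3, 0.6):                 # effect of a shifted waist position
    eta = mode_matching(q_ref, q_from_waist(2.7e-3, dz))
    print(f"dz = {dz:.1f} m  -> eta = {eta:.4f}")
```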
based on experience gained from earlier experiments we consider a degradation of the mode matching efficiency of not more than 1% ( with respect to the perfectly matched case ) tolerable in the sense that this is likely to have a negligible impact on the performance of the instrument . a more elaborate estimation of this requirement based on a detailed noise analysis is subject to future work . the mode matching efficiency as a function of the radius of the collimated input beam and the roc of the arm cavity input mirror ar surfaces for the two arm cavity length extremes is shown in figure [ fig : vismapscombined ] . [ figure caption fragment : m was implicitly assumed ; whereas in theory perfect mode matching can be achieved for the marginally stable design configuration , a mode matching efficiency of up to % is theoretically feasible for the starting configuration with shorter arm cavities . ] the residual degradation in the short arm cavity case can be attributed to a waist position mismatch within the cavities , which can not be compensated by tuning the input beam radius . this is due to the fact that the focal distance of the curved i m ar surface for the collimated input beam is constant , whereas the position of the waist of the arm cavity eigenmode is a function of the cavity length . due to the symmetry of the configuration , the waist position moves towards the ims by half the length change . this is illustrated in the right pane in figure [ fig : gaussschematiceigenmodecombined ] . the evolution of the mode matching efficiency as a function of the collimated input beam radius and the arm cavity length is shown in figure [ fig : mm_map_differentlengths ] . in our case , besides the introduction of the curved collimating mirror , the input optics chain needs to be extended by optics to implement the required feature of radius tunability of the collimated beam . the notion of _ input optics _ commonly summarizes the optical elements which serve the purpose of delivering a pure , well - aligned beam with the optimal geometry to the interferometer . it is possible to conceive various approaches to implementing adjustable mode matching in the input chain . obvious examples include exchanging the collimating mirror in each iteration step and introducing a beam expanding telescope in the collimated beam path , but these turn out to be poor choices . whereas the former option depends on the time - consuming task of replacing a suspended optic and gives rise to a complicated re - alignment procedure in each iteration step , the latter , likewise , requires frequent swapping of optics and may furthermore be an additional source of optical aberrations . our preferred method of input beam shaping is to tune the waist radius of the _ initial _ beam , while keeping the waist position constant , prior to its reflection at the collimating mirror . this can , for instance , be accomplished by means of a beam telescope in combination with a beam expander , which consist of lenses or mirrors . these can easily be shifted on the optical table for fine tuning . the use of active optics may help to avoid the need to exchange fixed focal length optical elements . a matter closely related to the stable operation of the tunable length interferometer is the sub - area of sensing and control of the optical degrees of freedom of the instrument .
typically , rf modulation based heterodyne length signal extraction schemes are employed , which require one or more electronic local oscillators as signal sources , whose frequencies are optimized with respect to the cavity lengths within the interferometer to be controlled . in our case the cavity length tunability may require a flexible rf modulation scheme . however , a detailed treatment of this topic , which can be regarded as a technical issue rather than a fundamental one , is beyond the scope of this article and shall be discussed elsewhere . parameters in the optical layout may deviate from their designated values for a variety of reasons , e.g. due to fabrication tolerances , environment - induced drifts or the nature of the experimental apparatus itself . ideally , the interferometer design should exhibit a high level of immunity to tolerances in its constituting parameters . practically we find imperfections in the optical elements and inaccuracies in the optical setup to degrade the performance of the interferometer or , in the worst case , to even render the instrument inoperable . on the basis of the schematic shown in figure [ fig : gaussschematiceigenmodecombined ] we can identify design parameters which have a direct impact on the maximally achievable mode matching efficiency . these are : the initial beam waist radius as well as its position ( defined by the eigenmode of the triangular cavity in figure [ fig : gaussschematiceigenmodecombined ] ) , the roc of the collimating mirror as well as its position on the table , and the roc of the ims ' ar surfaces . in this section we will investigate the impact of deviations of these parameters from their design values . this knowledge can in turn be utilized to formulate specifications for the required manufacturing precision for the optics . provision of an initial beam with well - defined beam parameters is crucial to meet the requirement for a well - collimated interferometer input beam with a specific radius for each cavity length iteration step . a mismatch of the actual parameters of the initial beam with respect to the ideal ones is likely to have a direct impact on the mode matching quality . the dependence of the arm cavity mode matching efficiency on the initial beam waist position is depicted in the top left plot in figure [ fig:2x2_tolerancing ] . clearly , a deviation of the waist position along the optical axis can be compensated by shifting the position of the collimating mirror by the same amount . it is noteworthy that shifting the collimating mirror position simultaneously alters the length of the incoming as well as the outgoing beam path . nevertheless , due to the reflected beam being collimated , this coupling of the two lengths is neutralized in first order . consequently , the quality of the mode matching is mostly insensitive to length changes in this path . small deviations of the initial beam waist radius from the optimum can be found to have a negligible effect on the mode coupling efficiency , cf . bottom left plot in figure [ fig:2x2_tolerancing ] . a deviation of % in results in a degradation of the mode matching efficiency of less than 0.5% . [ figure fig:2x2_tolerancing ( caption ) : top left : a deviation of the initial beam waist position can be compensated by shifting the position of the collimating mirror on the table ; top right : likewise , imperfections of the collimating mirror roc can be compensated by shifting the mirror 's position ; bottom left : the mode matching efficiency exhibits fairly low susceptibility to deviations from the optimal initial beam waist radius , a deviation of % in resulting in a degradation of less than 0.5% ( note that none of the configurations , except for the marginally stable one , reaches perfect mode matching ) ; bottom right : the susceptibility to i m ar surface roc error increases as the arm cavity length approaches the marginally stable case , where a deviation of comes at the expense of a mode matching efficiency degradation of . ]
the effect of roc imperfections of the collimating mirror as well as a possible workaround is illustrated in the top right plot in figure [ fig:2x2_tolerancing ] . a roc error results in a non - optimal focal length of the mirror . the focal length , in turn , is required to match the distance to the initial beam waist in order to perfectly collimate the beam in reflection . again , the collimating mirror position can be shifted to compensate this type of imperfection . the same argument of length offsets in the reflected beam path being negligible ( see previous section ) holds here , too . alternatively , instead of shifting the mirror position , the roc could e.g. be thermally actuated upon . the susceptibility of the mode matching efficiency to roc imperfections of the ims ' ar surfaces is illustrated in the lower right plot in figure [ fig:2x2_tolerancing ] . it becomes evident that whereas for the starting setup the arm cavity mode matching shows comparatively low susceptibility to this type of imperfection , the effect increases as we approach the marginally stable configuration arm cavity length . while for the initial configuration it takes a roc error of % to degrade the mode matching by % , for the marginally stable setup we find a roc deviation of to result in a mode matching efficiency degradation of . a mode matching efficiency of % for all configurations , including the marginally stable one , could be achieved by means of an i m ar surface roc error lower than % , which corresponds to an absolute roc error of mm . unlike the cases discussed previously , for the i m ar surface roc there is no well - decoupled degree of freedom available in the instrument that can be utilized to easily compensate an error in this parameter . direct thermal actuation does not pose a suitable solution as the radii of curvature on both sides of the mirror would be affected simultaneously , leading to an unwanted distortion of the cavity eigenmode . however , depending on its nature , a residual roc error in both i m ar surfaces could be tackled by different means : a `` common mode '' roc error ( i.e. the sign of both roc deviations , for the parallel and the perpendicular interferometer arm i m , is identical ) of both ims could be compensated by slightly tuning the divergence angle of the interferometer input beam . this could be achieved by means of shifting the collimating mirror from its optimal position or actuating on its roc ( e.g. thermally ) . the pivotal point of this approach is to trade waist radius error for waist position error , the latter of which the cavity mode matching efficiency is generally less susceptible to . if , for instance , in the marginally stable configuration the actual i m ar surface roc turns out to be smaller by % with respect to its optimal value of m , the mode matching efficiency can be recovered to % by increasing the roc of the collimating mirror .
for typical beam path lengths in the collimating stage the required change of the collimating mirror roc would be of the order of tens of centimeters . alternatively , the same can be achieved by shifting the initial beam waist out of the focal point of the collimating mirror , with an offset of the same order as the previously described collimating mirror roc change . a `` differential '' roc error is in general harder to handle but could , if absolutely necessary , be compensated by introducing additional optical elements in the central michelson arms , i.e. between the beam splitter and the arm cavity ims . as mentioned previously , these compensation techniques are first and foremost relevant for configurations very close to or at the marginally stable arm cavity length , and only if mirror ar surface roc fabrication errors turn out to be larger than desired . for the larger part of the operation modes , in terms of different cavity lengths , no such measures need to be taken . in this article we have described a detailed optical layout for the aei 10 m sub - sql interferometer based on a robust procedure to bring the interferometer to its final configuration with marginally stable arm cavities . starting with the arm cavities set to be shorter than eventually required , but with all other parameters unchanged , significantly increased stability of the arm cavity eigenmode may be obtained . this is desirable to allow initial commissioning of the aei 10 m sub - sql interferometer . a step - by - step approach to the final cavity mode is proposed . in order to realize a close - to - optimal mode matching to the arm cavities over the whole range of spot sizes , we employ a collimated beam of variable size in combination with input mirror substrates with curved front and rear sides . we found that the mode matching for different arm cavity lengths can be nearly completely recovered by changing the size of the incident laser beam , while the associated change of the eigenmode waist position only degrades the mode matching at the sub - percent level . the robustness analysis that was performed shows that the most stringent requirements for manufacturing accuracy are imposed by the curvatures of the input mirror rear surfaces , while deviations from all other design parameters are either mostly uncritical or can easily be compensated for by changing free parameters . we have also pointed out that several aspects of the work presented in this article are of interest for the wider community , such as for instance the possibility to directly measure coating brownian noise with the aei 10 m sub - sql interferometer at frequencies around 200 hz . moreover , the proposed optical layout will allow us to determine how close to instability one can realistically operate the arm cavities of a fabry - perot michelson interferometer , which is one of the key questions for future gw detectors . future work will include the derivation of mirror polishing ( and coating ) requirements for the marginally stable arm cavities using numerical simulations with mirror maps . in addition to this , we plan to analyze beam jitter requirements as well as laser noise couplings .
|
the sensitivity of high - precision interferometric measurements can be limited by brownian noise within dielectric mirror coatings . this occurs , for instance , in the optical resonators of gravitational wave detectors , where the noise can be reduced by increasing the laser beam size . however , the stability of the resonator and its optical performance often impose a limit on the maximally feasible beam size . in this article we describe the optical design of a 10 m fabry - perot michelson interferometer with tunable stability . our design will allow us to carry out initial commissioning with arm cavities of high stability , while afterwards the arm cavity length can be increased stepwise towards the final , marginally stable configuration . requiring only minimal hardware changes with respect to a comparable `` static '' layout , the proposed technique will not only enable us to explore the stability limits of an optical resonator with realistic mirrors exhibiting inevitable surface imperfections , but will also give us the opportunity to measure coating brownian noise at frequencies as low as a few hundred hertz . a detailed optical design of the tunable interferometer is presented and requirements for the optical elements are derived from robustness evaluations .
|
for the study of ultra - high energy particles from the cosmos the measurement of the radio emission from secondary particle showers generated in air or dense media is evolving as a new technique .first measurements of the radio emission of cosmic ray air showers had been done already in the 1960s , but with the analog electronics available at that time , the technique could not be competitive with traditional methods like the detection of secondary particles on ground or the measurement of fluorescence light emitted by air showers .recently , the radio detection method experienced a revival because of the availability of fast digital electronics .pioneering experiments like lopes and codalema have proven that radio detection of cosmic ray air showers is possible with modern , digital antenna arrays . due to the short duration of typically less than of the air shower induced radio pulse ,the experimental procedures are significantly different from those of classical radio astronomy .the main goal of the investigations is the detailed understanding of the shower radio emission and the correlation of the measured field strengths with the primary cosmic ray characteristics .the sensitivity of the measurements to the direction of the shower axis , the energy and mass of the primary particle are of particular interest .radio antenna arrays can derive the energy of the primary particle by measuring the amplitude of the field strength , and reconstruct the direction of the incoming primary particle by measuring pulse arrival times - with the remarkable difference to other distributed sensor networks , that with lopes , the arrival direction is reconstructed using digital interferometry which demands a precise time calibration .another goal is the optimization of the hardware ( antenna design and electronics ) for a large scale application of the detection technique including a self - trigger mechanism for stand - alone radio operation .lopes was built as a prototype station of the astronomical radio telescope lofar aiming to investigate the new detection method in detail .lopes is a phased array of radio antennas . featuring a precise time calibration ,it can be used for interferometric measurements , e.g. when forming a cross - correlation beam into the air shower direction .thus , lopes is sensitive to the coherence of the radio signal emitted by air showers , allowing to perform measurements even at low signal - to - noise ratios in individual antennas .this paper describes methods for the calibration and continuous monitoring of the timing of a radio antenna array like lopes and shows that it is possible to achieve a timing accuracy in the order of by combining these methods for such kind of arrays . beside the measurement and correction of group delays and frequency dependent dispersion of the setup ,we use a transmitting reference antenna , a beacon , which continuously emits sine waves at known frequencies . 
this way , variations of the relative delays between the antennas can be detected and corrected for in the subsequent analysis of each recorded event by measuring the relative phases at the beacon frequencies . this is different from the time calibration in other experiments , like antares , anita and aura , which determine the arrival times of pulses emitted by a beacon . in addition , aura has the capability to measure frequency shifts of constant waves for calibration . the use of phase differences of a continuously emitting beacon is reported for ionospheric tec measurements , where the measurement of phases of a beacon signal is used for atmospheric monitoring , not for time calibration . while the individual methods described in this work are more or less standard in sensor based experiments , their combination to achieve the possibility of interferometric measurements is new and applied for the first time in lopes . the main component of lopes consists of 30 amplitude calibrated , inverted v - shape dipole antennas . the antennas are placed in co - location with the particle air shower experiment kascade - grande ( fig . [ fig_map ] ) . kascade - grande consists mainly of stations equipped with scintillation detectors on an area of , where 252 stations compose the kascade array , and further 37 large stations the grande array . besides the 30 lofar - type antennas , lopes consists also of newly designed antennas forming the lopes^star^ array . the main purpose of lopes^star^ is to optimize the hardware for an application of this measuring technique to large scales , e.g. at the pierre auger observatory . all antennas are optimized to measure in the range of to , which is less polluted by strong interference than , e.g. , the fm band . the positions of the antennas have been determined by differential gps measurements with a relative accuracy of a few cm . whenever kascade - grande measures a high - energy event , a trigger signal is sent to lopes , which then stores the digitally recorded radio signal as a trace of with a sampling frequency of , where the trigger time is roughly in the middle of the trace . as a band - pass filter is used to restrict the frequency band to to , lopes is operating in the second nyquist domain and contains the complete information of the radio signal within this frequency band . recovery of the full information is possible by an up - sampling procedure , i.e. the correct interpolation between the sampled data points , which is done by a zero - padding algorithm . this way , sample spacings of can be obtained within reasonable computing time , which is considerably smaller than the uncertainties of the timing introduced by other sources ( see below ) . thus , the sampling rate does not contribute significantly to systematic uncertainties . more details of the experimental set - up , the amplitude calibration , the operation , and the analysis procedures of lopes can be found in references , e.g. . [ figure fig_additionaluncertainty ( caption fragment ) : the x - axis shows the width of the gaussian distribution of the additional timing uncertainty added to each antenna ; the error bars are the rms of 100 repetitions which have been performed for each uncertainty . ]
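the zero - padding interpolation mentioned above can be sketched as follows for a baseband , band - limited trace ; the correct placement of the band for data sampled in the second nyquist domain is omitted here , and the 80 mhz sampling rate in the usage example is an assumption , so this is a schematic illustration rather than the lopes analysis code .

```python
import numpy as np

def upsample_zero_padding(trace, factor):
    """Interpolate a real, band-limited trace by padding its spectrum with zeros.

    This sketch assumes a baseband signal; for data recorded in the second
    Nyquist domain the band would first have to be placed correctly.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[:len(spec)] = spec
    return np.fft.irfft(padded, n=n * factor) * factor

# usage sketch with an assumed 80 MHz sampling rate: up-sampling by 16
# reduces the sample spacing from 12.5 ns to ~0.78 ns
fs = 80e6
t = np.arange(256) / fs
pulse = np.exp(-((t - 1.6e-6) / 50e-9) ** 2) * np.cos(2 * np.pi * 30e6 * t)
fine = upsample_zero_padding(pulse, 16)
```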
the angular resolution , respectively source location , of lopes is limited to about due to the uncertainties of the emission mechanism of the radio pulse , and thus , by the uncertainties in the shape of the wave front of the radio emission . consequently , for lopes , improving the accuracy of the time calibration to about is not expected to significantly improve the angular resolution . instead , this good timing resolution is a necessary requirement to enable the use of lopes as a digital radio interferometer . hence , this is the most important among several reasons why a precise time calibration with a relative accuracy in the order of or below is desirable for a radio air shower array : * interferometry : a timing precision which is at least an order of magnitude better than the period of the filter ringing ( for lopes ) allows one to perform interferometric measurements if the baselines of the interferometer are adequate for the angular scale of the observed source . as the distance of the source of radio emission from cosmic ray air showers to the lopes antenna array ( several km ) is much larger than the extension of the source region and the lateral extension of the array ( m ) , the angular extension of the source is small . hence , one expects that every antenna detects the same radio pulse just at a different time . thus , lopes should see coherent radio signals from air showers on the ground , which has been experimentally verified , and can be exemplarily seen in figure [ fig_exampleevent ] . this coherence is measurable , e.g. , by forming a cross - correlation beam into the air shower direction , and can be used to distinguish between noise ( e.g. thermal noise and noise originating from the kascade particle detectors ) and air shower signals . the requirement of a timing precision in the order of for the interferometric cross - correlation beam analysis can be quantitatively verified by adding an additional , random timing uncertainty to each antenna and studying the influence on the reconstructed cross - correlation beam , which is a measure for the coherence . this has been done for the example event ( figure [ fig_exampleevent ] ) by shifting the traces of each antenna by an additional time taken from a gaussian random distribution ( see figure [ fig_additionaluncertainty ] ) . the height of the cross - correlation beam decreases significantly when the added uncertainty is larger than . for uncertainties the height is not reduced further , as the analysis always finds a random correlation between some antennas . as most of the lopes events are closer to the noise than the shown example , reconstructing the cross - correlation beam correctly is important , because a reduced height can lead to a signal - to - noise ratio below the detection threshold . * polarization studies : different models for the radio emission of air showers can , among others , also be tested by their predictions on the polarization of the radio signal ( e.g. , the geo - synchrotron model predicts predominantly linear polarization of the electric field in a direction depending on the geometry of the air shower ) . the capability of any antenna array to reconstruct the time dependence of the polarization vector at each antenna position , and thus , to distinguish between linearly and circularly polarized signals , depends strongly on the relative timing accuracy between the different polarization channels of each antenna .
* lateral distribution of arrival times : according to simulations , the lateral distribution of the pulse arrival times should contain information about the mass of the primary cosmic ray particle . only a precise relative timing , even between distant antennas ( m for lopes ) , can enable us to reveal this information , and to measure the shape of the radio wave front in detail .as stable clocks for the daq electronics and the trigger signal of lopes are distributed via cables , the time calibration is basically reduced to the measurement of the electronics and cable group delays , their dependence on the frequency ( dispersion ) , and their variations with time .originally , the delays were measured with the radio emission from solar burst events , and their variations were monitored by measuring the phase of the carriers of a television transmitter .meanwhile , we have developed new methods for the time calibration which do not depend on external sources out of our control .namely , we measure the delays with a reference pulse emitted at a known time , correct for the dispersion of the analog electronics and have set up an emitting antenna ( beacon ) which continuously transmits two narrow band reference signals to monitor variations of the delays with time .these three methods for calibration and monitoring of the timing are combined to achieve a timing accuracy in the order of for each event measured with lopes .nevertheless , these methods are in principle independent from each other , and for other experiments one might , e.g. , determine delays by another method , but still use the beacon method to continuously monitor the relative timing .for lopes , as a digital radio interferometer , mainly the relative timing between the different antennas is of importance , and the absolute event time has to be known only roughly to combine the lopes events with the corresponding kascade - grande events .thus , the determination of the pulse arrival times at each antenna , and therefore the measurement of the delays , is most important on a relative basis .hereby , the delay of each channel ( antenna and its analog electronics ) is different , e.g. , because different cable lengths are used .we define the absolute delay of a channel as the time between the arrival time of a radio pulse at an antenna and the time when it appears in the digitally measured trace : .the more important relative delay between two antennas and is the difference between the absolute delays of these antennas : .using solar bursts all relative delays could be determined directly . measuring the delays with respect to a common reference time is equivalent if the difference is the same for all antennas .these delays measured with respect to are related to the absolute delays by , and the relative delays can be easily derived from the measured delays by . for each antenna the measurement of the delay is performed as follows : we disconnect the cable from the antenna and connect it to a pulse generator instead , which emits a short calibration pulse at a fixed time after a normal kascade - grande trigger ( fig .[ fig_delaysetup ] ) . as reference time define the zero point of the lopes trace ( i.e. ) , which is determined by the kascade - grande trigger , because it starts the lopes read out . 
as it simultaneously triggers the pulse generator of the delay measurement , the condition is fulfilled , and the delay can be obtained as the arrival time of the calibration pulse in the trace of the calibration event : . [ figure fig_delaysetup ( caption fragment ) : v and is fed directly into the antenna cables ; the relative delay is mainly caused by different cable lengths . ] this pulse arrival time is determined in a subsequent analysis as the time of the positive maximum of the up - sampled trace ( like shown in fig . [ fig_delayexample ] ) . when repeating the measurement for the same channel several times , the measured pulse arrival time is stable within about one sample of the up - sampled trace ( rms of successive events , trace up - sampled to sample spacing ) , if the amplitude of the calibration pulse is chosen high enough for a sufficient signal - to - noise ratio . hence , this measurement method enables us to determine the relative delays with a statistical error of about . furthermore , systematic errors of the delay measurements have been studied in several ways , e.g. , by repeating the measurements . measurements of the relative delays performed on two consecutive days deviate by from each other ( mean and standard deviation of 10 measurements ) . as another check for systematic effects , the pulse arrival time has been determined in four different ways , namely as the time of the positive maximum of the trace , the negative maximum of the trace , the maximum of a hilbert envelope of the trace , and the crossing of half height of a hilbert envelope of the trace . the statistical error of the relative delays is about the same for each method ( ) . but the value of the relative delays depends on the way the pulse arrival times are calculated . only the relative delays calculated by the positive and negative maximum of the trace agree within the statistical error of about . the relative delays calculated by the maximum of the envelope and the crossing of half height of the envelope disagree slightly with each other , and the delays calculated by the positive or negative maximum of the trace are highly inconsistent with the delays calculated by the maximum of the envelope , as they all have a statistical error of about , but differ by up to a few nanoseconds . [ figure fig_deviationshistogram ( caption fragment ) : deviations calculated by the negative maximum of the up - sampled trace and the maximum of the envelope , mean shifted to , standard deviation ; the histogram contains 30 deviations of one delay measurement campaign of all 30 lopes antennas . ] under the assumption that the electronics of all channels behaves identically , all methods for the determination of the pulse arrival times should lead to exactly the same relative delays . hence , the explanation for the observed inconsistency is that the properties of the different channels are not exactly the same . indeed , after correction for all measured differences , namely the amplification factor and the dispersion ( see next section ) , the inconsistency between the delays obtained from the different methods is reduced . but still , there remains a deviation of up to a few nanoseconds for some channels , and the average deviation between the relative delays calculated by the maxima of the trace and by the envelope of the trace is of about ( see fig . [ fig_deviationshistogram ] ) . this shows the difficulty of fully correcting for different channel properties . or in other words , in designing the electronics for a new radio antenna array one has to pay attention that components are from the same batches , etc .
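the four definitions of the pulse arrival time compared above can be reproduced in a few lines ; this is a schematic re - implementation under assumed array conventions , not the lopes software .

```python
import numpy as np
from scipy.signal import hilbert

def arrival_times(trace, dt):
    """Return the four arrival-time estimators compared in the text:
    positive maximum, negative maximum, envelope maximum, and the first
    crossing of half of the envelope maximum."""
    t = np.arange(len(trace)) * dt
    env = np.abs(hilbert(trace))              # Hilbert envelope of the trace
    t_pos = t[np.argmax(trace)]
    t_neg = t[np.argmin(trace)]
    t_env = t[np.argmax(env)]
    t_half = t[np.argmax(env >= 0.5 * env.max())]   # first half-height crossing
    return t_pos, t_neg, t_env, t_half
```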
in the standard analysis of the shower reconstruction a cross - correlation beam is formed using the trace and not its envelope . therefore , we have decided to use the delays calculated by the time of the positive or negative maximum of the trace of the calibration pulse . thus , we minimize the systematic uncertainties introduced by the effect mentioned above . but still , there is another source of systematic uncertainty : the distance of the positive and negative maximum of the trace is about because the response of the bandpass filter causes an oscillation with the center frequency of the used band . this oscillation and , thus , the distance between positive and negative maximum and the resulting relative delays depend only little ( ) on the shape of the calibration pulse . this could translate into a systematic uncertainty of the same order when determining pulse arrival times , if the pulse shape of cosmic ray radio pulses changes with lateral distance , as is predicted by simulations . another check for systematic errors was to shift the emission time of the calibration pulse by integer and non - integer multiples of the sampling clock , and no effect on the relative delays has been observed . this proves that up - sampling works reliably , and that the determination of the arrival time of radio pulses does not depend on how these pulses arrive relative to the original sampling clock . consequently , neither up - sampling nor the original sampling rate of introduce any significant systematic errors . summarizing , the total error on the relative delays is below for the standard cross - correlation beam analysis , which is more than sufficient for interferometric measurements with lopes . for other analysis methods , like a lateral distribution of pulse arrival times , the total error will be higher , due to the inconsistency of the different ways of calculating the pulse arrival time . in such a case the uncertainty is estimated to be in the order of . the relative delays obtained by the described method are consistent with those determined earlier by solar burst measurements . the new method , however , has two fundamental advantages compared to using astronomical sources : the resulting delays do not contain any systematic uncertainty related to the errors of the measurement of the antenna positions , and the delay calibration can be done at any time . for lopes , this is especially important because , due to the high noise level in karlsruhe , solar bursts are the only astronomical source visible , and thus , continuously emitting astronomical sources are not available for calibration . the described method for delay measurement is repeated roughly once per year or whenever any changes in the experimental setup require it . dispersion is the frequency dependence of the group velocity , respectively of the group delay , of a system . in case of dispersion , waves at different frequencies propagate with different speeds , leading to a linear distortion of broad band radio pulses .
for lopes , the dispersion of the analog electronics ( which is mainly caused by the band - pass filters ) has been measured with a vector network analyzer . hence , the dispersion of the filter can be removed in the subsequent analysis by multiplying the appropriate phase corrections to the frequency spectrum of any recorded data . the effect of the dispersion has been studied with test pulses from a pulse generator which has been connected to the analog electronics instead of the antenna , as for the delay measurements . different shapes of test pulses have been examined , and one example is shown in figure [ fig_dispersion ] before ( left ) and after correction for the dispersion ( right ) . for most pulse shapes the dispersion leads to a change in amplitude and fwhm of a hilbert envelope of the up - sampled field strength trace of about % . as the influence of the filter dispersion is largest close to the edges of the frequency band , the mentioned distortion effects can be reduced from about ten to a few percent when using only the sub - band from to . for radio experiments with unknown dispersion such a selection of an inner sub - band would be a possibility to reduce systematic uncertainties originating from pulse distortions . because the radio pulses from real cosmic ray events are similar to the used test pulses ( at least within the used frequency band ) , distortion effects of the same order of magnitude are expected for real events ( i.e. changes of a few percent in amplitude and fwhm ) . in addition the pulse arrival time changes by up to a few nanoseconds , depending on how it is calculated ( e.g. , value at pulse maximum or at the crossing of half height ) . these are changes of the absolute value which have a similar effect for all channels , as identical electronics is used , and thus , the dispersion of each channel is approximately the same , which has been verified by measurement . under the assumption that the cosmic ray radio pulse shape does not change much over the lateral extension of lopes ( m ) , it should be distorted by every antenna and its corresponding electronics in the same way . this means that the impact of the dispersion on the relative timing is expected to be much smaller than the observed absolute shifts of a few nanoseconds . consequently , the dispersion of lopes , even if not totally corrected for , should not spoil the capability to achieve a relative timing accuracy of about . as all lopes antennas are of the same type , their dispersion is expected to affect the relative timing between the individual antennas only marginally . by this , it is acceptable that the dispersion of the lopes antenna type is not known . it is difficult to measure , because the lopes antenna can be used as a receiver only , and thus the two antenna method which is normally used for the determination of the dispersion can not be applied . in figure [ fig_exampleevent ] , the traces of a real cosmic ray event are corrected for the dispersion of the filters , and the remaining pulse distortion seems to be smaller than that shown in figure [ fig_dispersion ] , where the calibration pulse is affected by the dispersion of the filters only . thus , the sum of the dispersion of all other components , including the antenna , is assumed to be lower than the dispersion of the filters . nevertheless , due to the high noise level for real events , and because the exact shape of the cosmic ray radio pulses is unknown , this can not be expressed quantitatively .
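removing a measured filter dispersion amounts to multiplying the data spectrum with the inverse of the measured phase response ; a schematic version , assuming the phase response has been measured on some frequency grid and that the overall group delay is handled separately , could look like this .

```python
import numpy as np

def correct_dispersion(trace, dt, f_meas, phase_meas):
    """Undo a measured, frequency-dependent filter phase.

    f_meas, phase_meas : filter phase response (radians) versus frequency (Hz),
    e.g. from a vector network analyzer, interpolated onto the FFT bins.
    Names and calling conventions are illustrative assumptions.
    """
    spec = np.fft.rfft(trace)
    f = np.fft.rfftfreq(len(trace), dt)
    phi = np.interp(f, f_meas, phase_meas)    # measured phase on the FFT grid
    return np.fft.irfft(spec * np.exp(-1j * phi), n=len(trace))
```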
for lopes^star^, which uses different antennas and electronics, the dispersion of the complete system has been measured. it was found that the dispersion of the cables can be neglected, but the dispersion of the antenna itself cannot; it can be of the same order of magnitude as the filter dispersion. for this reason, future radio experiments should aim either for antennas with low dispersion or for antennas with well-known and, thus, correctable dispersion. correcting pulse distortions induced by the antenna dispersion is especially important for larger-scale antenna arrays, if it turns out that the cosmic ray radio pulse shape changes with lateral distance. this could also have implications for the application of interferometric analysis methods, e.g., forming a cross-correlation beam. hence, larger antenna arrays, like aera at the pierre auger observatory or lofar, have the opportunity to test this.

experience with lopes has shown that the timing is not absolutely stable. instead, once in a while, jumps by one or two clock cycles ( ) occur. in addition, small drifts or changes of the relative delays, e.g., with changing environmental temperature, cannot be excluded, as the electronics has not been designed for sub-nanosecond stability. independent of the reasons, any changes of the timing have to be accounted for to achieve an overall timing accuracy of the order of . as the exact variations of the delay are not predictable, a continuous monitoring of the timing is needed which provides the ability to correct the timing in the subsequent analysis on an event-by-event basis. for this monitoring we have deployed an emitting dipole antenna, a beacon, on top of a building of the karlsruhe institute of technology, at about m distance to the center of lopes (figs. [foto] and [fig_map]). this beacon permanently transmits two sine waves at constant frequencies of and ( width) at a low power of ( ). thus, every lopes event contains a measurement of the phases at these frequencies, which can be obtained by a fourier transform into the frequency domain. any variation in the relative timing between two antennas can be detected as a variation of the phase differences at each beacon frequency.

the phase of the continuous beacon signal at an antenna depends on the distance and the orientation angle of the antenna towards the beacon as well as on the delay of the corresponding channel. let us assume for a moment that there are two antennas at equal distance and angle to the beacon and with equal delay. if we consider just one beacon frequency, e.g. , the two antennas would measure the same phase at this frequency (except for small deviations due to noise). thus, a variation in the relative delay between the two antennas would immediately lead to a change of the measured phases. if, e.g., the relative delay shifts by , the difference between the measured phases at the two antennas would be .
correspondingly, a measured phase difference can be converted into a shift of the relative delay. now let us consider the more realistic case that we have two antennas with different angle and distance towards the beacon and different electronics and cable delays. as the distance and the effect of the antenna orientation are not precisely known (because there is no need to know them), we expect to measure a different phase at each of the two antennas at the beacon frequency. as long as neither the distance, nor the orientation, nor the relative delay changes, the difference between the phases measured at both antennas will be arbitrary, but constant. thus, again, changes of the relative delay can be detected as changes in the phase difference. the only difference to the case above is that these changes of the phase differences will happen not with respect to , but with respect to . the important point is to define , for each antenna (with respect to a fixed reference antenna, chosen arbitrarily) and each beacon frequency, at a time when the delay is exactly known. therefore, we determine as an average over the events taken at the time of the delay calibration described in section [sec_delay]. this way we can monitor and subsequently correct any variation in the timing back to the values obtained in the delay calibration.

the accuracy of the measurement of the phase differences is limited by noise and by systematic effects. the noise of the phase measurement depends (within reasonable limits) on the signal-to-noise ratio of the beacon signal, and the amplitude of the beacon emission can be chosen such that a sufficient accuracy is achieved. in the case of lopes we have chosen to emit each frequency at . the noise of the phase measurement has been determined from the jitter of the phase differences in successive events and corresponds to an accuracy of the order of . aside from that, the additional noise introduced by the beacon signal to the data is negligible, as the cosmic ray radio pulses are broad band and extend over the entire frequency spectrum. the beacon signal, on the other hand, is visible only in a few fixed and well-defined frequency bins and can be suppressed by artificially reducing the amplitude at these bins in the data analysis, or in the hardware of the trigger logic if a radio self-trigger system is applied.

[figure caption, fragment: ... ns ( ) between summer and winter. details see text.]

in figure [fig_phasediffs], the phase differences at both beacon frequencies between two lopes antennas are shown for the first ten events of each day for one year. an annual drift of the phase differences which corresponds to about ( ) can be seen consistently at both frequencies. the reason for this annual drift is not definitely known, but it might be due to environmental effects, in particular the changing temperature, as the effect is largest in summer and winter. also a jump in the timing of two clock cycles ( ) is visible, which occurs during one day.
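expressed in code, the monitoring boils down to reading off one phase per beacon frequency and event and converting the drift of the phase difference into a delay. the following sketch assumes an fft-based read-out; all function and variable names are illustrative:

    import numpy as np

    def beacon_phase(trace, sample_spacing, f_beacon):
        # phase of the narrow beacon line, read off at the closest fft bin
        spec = np.fft.rfft(trace)
        freq = np.fft.rfftfreq(len(trace), d=sample_spacing)
        return np.angle(spec[np.argmin(np.abs(freq - f_beacon))])

    def timing_correction(phase_ant, phase_ref, phi_0, f_beacon):
        # drift of the phase difference with respect to its value phi_0
        # at delay-calibration time, wrapped into (-pi, pi]
        drift = np.angle(np.exp(1j * (phase_ant - phase_ref - phi_0)))
        # a full turn of 2*pi corresponds to one beacon period 1/f_beacon
        return drift / (2.0 * np.pi * f_beacon)

the returned value is the timing shift that has to be corrected for in the analysis of the corresponding event.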
here it becomes obvious that at least two beacon frequencies are needed, as changes in the timing larger than half a period ( ) could otherwise not be detected unambiguously. a consistency check between the results at both frequencies is also necessary to identify a few noisy events (like the outlier in the bottom left corner of figure [fig_phasediffs]) for which the beacon correction of the timing cannot be performed. when inspecting the data carefully, some features in the plot of the phase differences can be seen which do not occur simultaneously at both frequencies - contrary to the visible general drift. these features are due to systematic effects and in principle decrease the achievable timing accuracy. possible reasons for such systematic effects are changes in the emitted beacon signal (e.g., if the frequency generation is not absolutely stable), changes in the propagation of the signal from the beacon to the lopes antennas (e.g., due to different atmospheric or ground properties), and non-random (e.g., human-made) noise at the beacon frequencies. as the scale of the observed features is significantly smaller than , they have not been investigated in detail, and they should not limit the ability of the beacon method to achieve a timing accuracy of .

as a cross-check, the changes of the delays between two dates roughly one year apart have been measured with the method described in section [sec_delay] and compared to the changes of the beacon phase differences between the same two dates. the relative delays measured with the method of section [sec_delay] changed by between the two dates (mean and standard deviation of the absolute change of all 30 antennas). this in itself is not unexpected, as the electronics was not designed to be stable on a sub-nanosecond level. comparing these changes of the delays measured with the method of section [sec_delay] with the changes observed by the beacon reveals some systematic effects. in the ideal case, the phase differences at both beacon frequencies should change by exactly the amount corresponding to the changes of the delays. in reality, the changes observed at the two beacon frequencies are not fully equal: the changes of the phase differences at the first and at the second beacon frequency differ by . this is larger than the statistical error, which is about (see above). the changes observed at both beacon frequencies have been averaged to check whether they are consistent with the changes of the delays measured with the method of section [sec_delay]: the changes determined by both methods deviate by from each other (average of the individual deviations of all antennas). hence, systematic effects on the beacon signal seem to play a role, and it cannot be excluded that the observed drifts of the phase differences are - at least partly - not due to drifts of the electronics or cable delays, but due to these systematic effects.
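the need for a second beacon frequency can be made explicit in a few lines: the phase drift at one frequency determines the delay shift only up to integer multiples of one period, and the second frequency selects among these candidates. the search window and all names in the sketch below are our own illustrative choices:

    import numpy as np

    def resolve_shift(drift_1, drift_2, f_1, f_2, max_shift=50e-9):
        # delay candidates compatible with the phase drift at the first frequency,
        # differing by integer multiples of one period 1/f_1
        k = np.arange(-np.ceil(max_shift * f_1), np.ceil(max_shift * f_1) + 1)
        candidates = drift_1 / (2 * np.pi * f_1) + k / f_1
        # keep the candidate that also reproduces the drift at the second frequency
        mismatch = np.angle(np.exp(1j * (2 * np.pi * f_2 * candidates - drift_2)))
        return candidates[np.argmin(np.abs(mismatch))]

a disagreement between the two frequencies that is too large for any candidate is then a natural flag for the noisy events mentioned above.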
nevertheless, these systematic effects do not undermine the ability of the beacon method to monitor and correct real changes in the timing, like the described clock jumps, and to provide for each event a timing accuracy of the order of , which is required for digital radio interferometry. finally, a beacon can not only be used to monitor the timing of an antenna array, but is also valuable to check the health of the experimental setup in general. as it provides a defined reference signal visible in each event, most possible failures of the antennas or the electronics are detectable by monitoring the beacon signal. for example, we have been able to find the exact date when we accidentally switched the cables of the two polarization channels of one antenna, by investigating the phase differences at the beacon frequencies between these channels.

the methods described for the time calibration of lopes are especially useful for radio antenna arrays in a noisy environment, where a calibration with astronomical sources is not possible. they allow the determination of the electronics and cable delays with a very high precision, which can in principle be below . systematic effects, however, limit the actually achieved accuracy of the delay measurement to below for our standard, interferometric cross-correlation beam analysis and to about for the direct measurement of pulse arrival times. in addition, the dispersion of the electronics has been measured and is taken into account in the analysis of cosmic ray air shower radio pulses, to avoid systematic uncertainties in the pulse height which can be up to %. furthermore, we continuously monitor any variation of the timing with narrow-band reference signals from a beacon, thus achieving an overall timing accuracy of the order of for the cross-correlation beam analysis (see table [tab_summary]). this way the nanosecond time resolution required for digital radio interferometry is achieved for each event, and the phased antenna array lopes can be used as a digital interferometer which is sensitive to the coherence of the air shower radio emission. finally, monitoring of the timing with a beacon is an interesting feature for any radio antenna array. as, in principle, the phase differences at the beacon frequencies are sensitive to any variation of the relative timing, even the timing accuracy of antenna arrays without stable clocks should be improvable to about . hence, a beacon should provide any radio experiment in the mhz regime with the capability to do interferometric measurements. for example, the application of a beacon and the possibility of interferometric measurements of the cosmic ray air shower radio pulses with larger arrays is presently being investigated at the newly developed antenna array aera at the pierre auger observatory. also lofar will apply the described methods for time calibration and observe the radio emission of cosmic ray air showers with a much denser array.

the authors would like to thank the technical staff of the karlsruhe institutes for their help. sincere thanks to the entire lopes and kascade-grande collaborations for providing the working environment for these studies.
|
digital radio antenna arrays , like lopes ( lofar prototype station ) , detect high - energy cosmic rays via the radio emission from atmospheric extensive air showers . lopes is an array of dipole antennas placed within and triggered by the kascade - grande experiment on site of the karlsruhe institute of technology , germany . the antennas are digitally combined to build a radio interferometer by forming a beam into the air shower arrival direction which allows measurements even at low signal - to - noise ratios in individual antennas . this technique requires a precise time calibration . a combination of several calibration steps is used to achieve the necessary timing accuracy of about 1 ns . the group delays of the setup are measured , the frequency dependence of these delays ( dispersion ) is corrected in the subsequent data analysis , and variations of the delays with time are monitored . we use a transmitting reference antenna , a beacon , which continuously emits sine waves at known frequencies . variations of the relative delays between the antennas can be detected and corrected for at each recorded event by measuring the phases at the beacon frequencies . lopes , radio detection , cosmic ray air showers , calibration , timing 95.55.jz , 95.90.+v , 98.70.sa
|
electricity generation from solar and wind resources is volatile and has limited adaptability to changes in electricity demand. for a stable operation of the power system there must be a consistent balance between supply and demand. one option for achieving this balance is the use of energy storage units, which can contribute substantially to the expansion of renewable energy sources (res) and their integration into existing grids. another possible way in which large shares of renewable energy can be integrated and electricity from res can be used efficiently, especially in times of overproduction, is to replace fossil fuels as the main energy source for heat by converting electricity into heat (power-to-heat, pth), resulting in a lower overall primary energy consumption and lower co2 emissions. if combined with thermal storage units, pth storage systems are a highly flexible option for uncoupling conversion and utilization. these heat storage units can be charged at considerably low losses during periods with high shares of excess electricity from res and provide heat in times of low feed-in. a method for storing heat energy used in germany was implemented decades ago by night storage heating systems in private households. they operated with low overall efficiencies and subsidized electricity prices at fixed schedules. today, private households are still suitable for the deployment of pth storage systems due to the high share of primary energy demand utilized for heating and hot water supply, the overall changes in the energy system and the characteristics of today's heat storage units. if we assume that the electricity price represents the fluctuating availability of energy, these heat storage systems need to be operated in a cost-optimal manner. the optimality of a charging strategy for a pth storage unit is determined by the overall electricity acquisition costs. the task of identifying optimal strategies is closely related to the field of mathematical optimization and can be described as a minimization problem. standard solvers can be applied to calculate solutions, albeit with tremendously high overheads in computational time. hence, the investigation of complex problems may not be possible within a reasonable time frame. the specific structure of the minimization problem, based on a mathematical model of the storage units, allows the development of a new optimization method which to our knowledge has not yet been implemented. this work introduces an innovative optimization algorithm and presents a proof of optimality. the solution of storage-related optimization problems can now be calculated in a fraction of the time required by standard algorithms. as a particular example, we focus on pth storage units installed in private households. for these systems, cost-optimal operating strategies including storage units with thermal energy losses are described and used for an iterative determination of optimal system designs.
beyond this particular field of application, we illustrate the potential benefits of a problem - specific optimization approach in terms of mathematical simplicity and computational gain as compared with standard solvers .the construction of mathematical energy storage models and the formulation of the corresponding optimization problems , with and without constant as well as non - constant energy losses , are described without units in the following paragraphs .we investigate an electrical charging system with an attached storage unit as a buffer and consider the parameters of electric power consumption and storage capacity .the system needs to be connected to a virtual electricity grid which provides an altering price signal reflecting the availability of electricity .a further precondition is that the energy demand is covered exclusively by the system .now , we formulate the mathematical model , without energy losses of the storage unit over time or losses during the conversion process .we discretize the considered period of time into intervals of equal size .the electricity prices are denoted by and the energy demands are represented by .furthermore , we define the amount of energy used to charge the storage unit for each time - interval by and assume that the prices and the demands are known . for technical reasons , the values of are constrained. each charge value can not be negative ( no discharge to the electricity grid or re - electrification ) and has an upper bound , which implies : to cover at least the energy demand for each time - interval , additional constraints are : the storage level for each time - interval is defined as the difference between the quantity of charges and the quantity of demands up to this time - interval . due to design - related restrictions , the storage level is bounded above by a maximum storage capacity value .hence , we have : in order to minimize the total costs to cover at least the energy demand over a period of time of size , we have to solve the following optimization problem : this problem is a special instance of a so - called _ linear program _ and can be solved using standard algorithms for general linear programs , such as the simplex method or interior point methods .the problem is also a special case of the problem which is described in [ app : algorithm ] .contrary to the methods mentioned above , we utilize the special structure of and hence also of the related problem to develop a new algorithm .the basic idea of the new algorithm is to charge the storage unit during periods when the acquisition prices are low in order to avoid further purchases at times when the prices are higher .in addition to the price levels , the algorithm also takes into account the demand , the storage level and the maximum charge power for each time - interval .therefore , the storage units are charged as much as possible at times of negative acquisition costs and as much as required , if the price is non - negative . the new algorithm to solve the problem is discussed in detail in [ app : algorithm ] and also the pseudo - code is presented ( see p. 
) . below, we present the pseudo-code of the new algorithm exemplarily fitted to the problem, if we set and for all . if we further define the permutation as described in [app:algorithm], corresponding to the increasing prices by , the pseudo-code of the new algorithm fitted to the problem is given by: the optimality of the new algorithm on page is proven (in [app:algorithm]) and is one key element of this work. furthermore, the source code of an implementation in python is included in [app:python]. this method solves problems even for large in a fraction of the time required by standard solvers, because at most floating point operations and comparisons are required to compute a solution (see [app:algorithm]). to demonstrate the efficiency of the new algorithm, we compare the runtimes of the new algorithm and the common solver _linprog_ in the following. for this comparison, a straightforward implementation of the new algorithm in python (cf. [app:python]) and the _linprog_ implementation as available in matlab 2015b were used. the calculations were performed on a desktop computer (core i7-930 processor (2.80 ghz), 12 gb ram, matlab 2015b, python 2.7), based on input data for the problem as described in section [subsec:subsec1]. the results are presented in table [table:runtime] and emphasize the efficiency of the new algorithm and its beneficial scaling with the size of the problem. among other things, this efficiency makes it possible to perform detailed sensitivity analyses and to solve highly complex optimization problems in which the problem has to be solved thousands of times as a sub-problem (cf. section [sec:sec3]). for instance, the calculations performed in section [subsec:subsec2] would have taken more than 60 days using the solver _linprog_, compared with about 25 minutes for the newly proposed algorithm.

[table [table:runtime]: calculation times of the new algorithm compared with _linprog_ (matlab) for the same problem, with respect to the problem size. in addition to the general efficiency of the new algorithm, its beneficial scaling behavior (the runtime increases by a factor of if the problem size is increased from to , whereas the scaling factor for _linprog_ is about 1300) becomes evident.]

the python source code of the implementation ([app:python]) is:

    #!/usr/bin/python
    # -*- coding: utf-8 -*-
    # optimization algorithm to solve the following constrained
    # linear program:
    #     min c*x,
    #     subject to: 0 <= x_i <= u_i
    #     and a_i <= x_1 + ... + x_i <= b_i, for all i = 1, ..., n.
    # this code evolved in connection with the article
    # "cost-optimal operation of energy storage units:
    #  benefits of a problem-specific approach"
    # ==================================================================
    import numpy

    def algorithm(a, b, c, u):
        # process the time intervals in order of increasing price
        sigma = numpy.argsort(c)
        dim = numpy.size(c)
        x = numpy.zeros(dim)
        for k in range(0, dim):
            i = sigma[k]
            m_1 = numpy.max(numpy.insert(a[:i], 0, 0))
            m_2 = numpy.max(numpy.insert(a[i:], 0, 0))
            m = numpy.min(b[i:])
            if c[i] < 0:
                # negative price: charge as much as the bounds allow
                x[i] = numpy.min([u[i], m - m_1])
            else:
                # non-negative price: charge only as much as required
                x[i] = numpy.min([numpy.max([0, m_2 - m_1]),
                                  numpy.min([u[i], m - m_1])])
            # account for the chosen charge in the remaining cumulative bounds
            for l in range(i, dim):
                a[l] = a[l] - x[i]
                b[l] = b[l] - x[i]
        return x
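as an illustration of how the storage model maps onto the inputs of this function, the following hedged sketch builds the cumulative bounds from hypothetical prices, demands, maximum charge power and storage capacity, and cross-checks the resulting cost against a generic lp solver (the example data and the scipy-based check are ours and not part of the original study):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n = 24                                   # number of time intervals
    c = rng.uniform(-0.05, 0.30, n)          # electricity prices (may be negative)
    d = rng.uniform(0.0, 1.0, n)             # energy demand per interval
    u = np.full(n, 2.0)                      # maximum charge per interval
    cap = 5.0                                # storage capacity
    a = np.cumsum(d)                         # cumulative charge must cover demand ...
    b = a + cap                              # ... and may exceed it by at most the capacity

    x = algorithm(a.copy(), b.copy(), c, u)  # the function modifies a and b in place

    # generic lp solver on the same problem: a lower-triangular matrix of ones
    # turns the cumulative-sum constraints into ordinary inequality constraints
    L = np.tril(np.ones((n, n)))
    res = linprog(c, A_ub=np.vstack((L, -L)), b_ub=np.hstack((b, -a)),
                  bounds=list(zip(np.zeros(n), u)))
    print(c @ x, res.fun)                    # the two costs should agree

in the sketch the storage is assumed to be empty at the start of the period; a non-zero initial level could be accounted for by shifting the cumulative bounds accordingly.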
|
the integration of large shares of electricity produced by non - dispatchable renewable energy sources ( res ) leads to an increasingly volatile energy generation side , with temporary local overproduction . the application of energy storage units has the potential to use this excess electricity from res efficiently and to prevent curtailment . the objective of this work is to calculate cost - optimal charging strategies for energy storage units used as buffers . for this purpose , a new mathematical optimization method is presented that is applicable to general storage - related problems . due to a tremendous gain in efficiency of this method compared with standard solvers and proven optimality , calculations of complex problems as well as a high - resolution sensitivity analysis of multiple system combinations are feasible within a very short time . as an example technology , power - to - heat converters used in combination with thermal storage units are investigated in detail and optimal system configurations , including storage units with and without energy losses , are calculated and evaluated . the benefits of a problem - specific approach are demonstrated by the mathematical simplicity of our approach as well as the general applicability of the proposed method . energy storage , power - to - heat , control strategies , optimization , modelling
|
in order to make our description of the background more concise we first introduce two assumptions , and formulate the forward and inverse problems . _ assumption 1 : _ let be a bounded domain with a smooth boundary . also , let be a strictly positive hermitian matrix on with entries in and be either a real valued positive or complex valued function with positive real and imaginary parts on .further , let be a positive function on if is a real valued function and complex valued function on with non - positive imaginary part if is a complex valued function .in addition , if and have any discontinuities in their derivatives , we assume that they are away from .let and , where is the sobolev space of fractional differential order in the sense .now for given dirichlet data , we consider the following boundary value problem : +\omega^2\rho(x)u(x)=0 & \mbox{in } \omega,\\ u(x)=g(x ) & \mbox{on } \partial \omega . \end{array } \right.\ ] ] _ assumption 2 : _ when and are real valued function , we assume that is non - vibrating , that is with only admits a trivial solution .+ with the above given conditions in assumptions 1 and 2 on and , it is well known that there exists a unique solution to , where denotes the sobolev space of differential order in the sense . +_ the inverse problem : _ we consider the following problem . given the _interior measurement _ , the dirichlet boundary condition on , the coefficients on , identify in .+ in this paper we will mainly consider the above inverse problem whose goal is to identify .however at the end of the paper we will address a somewhat easier alternate inverse problem where we assume all of the conditions above except that is assumed known and our goal then will be to identify .natural questions , for both of our main inverse problem and the alternate inverse problem , are the uniqueness , stability and reconstruction of ( or ) from the measured data . hereour main result is a stability result for given ( or given ) .there are two major backgrounds for this inverse problems .the one is coming from a newly developed imaging modality called mre ( magnetic resonance elastography ) ( ) .it produces movies of shear waves induced by a single frequency excitation .once a mathematical model of these waves is determined , analysis and mathematical algorithms can be developed , based on the mathematical model .the numerical implementation of the algorithms produces diagnostically useful images with the goal of adding to noninvasive medical diagnostic capabilities . in ( ) , e.g. , the advance , through mre , to produce early diagnosis of liver stiffness and fibrosis is described .mre is in the general class of hybrid or coupled physics imaging technologies where two physical principles , here mri and elastic wave propagation are combined to obtain a richer data set from which to obtain diagnostically rich images .the above boundary value problem with equal to the identity matrix and is the simplest pde ( partial differential equation ) model which is used to describe a component of a shear wave inside human tissue . in, is the angular frequency of the time harmonic vibration , , applied to a human body where is time .when is equal to the identity matrix , and is real the functions , and are the density , storage modulus and loss modulus of human tissue .the other is coming from hydrology .this corresponds to the case , and is a real valued function and , then this inverse problem has been studied in hydrology .next we discuss some known results . 
in the two dimensional case alessandrini ( ) gave a hlder stability result for identifying by analyzing the critical set , of , , the solution to the forward problem .further , if the zero on the right hand side of the partial differential equation in is replaced by a positive , hlder continuous function , , on , richter ( ) gave a lipschitz stability result , in any dimension , for identifying by showing the non - degeneracy of and using the maximum principle . it should be noticed that the assumption , on the positivity of , is a very strong assumption and is only a sufficient condition to guarantee the non - degeneracy of .this assumption replaces the need to analyze the critical set .a number of other results have been established where the analysis of the critical set for a single data set is avoided by making additional hypotheses .marching and elliptic algorithms for recovering an unknown real coefficient , , when is real and known in the interior , , is given in the interior , and it is propagating there to one fixed direction which means that derivative of u in this direction does not vanish there are presented in ( ) .another method for addressing the problem of recovering , when data sets can have critical points , is the use of multiple measurements whose input can be controlled .if we can have multiple measurements and control the input , then the reconstruction of was first given by nakamura - jiang - nagayasu - cheng ( ) using complex geometric optic solutions and linking them to the input data by solving a cauchy problem .in that paper , the regularity assumptions on , are just , . when it can be assumed that multiple measurements are given , a more systematic analysis , for a wide class of hybrid inverse problems , was recently done by bal - uhlmann ( , ) . in this workthe given mathematical model is linearized and then : ( 1 ) a reconstruction scheme for identifying all the coefficients and of the linearized operator is presented ; and ( 2 ) a lipschitz stability result is given when the regularity assumptions on and are just hlder continuous on . in the paper presented here , we are concerned with extending alessandrini s result to the higher dimensional case when is a complex valued function and . even in the two dimensional case and if is a real valued function , the cases and are quite different .one can see that alessandrini s argument breaks down for the case .as far as we know , there is not any stability result known for the case where is complex valued , where can have discontinuous second derivatives and only one measurement , , as opposed to multiple measurements , is given .we will show local hlder stability of our inverse problem identifying for the case is a complex valued function on and _ piecewise analytic _ in by assuming that is continuous on and piecewise analytic in and is positive hermitian and analytic .this third assumption is given more precisely as follows .we denote by ( resp . ) the set of complex valued functions on which are analytic ( resp .piecewise analytic ) in and now clarify our definition of piecewise analytic and further assumptions on . 
+ ( piecewise analytic ) a function is piecewise analytic in if there exists a compact subset in consisting of a finite disjoint union of closed smooth analytic hypersurfaces such that is analytic in and is locally extendable as an analytic function from one side of across to the other side .that is , for any , we can find an open neighborhood of with consisting of two connected components and for which in ( resp . ) analytically extends to ._ assumption 3 : _ let , , and the locations of the singularities of and be the same .here means that all the entries of matrix belong to . with _ assumptions 1,2 , and 3_ it can be shown by using the theories of analytic pseudo - differential operators , the theories for coercive boundary value problems and the fact that when and are real we assume that we have a nonvibrating problem , that the unique solution to belongs to .this follows because the interior transmission problem can be transformed to a coercive boundary value problem for a system of equations by introducing the boundary normal coordinates in the neighborhood of ( see definition above ) so that we can reflect the component to the other side of where we have the component , and then apply the analytic hypo - ellipticity result given in chapters iii and v of to this coercive boundary value problem . now let be a sufficiently small constant .furthermore , we introduce the notion of an admissible pair for functions on .+ ( admissible pair ) a pair of functions on is said to be admissible if there exist exceptional angles with such that , for any , here denotes the -dimensional hausdorff measure .clearly the fact that follows from definition .in addition , we make the following remarks ._ remark : _ ( i ) here we give a sufficient condition in order that is an admissible pair : suppose there exists a non - negative constant satisfying then the pair becomes admissible .note also , as a particular case , real valued are always admissible .furthermore are always admissible if .( ii ) in addition to the lower bound on , given in the definition of admissible pair , we can also assume without loosing its generality that . to see thissuppose that are the exceptional angles of the admissible pair for which ( [ eq : def - addmissible ] ) holds .we will show that if , there exists a with so that can be eliminated .suppose that no such exists for some .then holds for every where we set for convenience .clearly we have which contradicts the fact . in order to state our main result , it is convenient to use the sobolev space of differential order in the sense with the norm .we also set .we denote by the solutions to the boundary value problem with and . then , we have our main result .( main theorem ) _ let and .let be an admissible pair and satisfy assumptions 1,2,3 , with and .then there exist constants and depending only on , , , and the coefficients of such that for any and of an admissible pair . 
furthermore ,if has an analytic smooth boundary and if is analytic in and all the coefficients of are analytic near , then we have the estimate _ ( [ eq::stability ] ) _ _ i__n which is replaced with .+ the succeeding sections are devoted to the proof of the main result and they are organized as follows .we first present a key identity and an associated estimate .then , we give : ( 1 ) statements about a tubular neighborhood of the critical set of the solution , , to , when is replaced by ; ( 2 ) the estimate of the -dimensional lebesgue measure of the tubular neighborhood ; and ( 3 ) a lower estimate of outside this tubular neighborhood .the proofs of these results are given in appendix .combining the three sets of estimates we finish proving the main result .finally we give the stability estimate for the alternate inverse problem which is to identify given , and .let and denote for a matrix . then, it is straightforward to establish the following key identity .+ ( key identity ) let assumptions 1 and 2 be satisfied with replaced by and replaced by . then, for any , that is with trace , we have here denotes a sum for .based on this key identity , we have the following fundamental estimate associated to the key identity .+ [ asso_estimate ] let assumptions 1,2 , and 3 be satisfied with replaced by and replaced by . then , there exists a constant depending only on , , and the coefficients of such that note that , as is a positive hermitian matrix , the term takes non - negative real values . before we present the proof , taking advantage of our assumptions 1 and 3 ,we first present several new sets and functions that we need for the estimate . consider the map of class defined by where is a unit conormal vector of at pointing to .then , as is compact , there exists an such that becomes a isomorphism between and an open neighborhood of .hence we have a family of relatively compact open subsets in satisfying the conditions below : 1 . and .2 . has a smooth boundary .3 . ( ) .we also have when .furthermore , using the definition of subanalytic sets given in appendix ( see also ( ) , we can find a family of relatively compact subanalytic open subsets in with .one choice for can be obtained by dividing into sufficiently small n - dimensional cubes .then can be selected to be a finite union of these cubes where each cube intersects and where the closure of the finite union is contained in .now divide the complex plane , , into proper sectors set and let be the lipschitz continuous function where and .in addition , using the definition of subanalytic functions in appendix , see also ( ) , is a subanalytic function on .furthermore it follows from the definition of that we have 1 . if and only if , 2 . if and only if , 3 . if and only if .hence , in particular , a point belongs to if and only if holds .we also define , for , ^+\wedge h),\ ] ] where ^+=\mbox{max}(m,\,0) ] * ). ] * ). ] , we have then , minimizing with respect to ] we have for some constant depending only on , , and the coefficients of . as satisfies the cone condition , by the gagliardo - nirenberg inequality, there exists a constant depending only on such that for any , we have where , and . combining and , we have where . 
therefore the estimate immediately follows from proposition [ asso_estimate ] .finally we show the last assertion of the theorem .since the solution becomes , in this case , piecewise analytic in an open neighborhood of and since itself is subanalytic , is a subanalytic function on and the subset is compact and subanalytic in .hence the same argument in this section can be applied to the case , and we have obtained the final estimate with . in this case the exponent on the right hand side can be left the same or also changed to with .in this section we will consider the alternate inverse problem as stated in the introduction .that is we consider the inverse problem of identifying , given , an interior measurement and the estimates , for some constant . for ,we denote by the solution to with and the constant .then as an easy application of the arguments in the previous sections , we have the following theorem .+ ( alternate inverse problem ) let assumptions 1,2 , and 3 be satisfied where is replaced by . then, there exist constants and depending only on , , , and the coefficients of such that for any and . furthermore, if has an analytic smooth boundary and if is analytic in and , are also analytic near , then we have the same estimate in which is replaced with .note that in this stability estimate , we do not have the term .we only point out new considerations that need to be taken in account in applying the arguments in the previous sections .the key identity we have to use is as follows . for any , then , by setting in the above key identity , the proof follows the same arguments as the proof of our main theorem .we briefly recall the properties of subanalytic subsets that are needed in our paper .reference is made to ( ) .let and be real analytic manifolds . in what follows ,all the manifolds are assumed to be countable at infinity .( a subanalytic subset in ) is said to be subanalytic at if there exist an open neighborhood of , real analytic compact manifolds and real analytic maps such that furthermore , is called a subanalytic subset in if is subanalytic at every point in . 1 .recall that a subset in is said to be semi -analytic if , for any point , there exists an open neighborhood of satisfying for a finite number of analytic functions on .here the binary relation is either or for each , .a semi - analytic subset ( in particular , an analytic subset ) in is subanalytic in .3 . let be a subset in .assume that , for any point in the closure of , there exists an open neighborhood of for which is subanalytic in .then is subanalytic in .4 . let be a subanalytic subset in .then its closure , its interior and its complement in are again subanalytic in .a finite union and a finite intersection of subanalytic subsets in are subanalytic in .let be a proper analytic map , that is , the inverse image of a compact subset is again compact . then , for any subanalytic subset in , the image is a subanalytic subset in .( graph of a subanalytic map ) let be a subset in , and let be a map .we say that is a subanalytic map on if the graph is a subanalytic subset in . 
furthermore ,if , is said to be a subanalytic function on .note that , if is a complex valued piecewise analytic function in an neighborhood of as defined in the body of this paper , then , and are subanalytic functions on .we assume in what follows .we first recall the following well - known result due to ojasiewicz ( see corollary 6.7 in ) .+ [ loja ] let be a continuous subanalytic function in an open subanalytic subset .let be the zero set of .for any compact set , there exist positive constants and satisfying for the next definition we recall here that a subset in is said to be locally closed if is a closed subset in an open subset of .let be a closed subanalytic subset in .+ ( subanalytic stratification of a closed subanalytic subset of x ) we say that a family of locally closed subsets is a subanalytic stratification of if the following conditions are satisfied . 1 . is a disjoint union of s .each is called a stratum .2 . is a connected subanalytic subset in and it is analytic smooth at each point in .if for , then holds .in particular , we have and .the family is locally finite in , that is , for any compact set in , only a finite number of strata intersect .for example , let and let us consider a closed triangle with its vertexes , and as .then has a subanalytic stratification consisting of 7-strata , the interior of the triangle , open segments , , and points , , .see figure 1 .+ our interest is in which is a compact subanalytic set with .it follows from theorem a in that there exists a subanalytic stratification where each stratum is an l - regular s - cell .definition in for the definition of an l - regular s - cell .furthermore , since is compact , the subanalytic stratum of is locally finite in implying that the index set is finite .the properties of the l - regular s - cell ( ) that we need are that it can be built up from a zero or one dimensional set , using orthogonal coordinates , a positive constant , and where the build up is through ordered pairs , , referred to as data , where and is a set of functions whose details are given below .the stratum , , is thus a kind of cylinder cell built up from a lower dimensional cell to a higher dimensional one ; see figure 2 . 1 .the set is a point or an open interval in .each is a locally closed subanalytic subset in and it is analytic smooth at each point in .the set is a set consisting of one continuous subanalytic function on or two continuous subanalytic functions and on with ( ) . furthermore , for any , is analytic on and has the estimate here denotes the differential 1-form of on and the cotangent bundle of is equipped with the metric induced from the standard one in .3 . for ,if consists of one function , then where , and otherwise we have here we set .summing up , as figure 2 shows , the l - regular s - cell is constructed successively from to using functions in , , . furthermore itself and each component of are sufficiently flat due to ( [ eq : estimate - boundary - function ] ) .our goal is to present a theorem that gives an open covering , whose measure we can estimate , of the zero level set of a subanalytic function .prior to presenting and proving this theorem we establish that we can extend a function in ( defined on ) to as a subanalytic lipschitz continuous function .we first establish the following lemma .+ let and be a compact subanalytic subset in .let be a subanalytic stratification of with each being an l - regular s - cell .further , let and be the data for .then is a lipschitz continuous function on . 
if we can show the lipschitz continuity of on , then the claim of the lemma follows from the continuity of on .hence it suffices to prove the claim on . since itself is an l - regular s - cell in , by 8 .proposition in , there exists a positive constant for which any points and in are joined by a smooth curve in with let and be points in , and let ( ) be such a curve in .then we have here we identified with a tangent vector of the manifold at .hence the result follows from ( [ eq : estimate - curve - lenght ] ) .we construct the family recursively . for , we set if consists of one point , otherwise we define , for ( ) , clearly the conditions are satisfied for .suppose that has been constructed .we first define by which is subanalytic and lipschitz continuous by the induction hypothesis .since and , are subanalytic and lipschitz continuous , also becomes a subanalytic and lipschitz continuous map in both cases .we set .then is a subanalytic and lipschitz continuous map as a composition of maps that have the same properties , and for clearly holds by the construction .hence we have obtained the desired family of maps .let .then is a subanalytic and lipschitz continuous function on and its restriction to coincides with .therefore , in what follows , we assume that all the functions belonging to are defined in and they are subanalytic and lipschitz continuous there for any .it follows from that there exists such that consists of only one function . in fact, otherwise , the becomes an open subset in which contradicts .let be the largest one of those s .then we define the subanalytic open subset ( ) by clearly ( ) is an open subanalytic subset and it contains . for the other , we can construct a subanalytic open neighborhood of in the same way .+ by setting with defined in the above paragraph , we have the following covering theorem .+ [ critical_set ] let be a compact subanalytic subset in with . then there exists a family of subanalytic open neighborhoods of and positive constants , for which we have the following. we will establish that has the desired properties described in the statement of the theorem .since each is subanalytic open and contains , their union becomes a subanalytic open neighborhood of .the first claim 1 . of the theoremis easily seen .in fact , we have since the number of the strata is finite , the claim follows from this .we now establish claim 2 . of the theorem .suppose that the claim were false .then there exists a sequence of positive real numbers in ] and .. then belongs to both and .this contradicts the fact that is an open neighborhood of the compact set .therefore we assume , i.e. , ( ) in what follows .let be a point in with . by taking a subsequence, we may assume ( ) for some .let ( ) denote the canonical projection defined by let be the index determined before equation ( [ eq : def - u_alpha ] ) .then we have and note that , since and consists of only one function , it follows from the construction of described above that the relation holds . sufficiently large s .since the function is lipschitz continuous , we also have for a positive constant .therefore , by ( [ eq : xxx_pi_estimate_1 ] ) and ( [ eq : xxx_lipschitz_h ] ) , we obtain summing up , by ( [ eq : xxx_pi_estimate_2 ] ) , ( [ eq : xxx_difference_larger_eta ] ) and ( [ eq : xxx_difference_p_and_q ] ) , we get from which we have this contradicts ( [ eq : contradict - conclusion ] ) if tends to , and hence , the claim 2 . must be true . 
the proof has been completed .[ prop : area_finite ] let be a relatively compact open subanalytic subset in and a real valued continuous subanalytic function on .suppose that there exists a subanalytic stratification of such that is analytic in and analytically extends to an open neighborhood of for any with .then there exists a finite subset of and a positive constant satisfying for any .furthermore , let be a closed subanalytic subset in with .then here .for any , we set . as we have and is a finite set , it suffices to show the corresponding claim on for each with .hence , in what follows , we assume that is analytic in and analytically extendable to an open neighborhood of . if is a constant function in , then we take , for which the claim clearly holds .therefore we may assume that is not constant and , as a result , we have for any .set and .then is a subanalytic subset in as is proper on and it is a measure - zero set by sard s theorem .hence consists of finite points in and we take it as .let be the canonical projection by excluding the coordinate .we set is the unit vector with its -th component being .note that is subanalytic in and holds .furthermore we set which is also subanalytic in .then , for any , since on , we have and is an analytic smooth hypersurface in . by these observations it suffices to show .define where denotes the number of the connected components of a set .note that these numbers certainly existbecause the direct images and of constructible sheaves and are again constructible sheaves by proposition 8.4.8 ( ) and hold for ( see also chapter viii in ( ) for the definition of a constructible sheaf ) . as is a finite map ,that is , is a proper map and consists of finite points for every , there exists a subanalytic stratification of such that becomes a finite covering over for each .note that the stratification consists of a finite number of strata .furthermore the number of connected components of is at most , which can be proved as follows : as is connected , it suffices to show that the number of points ( ) is at most .let be the line .we first assume that is connected , i.e. , . then thereexist mutually distinct points in such that , in each open interval of , is strictly increasing , strictly decreasing or constant . as intersects transversally , never intersects an interval where is constant .since the number of intervals in which is non - constant is at most and since intersects the closure of such an interval at one point if exists , we conclude that consists of at most points . by applying the same argument to each connected component of , we can prove the claim for the case .for with , since is a finite covering over , we have which implies .on the other hand , for with , we have hence we have this shows the first claim of the proposition .finally we show the last claim .clearly and hold .hence we have , in , due , for example , to the second theorem in this appendix .then the last claim of the proposition immediately follows from ( [ eq : last - vol - estimate ] ) .r. muthupillai , d. lomas , r. rossman , j. greenlead , a. manduca , and r. ehman , magnetic resonance elastography by direct visualization of propagating acoustic strain waves , science , 269 ( 1995 ) 18541857 .k. lin , j. mclaughlin , n. zhang , `` log - elastographic and non - marching full inversion schemes for shear modulus recovery from single frequency elastographic data '' , inverse problems , vol 25(7 ) , july , 2009 .
|
an inverse problem to identify unknown coefficients of a partial differential equation from a single interior measurement is considered. the equation considered in this paper is a strongly elliptic second order scalar equation which can have complex coefficients in a bounded domain with boundary; a single interior measurement means that we know a given solution of the equation in this domain. the equation includes some model equations arising from acoustics, viscoelasticity and hydrology. we assume that the coefficients are piecewise analytic. our major result is a local hölder stability estimate for identifying the unknown coefficients. if the unknown coefficient is a complex coefficient in the principal part of the equation, we assume a condition which we call the admissibility assumption for the real part and imaginary part of the difference of the two complex coefficients. this admissibility assumption is automatically satisfied if the complex coefficients are real valued. for identifying either the real coefficient in the principal part or the coefficient of the 0-th order of the equation, the major result implies global uniqueness for the identification.
|
the weibull distribution is a popular distribution widely used for analyzing lifetime data .we work with the beta weibull ( bw ) distribution because of the wide applicability of the weibull distribution and the fact that it extends some recent developed distributions .this generalization may attract wider application in reliability and biology .we derive explicit closed form expressions for the distribution function and for the moments of the bw distribution .an application is illustrated to a real data set with the hope that it will attract more applications in reliability and biology , as well as in other areas of research .the bw distribution stems from the following general class : if denotes the cumulative distribution function ( cdf ) of a random variable then a generalized class of distributions can be defined by for and , where is the incomplete beta function ratio , is the incomplete beta function , is the beta function and is the gamma function .this class of generalized distributions has been receiving increased attention over the last years , in particular after the recent works of eugene et al .( 2002 ) and jones ( 2004 ) .eugene et al . ( 2002 ) introduced what is known as the beta normal distribution by taking in ( [ betadist ] ) to be the cdf of the normal distribution with parameters and .the only properties of the beta normal distribution known are some first moments derived by eugene et al .( 2002 ) and some more general moment expressions derived by gupta and nadarajah ( 2004 ) .more recently , nadarajah and kotz ( 2004 ) were able to provide closed form expressions for the moments , the asymptotic distribution of the extreme order statistics and the estimation procedure for the beta gumbel distribution .another distribution that happens to belong to ( [ betadist ] ) is the log ( or beta logistic ) distribution , which has been around for over 20 years ( brown et al . , 2002 ) , even if it did not originate directly from ( [ betadist ] ) .while the transformation ( [ betadist ] ) is not analytically tractable in the general case , the formulae related with the bw turn out manageable ( as it is shown in the rest of this paper ) , and with the use of modern computer resources with analytic and numerical capabilities , may turn into adequate tools comprising the arsenal of applied statisticians .the current work represents an advance in the direction traced by nadarajah and kotz ( 2006 ) , contrary to their belief that some mathematical properties of the bw distribution are not tractable .thus , following ( [ betadist ] ) and replacing by the cdf of a weibull distribution with parameters and , we obtain the cdf of the bw distribution for , , , and .the corresponding probability density function ( pdf ) and the hazard rate function associated with ( [ bwdist ] ) are : ^{a-1 } , \ ] ] and ^{a-1 } } { b_{1 - { \rm exp}\{-(\lambda x)^c\}}(a , b ) } , \ ] ] respectively .simulation from ( [ bwpdf ] ) is easy : if is a random number following a beta distribution with parameters and then will follow a bw distribution with parameters and . some mathematical properties of the bw distribution are given by famoye et al .( 2005 ) and lee et al .. 
graphical representation of equations ( [ bwpdf ] ) and ( [ bwhaz ] ) for some choices of parameters and , for fixed and , are given in figures [ fig1 ] and [ fig2 ] , respectively .it should be noted that a single weibull distribution for the particular choice of the parameters and is here generalized by a family of curves with a variety of shapes , shown in these figures .the rest of the paper is organized as follows . in section 2 , we obtain some expansions for the cdf of the bw distribution , and point out some special cases that have been considered in the literature . in section 3 , we derive explicit closed form expressions for the moments and present skewness and kurtosis for different parameter values .section 4 gives an expansion for its moment generating function . in section 5, we discuss the maximum likelihood estimation and provide the elements of the fisher information matrix. in section 6 , an application to real data is presented , and finally , in section 7 , we provide some conclusions . in the appendix , two identities needed in section 3 are derived . the probability density function ( [ bwpdf ] ) of the bw distribution , for several values of parameters and , width=480 ] the hazard function ( [ bwhaz ] ) of the bw distribution , for several values of parameters and , width=480 ]the bw distribution is an extended model to analyze more complex data and generalizes some recent developed distributions . in particular, the bw distribution contains the exponentiated weibull distribution ( for instance , see mudholkar et al . , 1995 , mudholkar and hutson , 1996 , nassar and eissa , 2003 , nadarajah and gupta , 2005 and choudhury , 2005 ) as special cases when .the weibull distribution ( with parameters and ) is clearly a special case for . when , ( [ bwpdf ] ) follows a weibull distribution with parameters and .the beta exponential distribution ( nadarajah and kotz , 2006 ) is also a special case for . in what follows ,we provide two simple formulae for the cdf ( [ bwdist ] ) , depending on whether the parameter is real non - integer or integer , which may be used for further analytical or numerical analysis . starting from the explicit expression for the cdf ( [ bwdist ] ) ^{a-1 } dy,\ ] ] the change of variables yields is real non - integer we have it follows that finally , we obtain for positive real non - integer , the expansion ( [ bwdistexp ] ) may be used for further analytical and/or numerical studies . 
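as noted in the introduction, simulation from the bw distribution only requires a beta variate pushed through the inverse of the weibull-type baseline cdf appearing in ( [bwdist] ). a minimal sketch in python (the function name and the use of numpy are ours):

    import numpy as np

    def rbw(n, a, b, lam, c, seed=None):
        # v ~ beta(a, b); inverting 1 - exp(-(lam*x)**c) = v, the argument of the
        # incomplete beta function ratio in the cdf (bwdist), gives
        # x = (-log(1 - v))**(1/c) / lam
        rng = np.random.default_rng(seed)
        v = rng.beta(a, b, size=n)
        return (-np.log1p(-v)) ** (1.0 / c) / lam

such a sampler is convenient, e.g., for monte carlo checks of the closed-form moment expressions derived in the following sections.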
for integer only need to change the formula used in ( [ expreal ] ) to the binomial expansion to give when both and are integers , the relation of the incomplete beta function to the binomial expansion gives ^{j}{\rm exp}\{-(n - j)(\lambda x)^c\}.\ ] ] it can be found in the wolfram functions site that for integer and for integer , then , if is integer , we have another equivalent form for ( [ bwdistexpint ] ) ^{j}.\ ] ] for integer values of , we have ^a}{\gamma(a ) } \sum_{j=0}^{b-1}\frac{\gamma(a+j)}{j!}{\rm exp}\{-j(\lambda x)^c\}.\ ] ] finally , if and , we have the particular cases ( [ part2 ] ) and ( [ part3 ] ) were discussed generally by jones ( 2004 ) , and the expansions ( [ bwdistexp])-([part4 ] ) reduce to nadarajah and kotz s ( 2006 ) results for the beta exponential distribution by setting .clearly , the expansions for the bw density function are obtained from ( [ bwdistexp ] ) and ( [ bwdistexpint ] ) by simple differentiation .hence , the bw density function can be expressed in a mixture form of weibull density functions .let be a bw random variable following the density function ( [ bwpdf ] ) .we now derive explicit expressions for the moments of .we now introduce the following notation ( for any real and and positive ) the change of variables immediately yields .on the other hand , the change of variables gives the following relation ^{a-1 } dy = \frac{\lambda^{-\gamma}}{c } s_{\frac{\gamma}{c},b , a},\ ] ] from which it follows that or , equivalently , for any real relating to the generalized moment of the beta weibull .first , we consider the integral ( [ sdba ] ) when is an integer .let be a random variable following the beta distribution with pdf and .further , let and be the cdf s of and , respectively .it is easy to see that .further , by the properties of the lebesgue - stiltjes integral , we have thus , the values of for integer values of can be found from the moments of if they are known .however , the moment generating function ( mgf ) of can be expressed as this formula is well defined for .however , we are only interested in the limit and therefore this expression can be used for the current purpose .we can write from equations ( [ rel2 ] ) and ( [ sinteiro ] ) for any positive integer we obtain a general formula as particular cases , we can see directly from ( [ sinteiro ] ) that and ,\ ] ] and by using ( [ rel2 ] ) we find and the same results can also be obtained directly from ( [ kcmoment ] ) .note that the formula for matches the one just given after equation ( 12 ) .since the bw distribution for reduces to the beta exponential distribution , the above formulae for and reduce to the corresponding ones obtained by nadarajah and kotz ( 2006 ) .our main goal here is to give the moment of for every positive integer .in fact , in what follows we obtain the generalized moment for every real which may be used for further theoretical or numerical analysis . to this end, we need to obtain a formula for that holds for every positive real . 
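before stating the identity used for the closed-form moments, it is easy to set up a brute-force numerical check of the moments (and of the skewness and kurtosis curves discussed below); the sketch simply integrates x^k f(x) by quadrature, assuming the same density as in the previous snippets.

```python
import numpy as np
from scipy.special import beta as beta_fn
from scipy.integrate import quad

def bw_pdf(x, a, b, c, lam):
    z = 1.0 - np.exp(-(lam * x) ** c)
    return (c * lam**c / beta_fn(a, b)) * x**(c - 1) * np.exp(-b * (lam * x) ** c) * z**(a - 1)

def bw_moment(k, a, b, c, lam):
    """k-th raw moment E[X^k] by quadrature; k may be any positive real."""
    val, _ = quad(lambda x: x**k * bw_pdf(x, a, b, c, lam), 0.0, np.inf)
    return val

a, b, c, lam = 2.0, 1.5, 1.2, 0.8                        # illustrative values
m = [bw_moment(k, a, b, c, lam) for k in range(1, 5)]

# central moments -> skewness and kurtosis (the quantities plotted against a and c below)
mu, var = m[0], m[1] - m[0]**2
mu3 = m[2] - 3*m[0]*m[1] + 2*m[0]**3
mu4 = m[3] - 4*m[0]*m[2] + 6*m[0]**2*m[1] - 3*m[0]**4
print("mean", mu, "skewness", mu3 / var**1.5, "kurtosis", mu4 / var**2)
```

such a check is useful when verifying the explicit expressions derived from the identities below for particular parameter choices.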
in the appendixwe show that for any the following identity holds for positive real non - integer and that when is positive integer is satisfied .it now follows from ( [ identity ] ) and ( [ rel2 ] ) that the generalized moment of for positive real non - integer can be written as when is integer , we obtain when , follows a weibull distribution and ( [ bwmom2 ] ) becomes which is precisely the moment of a weibull distribution with parameters and .equations ( [ kcmoment ] ) , ( [ bwmom ] ) and ( [ bwmom2 ] ) represent the main results of this section , which may serve as a starting point for applications for particular cases as well as further research .+ skewness of the bw distribution as a function of parameter , for several values of parameter , width=480 ] kurtosis of the bw distribution as a function of parameter , for several values of parameter , width=480 ] skewness of the bw distribution as a function of parameter , for several values of parameter , width=480 ] kurtosis of the bw distribution as a function of parameter , for several values of parameter , width=480 ] graphical representation of skewness and kurtosis for some choices of parameter as function of parameter , and for some choices of parameter as function of parameter , for fixed and , are given in figures [ fig3 ] and [ fig4 ] , and [ fig5 ] and [ fig6 ] , respectively .it can be observed from figures 3 and 4 that the skewness and kurtosis curves cross at , and from figures 5 and 6 that both skewness and kurtosis are independent of for .in addition , it should be noted that the weibull distribution ( equivalent to bw for ) is represented by as single point on figures 3 - 6 .we can give an expansion for the mgf of the bw distribution as follows ^{a-1 } dx\\ & = & \frac{c\lambda^c}{b(a , b)}\sum_{r=0}^\infty \frac{t^r}{r ! } \int_0^\infty x^{r+c-1 } { \rm exp}\{-b(\lambda x)^c\}[1-{\rm exp}\{-(\lambda x)^c\}]^{a-1 } dx\\ & = & \frac{c\lambda^c}{b(a , b)}\sum_{r=0}^\infty \frac{t^r}{r ! } \frac{\lambda^{-(r+c)}}{c}s_{r/ c+1,b , a } , \end{aligned}\ ] ] where the last expression comes from ( [ rel1 ] ) . for positive real non - integer using ( [ identity ] ) we have and for integer using ( [ identity2 ] ) we obtain note that the expression for the mgf obtained by choudhury ( 2005 ) is a particular case of ( [ bwmgf ] ) , when , and . when , we have substituting in the above integral yields and using the definition of the beta function in ( [ bemgf ] ) we find which is precisely the expression ( 3.1 ) obtained by nadarajah and kotz ( 2006 ) .let be a random variable with the bw distribution ( [ bwpdf ] ) .the log - likelihood for a single observation of is given by .\end{aligned}\ ] ] the corresponding components of the score vector are : and the maximum likelihood equations derived by equating ( [ score3])-([score1 ] ) to zero can be solved numerically for and .we can use iterative techniques such as a newton - raphson type algorithm to obtain the estimates of these parameters .it may be worth noting from , that ( [ score4 ] ) yields which agrees with the previous calculations . for interval estimation of and hypothesis tests , the fisher information matrix is required . 
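before turning to the information matrix, a concrete counterpart to the estimation procedure described above can be sketched as follows. it maximizes the (assumed) bw log-likelihood numerically with a derivative-free optimizer instead of the newton-raphson iteration mentioned in the text; the log-scale parameterization and the starting values are ad hoc choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def bw_negloglik(log_theta, x):
    """negative log-likelihood of the (assumed) bw density; parameters are passed
    on the log scale to enforce a, b, c, lambda > 0."""
    a, b, c, lam = np.exp(log_theta)
    u = (lam * x) ** c
    ll = (np.log(c) + c * np.log(lam) + (c - 1) * np.log(x)
          - b * u + (a - 1) * np.log1p(-np.exp(-u)) - betaln(a, b))
    return -np.sum(ll)

def bw_fit(x):
    x = np.asarray(x, float)
    start = np.log([1.0, 1.0, 1.0, 1.0 / x.mean()])      # crude, scale-aware starting point
    res = minimize(bw_negloglik, start, args=(x,), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-9})
    return np.exp(res.x), -res.fun     # (a, b, c, lambda) estimates, maximized log-likelihood
```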
for expressing the elements of this matrixit is convenient to introduce an extension of the integral ( [ sdba ] ) so that we have as before , let , where is a random variable following the beta distribution , then & = & \int_{-\infty}^\infty x^{d-1 } ( \log x)^e df_w(x)\\ & = & \frac{1}{b(a , b)}\int_0^\infty x^{d-1 } e^{bx } ( 1-e^{-x})^{a-1}(\log x)^e dx\\ & = & \frac{t_{d , b , a , e}}{b(a , b)}.\end{aligned}\ ] ] hence , the equation ,\ ] ] relates to expected values . to simplify the expressions for some elements of the information matrix , it is useful to note the identities and which can be easily proved .explicit expressions for the elements of the information matrix , obtained using maple and mathematica algebraic manipulation software ( we have used both for double checking the obtained expressions ) , are given below in terms of the integrals ( [ sdba ] ) and ( [ tdbae ] ) : and the integrals and in the information matrix are easily numerically determined using maple and mathematica for any and . under conditions that are fulfilled for parameters in the interior of the parameter space but not on the boundary ,the asymptotic distribution of the maximum likelihood estimates and is multivariate normal .the estimated multivariate normal distribution can be used to construct approximate confidence intervals and confidence regions for the individual parameters and for the hazard rate and survival functions .the asymptotic normality is also useful for testing goodness of fit of the bw distribution and for comparing this distribution with some of its special sub - models using one of the three well - known asymptotically equivalent test statistics - namely , the likelihood ratio ( lr ) statistic , wald and rao score statistics .we can compute the maximum values of the unrestricted and restricted log - likelihoods to construct the lr statistics for testing some sub - models of the bw distribution .for example , we may use the lr statistic to check if the fit using the bw distribution is statistically `` superior '' to a fit using the exponentiated weibull or weibull distributions for a given data set .mudholkar et al .( 1995 ) in their discussion of the classical bus - motor - failure data , noted the curious aspect in which the larger ew distribution provides an inferior fit as compared to the smaller weibull distribution .in this section we compare the results of fitting the bw and weibull distribution to the data set studied by meeker and escobar ( 1998 , p. 383 ) , which gives the times of failure and running times for a sample of devices from a field - tracking study of a larger system . at a certain point in time , 30 units were installed in normal service conditions .two causes of failure were observed for each unit that failed : the failure caused by an accumulation of randomly occurring damage from power - line voltage spikes during electric storms and failure caused by normal product wear .the times are : 275 , 13 , 147 , 23 , 181 , 30 , 65 , 10 , 300 , 173 , 106 , 300 , 300 , 212 , 300 , 300 , 300 , 2 , 261 , 293 , 88 , 247 , 28 , 143 , 300 , 23 , 300 , 80 , 245 , 266 .the maximum likelihood estimates and the maximized log - likelihood for the bw distribution are : while the maximum likelihood estimates and the maximized log - likelihood for the weibull distribution are : the likelihood ratio statistic for testing the hypothesis ( namely , weibull versus bw distribution ) is then , which indicates that the weibull distribution should be rejected . 
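the likelihood-ratio comparison just described can be reproduced along the following lines. the sketch treats all thirty recorded times as observed failures, as the fit above does, and tests a = b = 1 (weibull) inside the bw family with two degrees of freedom; it is only a sketch of the computation and is not meant to reproduce the exact estimates quoted in the text.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln
from scipy.stats import chi2

times = np.array([275, 13, 147, 23, 181, 30, 65, 10, 300, 173, 106, 300, 300, 212, 300,
                  300, 300, 2, 261, 293, 88, 247, 28, 143, 300, 23, 300, 80, 245, 266], float)

def negloglik(log_theta, x, weibull_only=False):
    if weibull_only:                       # restricted model: a = b = 1
        c, lam = np.exp(log_theta)
        a, b = 1.0, 1.0
    else:
        a, b, c, lam = np.exp(log_theta)
    u = (lam * x) ** c
    ll = (np.log(c) + c * np.log(lam) + (c - 1) * np.log(x)
          - b * u + (a - 1) * np.log1p(-np.exp(-u)) - betaln(a, b))
    return -ll.sum()

def fit(x, weibull_only=False):
    k = 2 if weibull_only else 4
    start = np.log([1.0] * (k - 1) + [1.0 / x.mean()])
    res = minimize(negloglik, start, args=(x, weibull_only), method="Nelder-Mead",
                   options={"maxiter": 50000, "xatol": 1e-9, "fatol": 1e-9})
    return np.exp(res.x), -res.fun

(_, ll_w), (_, ll_bw) = fit(times, weibull_only=True), fit(times)
lr = 2.0 * (ll_bw - ll_w)                  # tests H0: a = b = 1 (weibull) within the bw family
print(lr, chi2.sf(lr, df=2))               # compare with a chi-square on 2 degrees of freedom
```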
as an alternative test we use the wald statistic .the asymptotic covariance matrix of the maximum likelihood estimates for the bw distribution , which comes from the inverse of the information matrix , is given by the resulting wald statistic is found to be , again signalizing that the bw distribution conform to the above data . in figure 7we display the pdf of both weibull and bw distributions fitted and the data set , where it is seen that the bw model captures the aparent bimodality of the data . the probability density function ( [ bwpdf ] ) of the fitted bw and weibull distributions , width=480 ]the weibull distribution , having exponential and rayleigh as special cases , is a very popular distribution for modeling lifetime data and for modeling phenomenon with monotone failure rates .in fact , the bw distribution represents a generalization of several distributions previously considered in the literature such as the exponentiated weibull distribution ( mudholkar et al . , 1995 ,mudholkar and hutson , 1996 , nassar and eissa , 2003 , nadarajah and gupta , 2005 and choudhury , 2005 ) obtained when .the weibull distribution ( with parameters and ) is also another particular case for and .when , the bw distribution reduces to a weibull distribution with parameters and .the beta exponential distribution is also an important special case for .the bw distribution provides a rather general and flexible framework for statistical analysis .it unifies several previously proposed families of distributions , therefore yielding a general overview of these families for theoretical studies , and it also provides a rather flexible mechanism for fitting a wide spectrum of real world data sets .we derive explicit expressions for the moments of the bw distribution , including an expansion for the moment generating function .these expressions are manageable and with the use of modern computer resources with analytic and numerical capabilities , may turn into adequate tools comprising the arsenal of applied statisticians .we discuss the estimation procedure by maximum likelihood and derive the information matrix .finally , we demonstrate an application to real data .in what follows , we derive the identities ( [ identity ] ) and ( [ identity2 ] ) .we start from which yields and substituting gives for real non - integer , we have also , for real and real , we have hence , and , finally , we arrive at which represents the identity ( [ identity ] ) . 20 brown , b. w. , spears , f. m. and levy , l. b. ( 2002 ) .the log : a distribution for all seasons ._ , 17 , 47 - 58 .choudhury , a. , 2005 . a simple derivation of moments of the exponentiated weibull distribution . _ metrika _ , 62 , 17 - 22 .eugene , n. , lee , c. and famoye , f. , 2002 .beta - normal distribution and its applications .statist . - theory and methods _ , 31 , 497 - 512 .famoye , f. , lee , c. and olumolade , o. , 2005 . the beta - weibull distribution . _ j. statistical theory and applications _ , 4 , 121 - 136 .gupta , r. d. and kundu , d. , 2001 .exponentiated exponential family : an alternative to gamma and weibull distributions ._ biometrical journal _ , 43 , 117 - 130 .gupta , a. k. and nadarajah , s. , 2004 . on the moments of the beta normal distribution .statist . - theory and methods _, 33 , 1 - 13 . jones , m. c. , 2004 .families of distributions arising from distributions of order statistics ._ test _ , 13 , 1 - 43 .lawless , j. f. , 1982 . _ statistical models and methods for lifetime data_. john wiley , new york .lee , c. 
, famoye , f. and olumolade , o. , 2007 .beta - weibull distribution : some properties and applications to censored data ._ j. modern applied statistical methods , _ 6 , 173 - 186 .linhart , h. and zucchini , w. , 1986 ._ model selection ._ john wiley , new york .meeker , w. q. and escobar , l. a. , 1998 . _ statistical methods for reliability data_. john wiley , new york .mudholkar , g. s. , srivastava , d. k. and freimer , m. , 1995 .the exponentiated weibull family ._ technometrics _ , 37 , 436 - 45 .mudholkar , g. s. and hutson , a. d. , 1996 .the exponentiated weibull family : some properties and a flood data application . _ commun . statist . -theory and methods _, 25 , 3059 - 3083 .nadarajah , s. and gupta , a. k. , 2005 . on the moments of the exponentiated weibull distribution .statist . - theory and methods _, 34 , 253 - 256 .nadarajah , s. and kotz , s. , 2004 . the beta gumbel distribution ._ , 10 , 323 - 332 .nadarajah , s. and kotz , s. , 2006 . the beta exponential distribution . _ reliability engineering and system safety _, 91 , 689 - 697 .nassar , m. m. and eissa , f. h. , 2003 . on the exponentiated weibull distribution .statist . - theory and methods _, 32 , 1317 - 1336 .
|
the beta weibull distribution was introduced by famoye et al . ( 2005 ) and studied by these authors . however , they do not give explicit expressions for the moments . we now derive explicit closed form expressions for the cumulative distribution function and for the moments of this distribution . we also give an asymptotic expansion for the moment generating function . further , we discuss maximum likelihood estimation and provide formulae for the elements of the fisher information matrix . we also demonstrate the usefulness of this distribution on a real data set . + _ keywords _ : beta weibull distribution , fisher information matrix , maximum likelihood , moment , weibull distribution .
|
the directed assembly of colloidal particles enables the design of novel soft materials with bespoke 3d architectures .the desired assembly route can be selected by adjusting the interparticle interactions .for example , the electrostatic interaction between oppositely charged particles can be tuned to obtain ionic colloidal crystals rather than irreversible aggregation .an alternative approach employs templates to guide particle assembly towards a target structure . for instance , sedimentation of microparticles onto structured solid templates has been used to direct colloidal - crystal assembly and binary crystals of nanoparticles have been grown via liquid air interfacial assembly . in both cases ,the interaction between the assembling particles and the template is crucial : pattern lattice mismatches of % already cause crystal defects and liquid subphase properties significantly affect crystal quality .a startling case of liquid templating is the formation of bicontinuous pickering emulsions , i.e. bicontinuous interfacially jammed emulsion gels or bijels ( fig .[ fig : figure_introduction](a ) ) , which have been suggested for applications in fuel cells , microfluidics and tissue engineering .bijel formation typically proceeds via spinodal demixing of a binary liquid containing colloidal particles ( fig .[ fig : figure_introduction](b ) ) , which can arrest the phase separation by forming a jammed monolayer at the liquid liquid interface . as in the cases discussed above , template particle interactions are essential : bijels are only formed if the particles are ( almost ) neutrally wetting , otherwise emulsion droplets are formed .the parameter that quantifies this interaction is the contact angle , which is a measure of the particle s position relative to the liquid interface : is neutral wetting ( fig .[ fig : figure_introduction](c ) ) .unfortunately , tuning the mean value of is non - trivial and restraining its variance is harder still , making bijel formation challenging . :final channel width .( b ) coexistence curve for the water lutidine ( w l ) system ( cp : critical point ) .vertical arrow : bijel formation , i.e. a homogeneous mixture of w l at the critical weight fraction ( x ) is heated from room temperature to ( or ) .spinodal demixing results in two phases ( a / b ) , with compositions given by the horizontal tie - lines .( c ) schematic of the contact angle ( for ).[fig : figure_introduction ] ] ostensibly , reducing particle size given a fixed final bijel - channel width ( fig . [ fig : figure_introduction](a ) ) would only make matters worse , as scaling down in a close - packed monolayer of particles with _ fixed _ requires a commensurate reduction in the local radius of curvature of the interface . in other words , for a given non - neutrality , one might expect smaller particles to locally demand a more strongly curved interface and hence be more disruptive to bicontinuity on a chosen scale .however , this ignores the particle - size dependence of the stiffness of the particle - laden liquid interface , which might specifically aid small particles in overcoming off - neutral wetting . 
in this paper, we experimentally explore the effect of particle size on bijel formation .we find that bijels are formed more robustly when nanoparticles rather than microparticles are used : nanospheres allow minimum heating rates two orders of magnitude slower than microspheres , with the latter stabilizing droplet emulsions rather than bijels at slow rates .we discuss our results in the context of mechanical leeway , i.e. interfacial particles that are smaller lead to a less rigid interface between the two liquid phases , resulting in a smaller driving force towards disruptive curvature .finally , we discuss the implications of leeway mechanisms in the ( directed ) self - assembly of functional formulations based on particle - template or even particle - particle interactions .for particle synthesis , tetraethyl orthosilicate ( teos , % , aldrich ) , 35% ammonia solution ( reagent grade , fisher scientific ) , ethanol absolute ( vwr chemicals ) , fluorescein isothiocynate ( fitc , 90% isomer 1 , aldrich ) and ( 3-aminopropyl)triethoxysilane ( aptes , 99% , aldrich ) were used as received . for bijel preparation ,2,6-lutidine ( % , aldrich ) and nile red ( aldrich ) were used as received ; distilled water was run through a milli - q ( millipore ) filtration system to perform deionization ( to a resistivity of at least ) . here , we formed ( bicontinuous ) pickering emulsions by spinodal demixing of the binary liquid water - lutidine , heated at various rates in the presence of colloidal particles .note that the water - lutidine ( w - l ) interfacial tension is temperature - dependent and orders of magnitude lower than that of typical water - alkane systems . according to ref . , ranges from at ( just above the lower critical solution temperature of ) to at . during slow heating at , it takes about 6 s to get from to and about 12 min to get to .the particles used in this study were synthesized using the stber method , modified to include the dye fitc via the linking molecule aptes .for the microparticles ( mps ) , a dye mixture of 0.584 g aptes , 0.107 g fitc and 4.0 ml ethanol was prepared overnight by stirring .the following day , a reaction mixture of 1.5 l ethanol , 186 ml 35% ammonia solution and 60 ml teos was prepared , and the dye mixture added .the entire reaction mixture was kept in a refrigerator for 24 hours at .this resulted in particles with a radius of as measured by dynamic light scattering ( dls ) and according to transmission electron microscopy ( tem , fig .[ fig : figure_particle_size](a ) ) . and( b ) 200 nm.[fig : figure_particle_size ] ] the nanoparticles ( nps ) were synthesized in a similar fashion to the mps , except the reaction temperature was and the concentration of dye mixture was increased to take account of the increase in surface - to - volume ratio which accompanies a decrease in particle radius .this is an important consideration , as it has been shown that the presence of aptes on the silica surface is crucial for meeting the neutral - wetting requirement in the w - l system it has been suggested that the surface decorations act to disrupt the wetting layer of lutidine which spontaneously forms around the particles when approaching the phase separation temperature . 
for the nps , dls returned a particle radius of and tem returned with a polydispersity of 15% ( fig .[ fig : figure_particle_size](b ) ) .we have confirmed that the nps ( ) have a lower density than the mp ( ) ( density meter , anton paar , dma 4500 ) , presumably due to the higher dye concentration by volume , which could lead to enhanced shrinkage in the vacuum of the tem .the decrease in dls particle size closely matches the increase in aptes concentration compared to the mp synthesis , so the nps and mps are expected to have identical surface chemistries . to remove excess aptes and fitc from the synthesis product ,the particles were washed by repeated centrifugation / redispersion : ethanol then water for the mps and ethanol then water for the nps .subsequently , the particles were pre - dried at room temperature in a fume hood and ground with a mortar and pestle .prior to sample preparation , particles were dried at 20 mbar and ( no more than per vial and no more than 3 vials at the same time ) .this removes surface - bound water and may cause moderate dehydroxylation of the silica surface .the drying time was tuned to optimize bijel quality as assessed by visual inspection of confocal micrographs ; dried particles were stored in a desiccator in the presence of a silica gel .first , dried particles were dispersed in deionized water by ultrasonication ( sonics vibracell ) .the mps were sonicated for minutes at 8 w with s of vortex mixing in between . to ensure proper redispersion, nps were additionally sonicated for minutes at 8 w and vortex mixed for s. lutidine was then added to give a mixture with a critical composition , i.e. a mass ratio of w : l = 72:28 ( fig . [ fig : figure_introduction](b ) ) , so that spinodal decomposition would be ( at least initially ) the preferred phase separation mechanism . to allow confocal imaging of the lutidine - rich phase , the fluorescent dye nile red had been added to the lutidine at a concentration of around ( we checked that nile red partitions into the lutidine - rich phase and that concentrations as low as gave similar bijels ) .the sample mixture was transferred to a glass cuvette ( starna 21-g-1 with pathlength 1 mm ) and placed inside a metal block , which was itself placed inside a temperature stage ( instec , tsa02i ) .emulsification via liquid - liquid demixing was initiated by heating the sample to a target temperature above the lower critical solution temperature ( lcst ) of .slow heating ( ) was achieved by programming the temperature stage to ramp the temperature at the desired rate , from room temperature ( ) to .heating rates were extracted from the -graphs produced by the stage software and we have used a thermocouple to ascertain that at these slow rates the sample temperature does not lag the stage temperature ; estimate of corresponding error in heating rate . for a heating rate of , we adopted a method from ref . : the temperature stage and metal block were pre - warmed to or and the room - temperature cuvette was inserted .we have confirmed this heating rate by measuring the time it took to reach phase separation at the lcst of from room temperature ; estimate of corresponding error in heating rate . for higher heating rates ,the cuvette was placed on top of a small cardboard box ( to prevent thermal conduction away from the cuvette ) inside a microwave ( delonghi , p80d20el - t5a / h , 800 w , set to `` auto - defrost 100 g '' i.e. 
40% ) .the sample was irradiated for 5 s ( or 6 s ) and then quickly transferred to the temperature stage at .we have checked by visual inspection that the sample remained opaque ( i.e. phase separated ) upon transfer from the microwave to the temperature stage .the corresponding heating rate was calculated as , with an estimated error of . during or after emulsification, samples were imaged using fluorescence confocal microscopy .fluorescence excitation was provided by a 488 nm laser ( for fitc ) and a 555 nm laser ( for nile red ) ; emission filters were used as appropriate .the two liquid domains could be distinguished by detecting the fluorescence of the nile red , while the location of the particles could be determined by detecting the fluorescence of the fitc . to extract the bijel channel width from 2d confocal microscopy images, a pixel - based correlation function algorithm was run on the nile - red channel using matlab .the algorithm constructs a radial distribution function by multiplying pairs of pixel intensities , plotting the values against the distance between the pixels , and then taking an average ; the bijel channel width or characteristic length scale is then taken to be the location of the first minimum in the plotted . for the final bijel - channel width , this process was repeated on at least three separate images of the same bijel sample and an average was taken .the standard deviation of measurements made on several images of the same sample was taken as the error .we begin by comparing ( bicontinuous ) pickering emulsions formed by spinodal decomposition of w - l mixtures , containing either nanoparticles ( nps ) or microparticles ( mps ) , upon heating at various rates ( fig .[ fig : figure_introduction](b ) ) . fig .[ fig : figure_confocal_final ] presents a confocal - microscopy overview of the structures obtained for two different particle radii and three different heating rates . in all panels ,the fluorescently labeled particles ( yellow ) appear at the liquid - liquid interface between the water - rich phase ( black ) and the fluorescently labeled lutidine - rich phase ( magenta ) .samples prepared with mps show bicontinuous structures only for fast heating ( fig . [ fig : figure_confocal_final](a ) ) , whereas slow heating results in discrete droplets ( fig .[ fig : figure_confocal_final](c ) ) .in contrast , nps invariably yield a percolating interface with both signs of curvature ( fig . [fig : figure_confocal_final](d - f ) ) , which is an imperative characteristic of a bijel ; note that slow heating with nps ( fig .[ fig : figure_confocal_final](e , f ) ) seems to yield a relatively higher number of thin necks compared to fast heating ( fig .[ fig : figure_confocal_final](d ) ) . ) , stabilized by ( nearly ) neutrally wetting particles ( yellow ) of radius .particle volume fraction is ( a ) 2.6% , ( b c ) 2.2% and ( d f ) 0.7% .estimated relative error in heating rate % ( sec .[ subsec : sample_preparation ] ) .scale bars : .see appendix [ sec : sample_homogeneity ] for sample homogeneity.[fig : figure_confocal_final ] ] next , we compare the kinetics of bijel formation using mps vs nps , to explain the discrepancy in the structures obtained after slow heating ( fig .[ fig : figure_confocal_final ] ) . fig .[ fig : figure_confocal_formation ] shows selected confocal micrographs from time - series recorded during slow heating in the presence of mps vs nps . 
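before the results are discussed, a minimal python analogue of the correlation-function analysis described in the image-analysis subsection above is sketched here. it uses an fft to form the radially averaged intensity autocorrelation (equivalent, up to periodic boundary effects, to averaging pixel-pair products versus separation) and reads off the first minimum as the channel width; it is a sketch of the idea, not the matlab routine actually used.

```python
import numpy as np

def radial_correlation(img):
    """radially averaged spatial autocorrelation of a 2-d intensity image
    (fft equivalent of averaging pixel-pair products versus separation;
    periodic boundaries are implicitly assumed)."""
    f = img - img.mean()
    corr = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
    corr = np.fft.fftshift(corr) / corr.max()             # normalise, zero lag at centre
    ny, nx = img.shape
    y, x = np.indices(corr.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)     # integer radial bins (pixels)
    counts = np.bincount(r.ravel())
    g = np.bincount(r.ravel(), weights=corr.ravel()) / np.maximum(counts, 1)
    return g                                               # g[r]: correlation vs pixel separation

def first_minimum(g):
    """pixel separation of the first local minimum of g(r), taken as the channel width."""
    for r in range(1, len(g) - 1):
        if g[r] < g[r - 1] and g[r] <= g[r + 1]:
            return r
    return None

# usage: g = radial_correlation(img) on a 2-d array holding the nile-red channel,
# then multiply first_minimum(g) by the pixel size to convert to micrometres
```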
using mps ( fig .[ fig : figure_confocal_formation](a - d ) ) , the interconnected domains present at have pinched off by , resulting eventually in particle - stabilized droplets .by contrast , when using nps ( fig .[ fig : figure_confocal_formation](e - h ) ) , connectivity is maintained until the structure is arrested , resulting in a bijel .though we observe thinning of necks , we can not find a convincing pinch - off event between fig .[ fig : figure_confocal_formation](g ) and fig .[ fig : figure_confocal_formation](h ) .note that we have also observed droplet formation via secondary phase separation ( appendix [ sec : secondary_nucleation ] ) , but this does not seem to be a pivotal effect , i.e. it can both happen and fail to happen irrespective of bijel formation failing or succeeding .this suggests that mps fail to produce bijels via slow heating , because depercolation via pinch - off events occurs before the interfacial particles jam and lock - in the bicontinuous structure .( white ) during slow heating ( ) .particle volume fraction is ( a d ) 2.1% and ( e h ) 1.8% .note ( c , d ) the depercolation via ( encircled ) pinch - off events and ( e h ) the formation of a bijel ( also verified down to % ( appendix [ sec : np_low_volume_fraction ] ) ) .scale bars : .[fig : figure_confocal_formation ] ] to quantify the coarsening observed in fig .[ fig : figure_confocal_formation ] , we used image analysis to extract the channel width ( sec . [ subsec : characterization_image_analysis ] ) .[ fig : figure_coarsening](a ) shows that the coarsening in the presence of mps is similar to coarsening without particles , until when the bicontinuous structure has failed and mp - stabilized droplets have appeared .coarsening in the presence of nps initially follows the behavior of the w - l mixture without particles , but then levels off .as bijel formation at fails with mps ( and without particles ) , fig .[ fig : figure_coarsening](b ) only shows the coarsening speed in the case of nps ; note that goes through a maximum at and is ( more or less ) 0 after .as discussed below , we refer to the time between the maximum in and its levelling off as the ` jamming time ' . vs time during spinodal demixing upon heating at of a critical mixture of water lutidine without ( w l ) and with ( mps ) radius microparticles or ( nps ) radius nanoparticles , the latter resulting in a bijel .( b ) corresponding coarsening speed for the np data .the dashed vertical lines enclose the jamming time . estimated error in is .[fig : figure_coarsening ] ]having presented our experimental results , we first discuss how bijel formation can fail and how particles with off - neutral wetting can promote bijel failure . simulations of spinodal demixing without particles in 3d , in the viscous hydrodynamic ( vh ) regime relevant here , have shown that depercolation proceeds via thinning of liquid channels followed by pinch - off events .neutrally wetting particles can halt the demixing by attaching to and jamming at the liquid interface .however , off - neutral particles induce a spontaneous curvature when attached to liquid interfaces . this is because they are pushed together as coarsening decreases the interfacial area , while the interparticle contacts are not situated at the liquid interface ( where they would be for ) . as bijelshave empirically been shown to feature average mean curvature , any is expected to disrupt bijel formation . note that secondary nucleation , i.e. 
the formation of new droplets during spinodal decomposition , was not observed in the above - mentioned simulations , presumably because the quench was instantaneous .secondary nucleation during bijel formation has previously been observed in experiments and attributed to the finite rate of temperature change .however , it has not been suggested that secondary nucleation is responsible for bijel failure , rather it results in droplets inside bijel channels or even droplet - reinforced channels .intriguingly , our results show that bijel formation fails during slow heating with mps , whereas it succeeds with nps that were designed to have similar wetting. the np contact angle could simply be closer to . however , this does not agree with our observation that nps allow bijel formation over a wider range of drying times , which is expected to correspond to a wider range of contact angles .our fluorescence confocal time series suggest that mp bijels fail due to depercolation via pinch - off events .pinch - off events may also occur for nps : they can even be observed in 3d simulations of successful bijel formation .however , we suggest that nps sufficiently suppress the number of pinch - off events to allow successful bijel formation . in order to explain why nps facilitate bijel formation ,we have found it particularly illuminating to consider the particle - size dependence of the `` driving force '' towards ( appendix [ sec : timescales ] ) i.e. away from for bijels .the bending - energy density of the particle - laden interface is where is the effective bending modulus of the interface , so dimensional analysis suggests that and , which is backed by analytical calculations for spheres on a spherical cap . as here , and so , we approximate eq .( [ eq : bending_force ] ) as thus , nps demand a more strongly curved interface ( ) , but the driving force towards that curvature is smaller ( ) . to assess to what degree a smaller driving force can facilitate bijel formation , we compare the disruption time to the jamming time ; bijel formation can succeed if .for the nps , we can estimate the jamming time from fig .[ fig : figure_coarsening](b ) .we define the jamming time as , where is the time at which the jamming starts causing a decrease in the coarsening speed - the peak in fig .[ fig : figure_coarsening](b ) - and is the time just before drops to zero .this gives at a heating rate of .we can not obtain the mp jamming time directly , since mp bijels fail at .however , we expect the jamming dynamics to be dominated by the instantaneous area fraction of interfacial particles , which is independent of particle radius , as long as the final lengthscale is fixed ( appendix [ sec : timescales ] ) . as in fig .[ fig : figure_confocal_formation ] , i.e. is 1.8% vs 0.7% in fig .[ fig : figure_confocal_final ] and , we expect .conversely , we can estimate the mp disruption time , from the time of occurrence of pinch - off in confocal images ( fig .[ fig : figure_confocal_formation](b d ) ) , whereas we can not estimate because bijel formation succeeds here for nps .however , we can predict the scaling of with particle radius by balancing the driving ( eq . ( [ eq : bending_force_scaling ] ) ) and viscous - drag forces , to give where is a bulk fluid viscosity and is the typical length scale of the disruption , which is independent of particle radius ( appendix [ sec : timescales ] ) . 
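a short numerical aside on how this scaling is used before the two particle sizes are compared: because the unknown prefactor and the radius-independent quantities cancel in a ratio, only the particle radii enter the comparison. the radii below are illustrative placeholders rather than the measured values of this work.

```python
# order-of-magnitude use of tau_dis ~ eta * lambda**2 / (gamma_wl * r):
# the unknown O(1) prefactor and the radius-independent quantities eta, lambda and
# gamma_wl cancel in the ratio, so only the particle radii matter for the comparison.
r_np = 0.2e-6   # m, nominal nanoparticle radius (placeholder value)
r_mp = 1.0e-6   # m, nominal microparticle radius (placeholder value)

tau_ratio = r_mp / r_np         # = tau_dis(np) / tau_dis(mp), from the inverse scaling in r
print(f"tau_dis(np) / tau_dis(mp) ~ {tau_ratio:.0f}")
```

with these placeholder radii the nanoparticles gain roughly a factor of five in disruption time, which is the sense in which smaller particles enjoy more mechanical leeway.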
given that the mps are larger than the nps , and assuming effects of particle polydispersity and roughness are negligible , the inverse scaling of with radius implies .these time - scale estimates help to explain the observed patterns of bijel failure , i.e. they explain why but . to account for any possible dependence of on the ( final ) channel width , we have also verified that bijel formation is successful with nps at for similar , i.e. for vol-% ( appendix [ sec : np_low_volume_fraction ] ) .it is worth noting here that , based on the scaling proposed in eq .( [ eq : disruption_time ] ) , we had expected that bijel formation would succeed with mps at .this is because it succeeds for similar with nps at ( and the nps are about smaller than the mps ) . as bijel formation with mpsis only barely successful at a higher rate of , this suggests that an additional mechanism might be at play here ; currently planned simulations and experiments may be able to address this in the future .having said that , the mechanical - leeway mechanism proposed here does point in the right direction , i.e. it can explain why bijel formation is more robust when using nps rather than mps .as shown above , slow heating increases the importance of bypassing droplet formation .we have suggested here that nps succeed in this because of their larger mechanical leeway , whereas mps may fail under similar conditions ( fig .[ fig : figure_confocal_final ] ) .this also has technological relevance , since fast and homogeneous heating is challenging to achieve , putting severe restrictions on the choice of sample geometry and starting materials .therefore , reducing particle size could greatly facilitate formulation , especially when tuning particle surface chemistry is non - trivial ( as is often the case ) , even though a naive expectation based on an optimal ( static ) wetting geometry would suggest exactly the opposite trend .this mechanical - leeway mechanism not only applies to bijels but to any liquid template for solid particles .more broadly , leeway mechanisms may well aid any formulation where challenges arise due to tight restrictions on a pivotal parameter , but where the restrictions can be relaxed by changing a more accessible parameter ( here : particle size ) .this has important implications for the development of fabrication routes for advanced functional materials based on external templates .moreover , it is potentially relevant to the design of any soft material with a bespoke architecture by adjusting particle interactions , e.g. crystallization of spheres with a size variation above the hard - sphere crystallization threshold ( ) is possible by changing the ionic strength of the suspending medium so that the interparticle - interaction range is large enough .we have shown that the formation of bicontinuous pickering emulsions ( bijels ) via liquid - liquid demixing is more robust with nanoparticles than with microparticles : a wider range of heating rates can be used .in addition , our results suggest that bijel formation using microparticles fails at low rates because the bicontinuous structure decays into discrete droplets via pinch - off events . to explain our observations ,we have argued that interfacial microparticles with off - neutral wetting induce disruptive curvature , while nanoparticles of similar wetting benefit from a mechanical - leeway mechanism . 
in short ,smaller particles give a smaller driving force towards disruptive curvature .is grateful to epsrc for funding his phd studentship .is funded by the royal society . j.h.j.t .acknowledges the royal society of edinburgh / bp trust personal research fellowship for funding and the university of edinburgh for awarding a chancellor s fellowship .the authors are also grateful for financial support from epsrc ep / j007404 . thanks to paul clegg , michiel hermes and alexander morozov for useful discussions .in this appendix , we present several fluorescence confocal micrographs of a mp and a np stabilized bijel , to demonstrate sample homogeneity ..[fig : figure_sample_homogeneity_comb_sb100mu ] ]below are confocal micrographs , corresponding to fig . [fig : figure_confocal_formation ] , to illustrate secondary nucleation . during slow heating ( ) .particle volume fraction is ( a , b ) 2.1% and ( c , d ) 1.8% .note that droplets have appeared , presumably due to secondary nucleation , which has previously been observed during slow quenches .scale bars : .[fig : figure_secondary_nucleation ] ]here , we present a confocal micrograph demonstrating successful bijel formation at a heating rate of and a nanoparticle volume fraction of 0.7% ( see caption of fig . [fig : figure_confocal_formation ] and discussion after eq .( [ eq : disruption_time ] ) ) . , stabilized by nanoparticles of radius ( white ) at a volume fraction of 0.7% .scale bar : .[fig : figure_np_low_phi ] ]in this appendix , we obtain approximate scaling relationships for the timescales of jamming and disruption during bijel formation . following canham and helfrich , we start with the bending - energy density of a membrane in which is the bending modulus , the mean curvature , the spontaneous curvature , the gaussian bending modulus and the gaussian curvature .assuming the topology of the surface does not change substantially during the crucial stages of bijel formation , we omit the term : next , we consider the ( generalized ) driving force towards spontaneous curvature . taking as constant over a small membrane patch , & = & \frac{\partial}{\partial h } \left [ 2 \kappa \left ( h - c_0 \right)^2 \right ] \\[4 mm ] & = & -4 \kappa \left ( c_0 - h \right ) \ .\end{array}\ ] ] eq .( [ eq : driving_force ] ) resembles hooke s law for a spring with spring constant and extension .the equilibrium position of the spring is , which is a minimum as ( which is positive for ) .note that it has been shown empirically that the average mean curvature for bijels . in order to understand how the driving force scales with particle size ,we first consider how the spontaneous curvature and the bending modulus scale with . has units of inverse length ( ) and is expected to scale as , which is backed up by analytical calculations for spherical particles on a spherical cap . in that geometry , the result can also be explained using a scaling argument : to keep the angles fixed , including the particle s contact angle , both and the radius of curvature of the spherical cap have to be reduced by the same factor , showing that note that also depends on the particle s contact angle and that for neutrally wetting particles ( ) .the bending modulus has units of energy ( j ) . as it is expected to depend on the w - l interfacial tension ( units ) and on the presence of the particles , one might guess this claim is backed up by analytical calculations of for a close - packed monolayer of spherical particles on a spherical cap . 
in our experiments , the final bijel - channel width ,so ( eq . ( [ eq : spontaneous_curvature ] ) ) .combined with eqs .( [ eq : driving_force ] ) and ( [ eq : bending_modulus ] ) , this means the driving force scales with : & \approx & -4 \kappa c_0 \\[4 mm ] & \propto & -\gamma_{\mathrm{wl } } r^2 \cdot -\frac{1}{r } \\[4 mm ] & \propto & \gamma_{\mathrm{wl } } r \ .\\[4 mm ] \end{array}\ ] ] in words , for the same binary liquid ( ) and a given off - neutral wetting ( ) , the driving force towards the spontaneous curvature is smaller for nps than it is for mps , which can help explain why fabricating bijels is possible over a larger range of heating rates with nps than with mps . to gain a simple estimate of the disruption time , which is the time it takes for the driving force to cause so much curvature that bijel formation fails , we balance with a viscous drag force : & \propto & \eta \lambda v \ , \\[4 mm ] \end{array}\ ] ] where is viscosity , the typical length scale of the disruption ( independent of particle radius ) and . combining eqs .( [ eq : driving_force_scaling ] ) and ( [ eq : balance_drive_viscosity ] ) , we get which is eq .( [ eq : disruption_time ] ) . alternatively , consider the equation of motion of a damped oscillator ( compare eq .( [ eq : driving_force ] ) ) , in which is a drag coefficient .we assume here that , at least initially , the drag mainly comes from the bulk fluids . in that case , in our experiments , but if then bulk drag may no longer dominate and effects of surface viscosity would have to be considered ( which is outside of the scope of the current paper ) . as the reynolds number here , even when considering motion at the scale of the channel width , we can ignore the inertial term : re - writing eq .( [ eq : overdamped_oscillator_eom_noninertial ] ) results in an expression for the rate of change of curvature \frac{\partial \left ( c_0 - h \right)}{\partial t } & = & -\frac{4 \kappa \left ( c_0 - h \right)}{\mu } \\[4 mm ] \frac{\partial h}{\partial t } & = & \frac{4 \kappa \left ( c_0 - h \right)}{\mu } \end{array}\ ] ] let us denote the time when the interfacial particles start interacting as .as at that time the bijel channel width , we can write for bijel disruption to occur , the curvature has to change by a threshold amount . for the disruption time , we can then write & \propto & \frac{\eta \lambda^2}{\gamma_{\mathrm{wl } } r } \ , \\[4 mm ] \end{array}\ ] ] which is the same as eq .( [ eq : disruption_time_simple ] ) .interestingly , eqs .( [ eq : disruption_time_simple ] ) and ( [ eq : disrupt_time ] ) suggest that lower quench rates could be used when using high - viscosity fluids ( larger ) .it has been reported that the binary liquid nitromethane - ethanediol is more forgiving in bijel fabrication than the w - l system ( the viscosity of ethanediol is 16 times larger than for water ) .consider a bijel surface of area , i.e. the area of the liquid - liquid interface between the two channels is decreasing during coarsening .then the 2d packing fraction of particles on is with the particle - interface cross - sectional area and the number of interfacial particles . here, we assume that both and are constant during the crucial ( jamming ) stages of bijel formation , for there is hardly any area left on for new particles to attach to .( [ eq : interface_packing_fraction ] ) still holds for the bijel in its final i.e. 
jammed state , so n a_{\mathrm{wl}}(\theta ) & = & \phi_{\mathrm{f } } a_{\mathrm{f } } \ , \\[4 mm ] \end{array}\ ] ] which leads to \ ] ] as it is rather than that is typically reported from simulations and experiments , we write in which is a geometrical pre - factor and is the total volume of the bijel channel ( which is constant during the phase separation of a symmetric binary liquid ) .combining eq .( [ eq : interface_packing_final ] ) with ( [ eq : bijel_va ] ) gives & = & \frac{\phi_{\mathrm{f}}}{l_{\mathrm{f } } } l(t ) \ , \\[4 mm ] \end{array}\ ] ] where we have assumed that is constant i.e. the topology of the bijel does not change substantially during the final stages of ( successful ) formation . if is the packing fraction at which interfacial particles start interacting , thereby affecting the phase separation , then & \approx & \left ( \frac{\phi_{\mathrm{f}}}{l_{\mathrm{f } } } \right ) v_{l } \left ( t_{\mathrm{f } } - t_{\mathrm{in } } \right ) \\[4 mm ] \delta t_{\mathrm{j } } & = & t_{\mathrm{f } } - t_{\mathrm{in } } \approx \left ( 1 - \frac{\phi_{\mathrm{in}}}{\phi_{\mathrm{f } } } \right ) \left ( \frac{l_{\mathrm{f}}}{v_{l } } \right ) \ , \\[4 mm ] \end{array}\ ] ] where in the second line we have used , which is valid in the relevant phase - separation regime for bijel formation ( viscous - hydrodynamic ) .note that eq .( [ eq : jamming_time ] ) can explain several observations .first , the larger , the longer the jamming time , which may help explain the empirical upper limit to bijel channel width .secondly , the larger the coarsening speed , the shorter the jamming time . as increases with heating rate , through its dependence on the temperature - dependent interfacial tension , this may help explain why heating faster facilitates successful bijel formation ( even for mps ) .
|
we demonstrate that the formation of bicontinuous emulsions stabilized by interfacial particles ( bijels ) is more robust when nanoparticles rather than microparticles are used . emulsification via spinodal demixing in the presence of nearly neutrally wetting particles is induced by rapid heating . using confocal microscopy , we show that nanospheres allow successful bijel formation at heating rates two orders of magnitude slower than is possible with microspheres . in order to explain our results , we introduce the concept of mechanical leeway i.e. nanoparticles benefit from a smaller driving force towards disruptive curvature . finally , we suggest that leeway mechanisms may benefit any formulation in which challenges arise due to tight restrictions on a pivotal parameter , but where the restrictions can be relaxed by rationally changing the value of a more accessible parameter .
|
in this paper , we consider the following mean - field game ( mfg ) model where , is a given kernel , is a given potential and are given parameters .the unknowns are functions and the number .we study the existence of smooth solutions for and analyze their properties and solution methods .mfgs theory was introduced by j - m .lasry and p - l .lions in and by m. huang , p. caines and r. malham in to study large populations of agents that play dynamic differential games .mathematically , mfgs are given by the following system where is the distribution of the population at time , and is the value function of the individual player , and is the terminal time .furthermore , is the hamiltonian of the system , where or or , and is the diffusion parameter .finally , are given initial - terminal conditions .suppose is the legendre transform of .then , formally , are the optimality conditions for a population of agents where each agent aims to minimize the action where the infimum is taken over all progressively measurable controls , and trajectories are governed by for a standard -dimensional brownian motion .assume that are driven by mutually independent brownian motions .indeed , the first equation in is the hamilton - jacobi equation for the value function .furthermore , optimal velocities of agents are given by thus the second equation in which is the corresponding fokker - planck equation. rigorous derivations of in various contexts can be found in and references therein .actions of the total population affect an individual agent through the dependence of and on .the type of the dependence of and on is called the _ coupling _ , and it can be either local , global or mixed .spatial preferences of agents are encoded in the dependence of and .our problem of interest is the 1-dimensional , stationary , first - order version of with hamiltonian since seminal papers a substantial amount of research has been done in mfgs .classical solutions were studied extensively both in stationary and non - stationary settings in and in , respectively .weak solutions were addressed in for time - dependent problems and in for stationary problems .numerical methods can be found in .nevertheless , most of the previous work concerns problems where hamiltonian does not have singularity at .the problems where hamiltonian has singularity at , such as in , are called _ congestion _ problems .the reason is that the lagrangian corresponding to in is and in the view of agents pay high price for moving at high speeds in dense areas .congestion problems were previously studied in .uniqueness of smooth solutions was established in .existence of smooth solutions for stationary second - order local mfg with quadratic hamiltonian was established in .short - time existence and uniqueness of smooth and weak solutions for time - dependent second - order local mfgs were addressed in and , respectively .analysis of stationary first - order local mfgs in 1-dimensional setting is performed in .problems on graphs are considered in .mfg models with density constraints ( hard congestion ) and local coupling are addressed in ( second - order case ) and ( first - order case ) . to our knowledge ,existence of smooth solutions for stationary first - order mfgs with global coupling has not been studied before .one of the main tools of analysis in mfgs theory is the method of a priori estimates .see and references therein for a detailed account on a priori - estimates methods in mfgs . 
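for orientation, the congestion hamiltonian and its legendre transform referred to above can be written out for the commonly used quadratic congestion model; this particular form is an assumption here, since the displayed formulas did not survive extraction, but it illustrates the mechanism.

```latex
% assumed quadratic congestion hamiltonian (not recovered verbatim from the text)
H(x,p,m) = \frac{|p|^{2}}{2\,m^{\alpha}}, \qquad
L(x,v,m) = \sup_{p}\bigl\{\,p\cdot v - H(x,p,m)\,\bigr\} = \frac{m^{\alpha}\,|v|^{2}}{2}.
```

since the kinetic part of such a lagrangian grows like m^α |v|^2, large velocities are expensive wherever the density m is large, which is precisely the congestion effect described above.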
here , we take a different route .firstly , using the 1-dimensional structure of the problem , we reduce it to an equation with only and as unknowns .indeed , from the second equation in we have that where is some constant that we call _ current_. therefore , can be written in an equivalent form from here on , we do not differentiate between and .moreover , we refer to as the original problem .note that , as a solution parameter , in is replaced by in .we discuss the relation between and in section [ sec : current ] .following , we call the _ current formulation _ of .there are two possibilities : and .we study the simpler case only in section [ sec : current ] and focus on the case afterwards .our main observation is that when is a trigonometric polynomial solutions of have a certain structure in terms of unknown fourier coefficients that satisfy a related equation .more precisely , for denote by .furthermore , for denote by the antiderivative of ; that is , next , let be the set of all points such that finally , for define then , we prove the following theorem .[ thm : free for trig ] suppose that is a trigonometric polynomial ; that is , for some and . then , if satisfies and the system has a unique smooth solution . moreover , the solution of is given by formulas and where is the unique solution of the system where is given by . assumptions , are natural monotonicity assumptions for the coupling , and we discuss them in section [ sec : assum ] . when has the form these assumptions are equivalent to and , respectively ( see section [ sec : trig ] ) .theorem [ thm : free for trig ] reduces the a priori - infinite - dimensional problem to a finite dimensional problem when the kernel is a trigonometric polynomial .also , is concave , so corresponds to finding a root of a monotone mapping which is advantageous from the numerical perspective .this reduction is even more substantial , when the kernel is a symmetrical trigonometric polynomial ; that is , for . in the latter case ,is equivalent to a concave optimization problem .more precisely , we obtain the following corollary .[ crl : sym_case ] suppose that is a symmetrical trigonometric polynomial ; that is , for some and .then , if satisfies and the system has a unique smooth solution . moreover , the solution of is given by formulas and where is the unique solution of the optimization problem additionally , we find closed form solutions in some special cases .[ prp : alpha=1 ] assume that and are first - order trigonometric polynomials ; that is where and .then , define as follows : where is the unique number that satisfies the following equation then the pair , where is the unique solution of .besides the trigonometric - polynomial case we also study for general . in the latter case, we approximate by trigonometric polynomials and recover the solution of as the limit of solutions of approximate problems .more precisely , we prove the following theorem .[ thm : main general ] let and satisfies , . then , there exists a sequence of trigonometric polynomials such that * satisfies and for all , * .furthermore , for denote by the solution of corresponding to ( the existence of this solution is guaranteed by theorem [ thm : free for trig ] ) .then , there exists such that consequently , is the unique smooth solution of corresponding to . 
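to make the finite-dimensional reduction behind these statements concrete, the following sketch checks numerically that a trigonometric-polynomial kernel couples the density only through finitely many fourier modes; the kernel and density used are arbitrary illustrative choices, not data from this paper.

```python
import numpy as np

# illustrative kernel: a trigonometric polynomial of degree N on the 1-d torus [0, 1)
N, M = 3, 256
rng = np.random.default_rng(0)
a, b = rng.normal(size=N), rng.normal(size=N)            # hypothetical fourier data of G
x = np.arange(M) / M
G = 1.0 + sum(a[k-1] * np.cos(2*np.pi*k*x) + b[k-1] * np.sin(2*np.pi*k*x)
              for k in range(1, N + 1))

m = np.exp(np.cos(2*np.pi*x) + 0.3*np.sin(6*np.pi*x))    # some smooth positive density
m /= m.mean()                                            # normalise so it integrates to 1

# periodic convolution (G * m)(x) = int_T G(x - y) m(y) dy, evaluated via the fft
conv = np.fft.ifft(np.fft.fft(G) * np.fft.fft(m)).real / M

# the convolution depends on m only through its first N fourier modes:
m_hat = np.fft.fft(m)
m_hat[np.abs(np.fft.fftfreq(M, d=1.0/M)) > N] = 0.0      # discard modes above degree N
conv_trunc = np.fft.ifft(np.fft.fft(G) * m_hat).real / M
print(np.max(np.abs(conv - conv_trunc)))                 # ~ machine precision
```

this is the mechanism by which the unknown density enters the problem only through a finite vector of fourier coefficients when the kernel is a trigonometric polynomial.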
in combination with preceding results this previous theorem provides a convenient method for numerical calculations of solutions of .we also present a possible way to apply our methods to more general one - dimensional mfg models .we consider the following generalization of in , is a given kernel , , is a given hamiltonian , and is a given coupling , where can be a functional space or .we discuss , formally , how our techniques apply to models like .the paper is organized as follows . in section [ sec : assum ] we present the main assumptions and notation . in section[ sec : current ] we study for the case .next , in section [ sec : trig ] we analyze when is a trigonometric polynomial and prove theorem [ thm : free for trig ] , corollary [ crl : sym_case ] and theorem [ prp : alpha=1 ] . in section [ sec : stability ]we analyze for a general and prove theorem [ thm : main general ] .in section [ sec : num ] we present some numerical experiments .finally , in section [ sec : extensions ] we discuss possible extensions of our results and a future work .throughout the paper we assume that .moreover , we always assume that for all and denote by (x)=\int\limits_{{{\mathbb{t}}}}g(x - y)m(y)dy ] and plays an essential role in our analysis . in general , monotonicity of the couplingis fundamental in the regularity theory for mfgs : system degenerates in several directions if the coupling is not monotone . in the view of monotonicitymeans that agents prefer sparsely populated areas .see and for a systematic study of non - monotone mfgs .assumption is a technical assumption .it is not restrictive since one can always modify the kernel by adding a positive constant .furthermore , we assume that this , also , is a natural assumption for mfgs from the regularity theory perspective . the , now standard , uniqueness proof for mfg systems in is valid only for in this range .this is a strong indication of degeneracy for outside of this range ( which is observed and discussed in detail in ) .in fact , our methods also reflect these limitations in a natural way .as we have pointed out in the introduction , can be reduced to by eliminating from the second equation .the analysis of is completely different for the case and for the case .in fact , the case is much simpler to analyze . nevertheless ,it is more degenerate . in this section ,we discuss the case .firstly , we observe that can occur only when . recall that in this paper we are concerned only with smooth solutions .therefore , if is a solution of we obtain and hence , if and only if . furthermore , if reduces to at this point , we drop assumptions and because they are irrelevant .suppose that have following fourier expansions then , is equivalent to therefore , we get that and hence , formally , if and are given , we obtain that is given by and where .nevertheless , there are several issues in the previous analysis .firstly , may fail to have solutions or may have infinite number of solutions . if for some and then and do not have solutions . on the other handif then can be chosen arbitrarily , so and may have infinite number of solutions .thus , if for some degenerates in different ways when . furthermore , if for all then , at least formally , is given by . 
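the approximation step behind this numerical strategy, replacing a general kernel by a truncated fourier series, can be illustrated as follows; the kernel below is an arbitrary smooth periodic example and is not taken from the paper.

```python
import numpy as np

# truncating the fourier series of a generic smooth periodic kernel gives the
# trigonometric-polynomial approximants used in the convergence argument above
M = 512
x = np.arange(M) / M
G = np.exp(np.cos(2 * np.pi * x))            # smooth, 1-periodic, positive (illustrative)

G_hat = np.fft.rfft(G) / M
for n in (1, 2, 4, 8, 16):
    h = G_hat.copy()
    h[n + 1:] = 0.0                          # keep the mean and the first n harmonics
    G_n = np.fft.irfft(h * M, n=M)
    print(n, np.max(np.abs(G - G_n)))        # uniform error decays rapidly with n
```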
herewe face two potential problems .first , one has to make sense of the formula .in other words , the series in may not be summable in any appropriate sense .moreover , summability of is a delicate issue and strongly depends on the relation between and .additionally , even if the series converge to a smooth function , we still have the necessary condition and that might fail depending on and . for instance , if is such that for all , and is such that for all we get that therefore , if and only if .hence , if the latter is violated does not have smooth solutions . thus , existence of smooth , positive solutions for depends on peculiar properties of and .this is quite different in the case , where obtains smooth , positive solutions under general assumptions on and .from here on , we assume that . in this section , our main goal is to prove theorem [ thm : free for trig ] , corollary [ crl : sym_case ] and theorem [ prp : alpha=1 ] .we break the proof of theorem [ thm : free for trig ] into three steps .firstly , we show that is equivalent to - proposition [ thm : equiv ] . secondly , we prove that has at most one solution - proposition [ prp : system reduced uniqueness ] .and thirdly , we show that has at least one solution - proposition [ prp : existence reduced system ] .we use a short - hand notation for a vector , where and . for every denote by . herewe perform the analysis in terms of fourier coefficients of .hence , we formulate assumptions , in terms of these coefficients . for a given by the assumptionis equivalent to furthermore , the assumption is equivalent to let and a straightforward computation yields the rest of the proof is evident . from here on , we assume that and hold .[ thm : equiv ] let be a solution of .then , is given by formulas and for some that is a solution of .conversely , if is a solution of the system , then defined by and is a solution for . in our analysiswe assume that for all .this assumption is not restrictive and the results are valid even if for some . indeed ,if , then in there will be no terms with and and in the subsequent analysis we just have to omit the trigonometric monomials and .first , we prove the direct implication .suppose is a solution of .a straightforward calculation yields where therefore , from we obtain which is equivalent to .the coefficients in the previous equation are given by the formulas since we obtain that .furthermore , from and we obtain that and .next , we plug the expression for in , and from we obtain the following system furthermore , note that for .therefore , can be written as where . but this previous system is equivalent to .the proof of the converse implication is the repetition of previous arguments in the reversed order .next , we study some properties of and .[ lma : upperphi ] the following statements hold .* is convex and open . ** for all * is strictly concave . moreover, for all and we have that with equality if and only if .* is strictly monotone ; that is , for all with equality if and only if . _i. _ this statement is evident ._ this statement is evident ._ we obtain by a straightforward calculation .equation follows from by an algebraic manipulation . moreover , the equality in holds if and only if which implies . +_ v. _ for ]. furthermore , denote by .we have that ) ] , unless .hence , with equality if and only if .we complete the proof by noting that [ prp : system reduced uniqueness ] if are solutions of , then .let for .then , we have that for . 
hence , on the other hand , from we have that , so and . [lma : omega ] for every there exists a unique such that furthermore , . fix a point .denote by where .firstly , we show that denote by then we have that . hence , =0 , and where .therefore , we have that for we have that so finally , the mapping is decreasing and so there exists unique such that holds .regularity of follows from the implicit function theorem .[ prp : existence reduced system ] let be the following function : where and is the function from lemma [ lma : omega ] .then , is bounded by below and coercive .consequently , the minimization problem admits at least one solution .moreover , if is a critical point for , then is a solution of .therefore , admits at least one solution .firstly , we show that from is coercive and bounded by below . evidently , .next , from we have that for all for .furthermore , we use the elementary inequality and obtain \\ \nonumber & + \frac{1}{2}\sum\limits_{k=1}^{n}\left[\frac{1}{2}\left(\frac{p_k}{p_k^2+q_k^2}b_{k}-\frac{q_k}{p_k^2+q_k^2}a_{k}\right)^2 - 1\right]\\ & = \frac{1}{4}\sum\limits_{k=1}^{n}\frac{a_k^2+b_k^2}{p_k^2+q_k^2}-n,\end{aligned}\ ] ] for all . therefore , is coercive .now , we prove that for every critical point of the point is a solution of . for denote by then , for .next , by differentiating we obtain for . now , suppose is a minimizer of .then , we have that where , and furthermore , we have that hence , from we obtain that where . on the other hand ,from we have that therefore , there is an equality in the previous equation and from we obtain that or that , equivalently , for .the latter precisely means that is a solution of .now we are in the position to prove theorem [ thm : free for trig ] .we have that is equivalent to by proposition [ thm : equiv ] .furthermore , admits a solution by theorem [ prp : existence reduced system ] .moreover , is unique by proposition [ prp : system reduced uniqueness ] .hence , admits a unique solution given by and .next , we prove corollary [ crl : sym_case ] . by theorem[ thm : free for trig ] we have that obtains unique solution , , given by and , where is the unique solution of . since has the form we have that for .therefore , can be written as furthermore , by lemma [ lma : upperphi ] is strictly concave on ( see for the definition of ) , so the function is also strictly concave on .hence , is the unique maximum of . finally , we prove theorem [ prp : alpha=1 ] .firstly , note that if is a solution of with , then must necessarily have the form .consequently , leads to a direct calculation yields the following identities using in and taking into the account that , we obtain which can be equivalently written as we eliminate and in the second and third equations and find from the fourth equation .it is algebraically more appealing to put .then , a straightforward calculation yields .since we have that . 
moreover , from the fourth equation in the previous system we have that , so or .note that the left - hand - side of is increasing function in for , and it is equal to 0 at and to at .therefore , for arbitrary choices of there is a unique such that holds .this is coherent with the fact that obtains a unique smooth solution .moreover , is a cubic equation .hence , formulas in are explicit .in this section we prove theorem [ thm : main general ] .we divide the proof into two steps .first , we prove that solutions of are stable under perturbation of the kernel .second , we show that arbitrary kernel can be approximated by suitable trigonometric polynomials .the uniqueness of the solution for follows from the uniqueness of the solution of ( see ) . + * part 1 . stability . *suppose that are such that moreover , assume that for each has a solution , , corresponding to the kernel .we aim to prove that there exists such that holds and is the solution of corresponding to the kernel . note that in this part of the proof we do not assume that are trigonometric polynomials and that they satisfy , .we need these assumptions in the second part of the proof to guarantee the existence of solutions .we are going to show that families are uniformly bounded and equicontinuous .denote by where we have that next , denote by for some .then , we have that , and therefore , furthermore , for we have that therefore , or for . furthermore , denote by then , we have that furthermore , for every we have that firstly , if we plug in in and use , we get that for all .secondly , yields that the family is uniformly lipschitz which in turn yields ( in combination with ) that the family is also uniformly lipschitz .since families are uniformly bounded , we get that is a bounded sequence .then , we can assume that there exists such that through a subsequence .moreover , we obtain through the same subsequence . from the previous equations , we obtain that solves for the kernel .next , must have a unique solution because it is equivalent to that can have at most one solution ( see ) .hence , the limit , , is the same for all subsequences .therefore , is valid through the whole sequence . +* part 2 . approximation .* suppose satisfies and are satisfied .we formally expand in fourier series denote by and the truncated fourier series .furthermore , let be the corresponding cesro mean ; that is , then by fejr s theorem ( see theorem 1.10 in ) we have that next , satisfies , so and for . therefore, we have that so also satisfy , for all .now , we can complete the proof of theorem [ thm : main general ] .we approximate using part 2 and conclude using part 1 .here , we numerically solve for different types of kernels .we present three cases .first , we consider that is a non - symmetric trigonometric polynomial .second , we consider that is a symmetric trigonometric polynomial . and third , we consider that is periodic but that is not a trigonometric polynomial . during the whole discussion in this sectionwe assume that this choice of parameters in is random and robustness of our calculations does not depend on a particular choice of parameters . 
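the approximation step in part 2 , namely truncating the fourier series of the kernel and taking its cesàro ( fejér ) mean , is straightforward to reproduce numerically . the fragment below is an illustrative python sketch only ; the period is normalised to 1 here , and the paper s own computations in the next section are carried out in wolfram mathematica :

```python
import numpy as np

def fourier_coefficients(g, N, M=4096):
    # quadrature for the first N cosine / sine coefficients of a 1-periodic kernel g
    x = np.arange(M) / M
    gx = g(x)
    a0 = gx.mean()
    a = np.array([2.0 * (gx * np.cos(2 * np.pi * k * x)).mean() for k in range(1, N + 1)])
    b = np.array([2.0 * (gx * np.sin(2 * np.pi * k * x)).mean() for k in range(1, N + 1)])
    return a0, a, b

def fejer_mean(g, N, x):
    # cesaro (fejer) mean of the fourier partial sums: a trigonometric polynomial of
    # degree N that converges uniformly to g when g is continuous (fejer's theorem)
    a0, a, b = fourier_coefficients(g, N)
    out = np.full_like(np.asarray(x, dtype=float), a0)
    for k in range(1, N + 1):
        w = 1.0 - k / (N + 1.0)                     # cesaro weights
        out += w * (a[k - 1] * np.cos(2 * np.pi * k * x)
                    + b[k - 1] * np.sin(2 * np.pi * k * x))
    return out

# example: a smooth periodic kernel that is not itself a trigonometric polynomial
g = lambda x: np.exp(np.cos(2.0 * np.pi * x))
xs = np.linspace(0.0, 1.0, 500)
for N in (5, 20, 80):
    print(N, np.max(np.abs(fejer_mean(g, N, xs) - g(xs))))   # the error shrinks as N grows
```

because the cesàro construction only rescales each fourier coefficient by a nonnegative factor , the approximating trigonometric polynomials inherit whatever sign conditions the coefficients of the original kernel satisfy , which is how part 2 guarantees that the approximations satisfy the same assumptions as the kernel itself .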
by theorem [ thm : free for trig] we have that for a given non - symmetric trigonometric polynomial the solution of has the form , where the vector is the unique solution of .furthermore , we define where .then , solutions of coincide with minimums of .accordingly , we find the solution of by numerically solving the optimization problem we devise our algorithm in wolfram mathematica^^ language and use the built - in optimization function ` findminimum ` to solve . as an example , we consider the kernel we denote by the corresponding numerical solution of .we first find by solving and using and .next , we use to find . finally , to estimate the accuracy of numerical solutions we introduce the error function we plot and in fig .[ fig : g1andv ] , and in fig .[ fig : u1andm1 ] and in fig .[ fig : er1 ] . and the potential .,width=377 ] and .,width=377 ] .,width=377 ] by corollary [ crl : sym_case ] we have that for a given symmetric trigonometric polynomial the solution of has the form , where the vector is the unique solution of . as before ,we use ` findminimum ` to solve . as an example, we consider the kernel analogous to the previous case we denote by the numerical solution of corresponding to . furthermore , we denote by the error function corresponding to .we plot and in fig .[ fig : g2andv ] , and in fig .[ fig : u2andm2 ] and in fig .[ fig : er2 ] . and the potential .,width=377 ] and .,width=377 ] .,width=377 ] if is not a trigonometric polynomial we first approximate it by its truncated fourier series and then apply one of the previous solution methods . as an example we take as before , we denote by and the numerical solution of and the error function corresponding to , respectively .we plot and in fig .[ fig : g3andv ] , and in fig .[ fig : u3andm3 ] and in fig .[ fig : er3 ] . and the potential .,width=377 ] and .,width=377 ] .,width=377 ]here , we discuss how our methods can be applied to other one - dimensional mfg system such as .denote by , be the legendre transform of ; that is , then , if satisfies suitable conditions , we have that for all and there is equality in if and only if as before , second equation in yields for some constant . therefore , using we find which we plug - in to the first equation in and obtain the following system next , one can attempt to study first when is a trigonometric polynomial and then approximate the general case . as before ,when is a trigonometric polynomial the expression is always a trigonometric polynomial .therefore , we have that for some .suppose is such that the left - hand - side expression of is invertible in with inverse .then , yields the following ansatz thus , one can search for the solution of in the form with undetermined coefficients . therefore , by plugging in we obtain a finite - dimensional fixed point problem for . if this fixed point problem has good structural properties ( such as ) for a concrete model of the form , one may analyze this model by methods developed here . p. cardaliaguet .weak solutions for first order mean field games with local coupling . in _ analysis and geometry in control theory and its applications _, volume 11 of _ springer indam ser ._ , pages 111158 .springer , cham , 2015 .m. huang , p. e. caines , and r. p. malham .large - population cost - coupled lqg problems with nonuniform agents : individual - mass behavior and decentralized -nash equilibria ., 52(9):15601571 , 2007 .
|
here , we study a one - dimensional , non - local mean - field game model with congestion . when the kernel in the non - local coupling is a trigonometric polynomial we reduce the problem to a finite dimensional system . furthermore , we treat the general case by approximating the kernel with trigonometric polynomials . our technique is based on fourier expansion methods .
|
in this note we propose a vectorized implementation of the non - parametric bootstrap for statistics based on sample moments .our approach is based on the multinomial sampling formulation of the non - parametric bootstrap .this formulation is described in the next section , but , in essence , follows from the fact that the bootstrap distribution of any sample moment statistic ( generated by sampling data points with replacement from a population of values , and evaluating the statistic on the re - sampled version of the observed data ) can also be generated by weighting the observed data according to multinomial category counts sampled from a multinomial distribution defined over categories ( i.e. , data points ) and assigning probability to each one of them .the practical advantage of this multinomial sampling formulation is that , once we generate a matrix of bootstrap weights , we can compute the entire vector of bootstrap replications using a few matrix multiplication operations .the usual re - sampling formulation , on the other hand , is not amenable to such vectorization of computations , since for each bootstrap replication one needs to generate a re - sampled version of the data .vectorization is particularly important for matrix - oriented programming languages such as r and matlab , where matrix / vector computations tend to be faster than scalar operations implemented in a loop .this note is organized as follows .section 2 : ( i ) presents notation and background on the standard data re - sampling approach for the non - parametric bootstrap ; ( ii ) describes the multinomial sampling formulation of the bootstrap ; and ( iii ) explains how to vectorize the bootstrap calculations .section 3 reports a comparison of computation times ( in r ) required by the vectorized and standard approaches , when bootstrapping pearson s sample correlation coefficient in real and simulated data sets .finally , section 4 presents some final remarks , and point out that the bayesian bootstrap computations can also be easily vectorized .let represent a random variable distributed according to an unknown probability distribution , and let be an observed random sample from .the goal of the bootstrap is to estimate a parameter of interest , , based on a statistic .let represent the empirical distribution of the observed data , , assigning probability to each observed value , .a bootstrap sample , , corresponds to a random sample of size draw from .operationally , sampling from is equivalent to sampling data points with replacement from the population of objects .the star notation indicates that is not the actual data set but rather a re - sampled version of .the sampling distribution of estimator is then estimated from bootstrap replications of .now , consider the estimation of the first moment of the unknown probability distribution , on the basis of the observed data .if no further information ( other than the observed sample ) is available about , then it follows that the best estimator of is the plug - in estimate ( see page 36 of ) , and the bootstrap distribution of is generated from bootstrap replications of .algorithm [ alg : usualboot ] summarizes the approach ._ for : _ * draw a bootstrap sample from the empirical distribution of the observed data , that is , sample data points with replacement from the population of objects .* compute the bootstrap replication .alternatively , let represent the number of times that data point appears in the bootstrap sample , and .then , the category counts 
, , of the bootstrap sample are distributed according to the multinomial distribution , where the vector has length .now , since it follows that the bootstrap replication of the first sample moment of the observed data can we re - expressed , in terms of the bootstrap weights as , so that we can generate bootstrap replicates using algorithm [ alg : multiboot ] ._ for : _ * draw a bootstrap count vector . *compute the bootstrap weights . *compute the bootstrap replication .the main advantage of this multinomial sampling formulation of the non - parametric bootstrap is that it allows the vectorization of the computation .explicitly , algorithm [ alg : multiboot ] can be vectorized as follows : 1 .draw bootstrap count vectors , , from ( [ eq : boot.mult ] ) , using a single call of a multinomial random vector generator ( e.g. , ` rmultinom ` in r ) .2 . divide the sampled bootstrap count vectors by in order to obtain a bootstrap weights matrix , .3 . generate the entire vector of bootstrap replications , in a single computation .it is clear from equation ( [ eq : main.connection ] ) that this multinomial sampling formulation is available for statistics based on any arbitrary sample moment ( that is , statistics defined as functions of arbitrary sample moments ) .for instance , the sample correlation between data vectors and , is a function of the sample moments , and the bootstrap replication , \ , \big[n \sum_{i } y_i^{\ast 2 } - ( \sum_{i } y_i^\ast)^2\big]}}~,\ ] ] can be re - expressed in terms of bootstrap weights as , \ , \big[\sum_i w_i^\ast \ , y_i^2 - ( \sum_{i } w_i^\ast \ , y_i)^2\big]}}~,\ ] ] and in vectorized form as , \bullet \big[({\boldsymbol{y}}^2)^t { \boldsymbol{w}}^\ast - ( { \boldsymbol{y}}^t { \boldsymbol{w}}^\ast)^2 \big]}}~ , \label{eq : vec.cor}\ ] ] where the operator represents the hadamard product of two vectors ( that is , the element - wise product of the vectors entries ) , and the square and square root operations in the denominator of ( [ eq : vec.cor ] ) are also performed entry - wise .in this section , we illustrate the gain in computational efficiency achieved by the vectorized multinomial sampling bootstrap , relative to two versions the standard data re - sampling approach : ( i ) a strait forward version based on a ` for ` loop ; and ( ii ) a more sophisticated version , implemented in the ` bootstrap ` r package , where the ` for ` loop is replaced by a call to the ` apply ` r function . in the following ,we refer to these two versions as loop " and apply " , respectively .we bootstrapped pearson s sample correlation coefficient using the american law school data ( page 21 of ) provided in the ` law82 ` data object of the ` bootstrap ` r package .the data is composed of two measurements ( class mean score on a national law test , lsat , and class mean undergraduate grade point average , gpa ) on the entering class of 1973 for american law schools . 
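the three vectorized steps above amount to only a few lines of code . the comparisons reported in the next section are carried out in r ( using ` rmultinom ` and the ` bootstrap ` package ) ; purely for illustration , the fragment below sketches the same computation in python / numpy , bootstrapping pearson s sample correlation both by plain re - sampling ( algorithm [ alg : usualboot ] ) and by the multinomial - weights formulation ( algorithm [ alg : multiboot ] ) :

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_boot_resample(x, y, B):
    # algorithm 1: one re-sampled data set per bootstrap replication
    n = len(x)
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)              # sample n indices with replacement
        out[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return out

def corr_boot_vectorized(x, y, B):
    # algorithm 2, vectorized: W is n x B, each column holds multinomial counts / n,
    # and the whole vector of replications comes from a few matrix products
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B).T / n
    sx, sy = x @ W, y @ W                        # weighted first moments
    sxx, syy, sxy = (x * x) @ W, (y * y) @ W, (x * y) @ W
    num = sxy - sx * sy
    den = np.sqrt((sxx - sx ** 2) * (syy - sy ** 2))
    return num / den

x = rng.normal(size=15)
y = 0.7 * x + rng.normal(scale=0.5, size=15)
print(corr_boot_resample(x, y, 10_000).std())    # the two bootstrap standard errors
print(corr_boot_vectorized(x, y, 10_000).std())  # agree up to monte carlo noise
```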
figure [ fig : example1 ] presents the results .the top left panel of figure [ fig : example1 ] shows the time ( in seconds ) required to generate bootstrap replications of , for varying from 1,000 to 1,000,000 .the red , brown , and blue lines show , respectively , the computation time required by the apply " , loop " , and the vectorized multinomial sampling approaches .the center and bottom left panels show the computation time ratio of the data re - sampling approach versus the vectorized approach as a function of .the plots clearly show that the vectorized bootstrap was considerably faster than the data re - sampling implementations for all tested .the right panels show the distributions for the three bootstrap approaches based on .figure [ fig : example2 ] presents analogous comparisons , but now focusing on a subset of samples from the american law school data ( page 19 of ) , provided in the ` law ` data object of the ` bootstrap ` r package .this time , the vectorized implementation was remarkably faster than the data re - sampling versions .the center and bottom left panels of figure [ fig : example2 ] show that the vectorized implementation was roughly 50 times faster than the re - sampling versions , whereas in the previous example it was about 8 times faster ( center and bottom left panels of figure [ fig : example1 ] ) .the performance difference observed in these two examples suggests that the gain in speed achieved by the vectorized implementation decreases as a function of the sample size . in order to confirm this observation , we used simulated data to compare the bootstrap implementations across a grid of 10 distinct sample sizes ( varying from 15 to 915 ) for equal to 10,000 , 100,000 , and 1,000,000 .figure [ fig : example3 ] reports the results .the left panels of figure [ fig : example3 ] show computation times as a function of increasing sample sizes . 
in all cases tested the vectorized implementation ( blue line ) outperformed the apply " version ( red line ) , whereas the loop " version ( brown line ) outperformed the vectorized implementation for large sample sizes ( but was considerably slower for small and moderate sample sizes ) .the central and right panels show the computation time ratios ( in log scale ) comparing , respectively , the loop " vs vectorized and the apply " vs vectorized implementations .the horizontal line is set at zero and represents the threshold below which the data re - sampling approach outperforms the vectorized implementation .note that log time ratios equal to 4 , 3 , 2 , 1 , and 0.5 , correspond to speed gains of 54.60 , 20.09 , 7.39 , 2.72 , and 1.65 , respectively .all the timings in this section were measured on an intel core i7 - 3610qm ( 2.3 ghz ) , 24 gb ram , windows 7 enterprize ( 64-bit ) platform .in this note we showed how the multinomial sampling formulation of the non - parametric bootstrap can be easily implemented in vectorized form .we illustrate the gain in computational speed ( in the r programming language ) using real and simulated data sets .our examples provide several interesting insights .first , the re - sampling implementation based on the ` for ` loop was generally faster than the implementation provided by the ` bootstrap ` r package , which employs the ` apply ` in place of the ` for ` loop ( compare the red and brown curves on the top left panel of figures [ fig : example1 ] and [ fig : example2 ] , and on the left panels of figure [ fig : example3 ] ) .this result illustrates the fact that ` apply ` is not always faster than a ` for ` loop ( see for further discussion and examples ) .second , the vectorized implementation outperformed the data re - sampling implementation provided in the ` bootstrap ` r package in all cases tested ( left panels in figures [ fig : example1 ] and [ fig : example2 ] , and right panels in figure [ fig : example3 ] ) .third , the gain in speed achieved by the vectorized implementation decreases as a function of the sample size ( figure [ fig : example3 ] ) .this decrease is likely due to the increase in memory requirements for generating and performing operations in the larger bootstrap weight matrices associated with the larger sample sizes .we point out , however , that even though optimized blas ( basic linear algebra subprograms ) libraries could potentially increase the execution speed of our vectorized operations in large matrices / vectors , our examples still show remarkable / considerable gains for small / moderate sample sizes even without using any optimized blas library ( as illustrated by figures [ fig : example2 ] and [ fig : example1 ] ) . 
for the sake of clarity , the exposition in section 2 focused on statistics based on sample moments of the observed data , .we point out , however , that the multinomial sampling formulation of the bootstrap is available for any statistic satisfying the more general relation , where the sample moment represents the particular case , .clearly , the left hand side of equation ( [ eq : general ] ) still represents a bootstrap replication of the first sample moment of the transformed variable .the multinomial sampling formulation of the non - parametric bootstrap is not new .it is actually a key piece in the demonstration of the connection between the non - parametric bootstrap and bayesian inference , described in and in section 8.4 of , where the non - parametric bootstrap is shown to closely approximate the posterior distribution of the quantity of interest generated by the bayesian bootstrap .this close connection , also implies that the bayesian bootstrap can be easily implemented in vectorized form . as a matter of fact , instead of generating the bootstrap weights from bootstrap count vectors sampled from a multinomial distribution , the bayesian bootstrap samples the weights directly from a distribution .we point out , however , that , to the best of our knowlege , the multinomial sampling formulation has not been explored before for vectorizing bootstrap computations .s original , from statlib and by rob tibshirani .r port by friedrich leisch ( 2014 ) bootstrap : functions for the book an introduction to the bootstrap " .r package version 2014.4 .
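the bayesian bootstrap mentioned in the last paragraph fits the same vectorized template : the weight vectors are drawn directly from a flat dirichlet distribution ( the distribution name appears to have been lost from the sentence above ; a flat dirichlet is rubin s original choice ) rather than being obtained as rescaled multinomial counts . a minimal python / numpy sketch for the sample mean , illustrative only :

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_boot_bayes(x, B):
    # bayesian bootstrap: each column of W is a weight vector drawn from Dirichlet(1,...,1),
    # so the whole vector of replications of the weighted mean is one matrix product
    n = len(x)
    W = rng.dirichlet(np.ones(n), size=B).T      # n x B, columns sum to 1
    return x @ W

x = rng.normal(loc=2.0, scale=1.0, size=30)
rep = mean_boot_bayes(x, 50_000)
print(rep.mean(), rep.std())   # close to the ordinary non-parametric bootstrap mean and s.e.
```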
|
in this note we propose a vectorized implementation of the non - parametric bootstrap for statistics based on sample moments . basically , we adopt the multinomial sampling formulation of the non - parametric bootstrap , and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts , instead of evaluating the statistic on a re - sampled version of the observed data . using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications . vectorization is particularly important for matrix - oriented programming languages such as r , where matrix / vector calculations tend to be faster than scalar operations implemented in a loop . we illustrate the gain in computational speed achieved by the vectorized implementation in real and simulated data sets , when bootstrapping pearson s sample correlation coefficient .
|
supply chains are networks of firms ( supply chain members ) which act in order to deliver a product to the end consumer .supply chain members are concerned with optimizing their own objectives and this results in a poor performance of the supply chain . in other words local optimum policies of members do not result in a global optimum of the chain and they yield the tendency of replenishment orders to increase in variability as one moves up stream in a supply chain .this effect was first recognized by forrester in the middle of the twentieth century and the term of bullwhip effect was coined by procter & gamble management .the bullwhip effect is considered harmful because of its consequences which are ( see e.g. buchmeister et al . ) : excessive inventory investment , poor customer service levels , lost revenue , reduced productivity , more difficult decision - making , sub - optimal transportation , sub - optimal production etc . this makes it critical to find the root causes of the bullwhip effect and to quantify the increase in order variability at each stage of the supply chain . in the current state of researchseveral main causes of the bullwhip effect are considered ( see e.g. lee et al . and ) : demand forecasting , non - zero lead time , supply shortage , order batching , price fluctuation and lead time forecasting ( see michna and nielsen ) . to decrease the variance amplification in a supply chain ( i.e. to reduce the bullwhip effect ) we need to identify all factors causing the bullwhip effect and to quantify their impact on the effect .many researchers have assumed a deterministic lead time and studied the influence of different methods of demand forecasting on the bullwhip effect such as simple moving average , exponential smoothing , and minimum - mean - squared - error forecasts when demands are independent identically distributed or constitute integrated moving - average , autoregressive processes or autoregressive - moving averages ( see graves , lee et al . , chen et al . and , alwan et al . , zhang and duc et al .moreover they quantify the impact of a deterministic lead time on the bullwhip effect and it follows from their work that the lead time is one of the major factors influencing the size of the bullwhip effect in a given supply chain .stochastic lead times were intensively investigated in inventory systems see bagchi et al . , hariharan and zipkin , mohebbi and posner , sarker and zangwill , song and , song and zipkin and and zipkin .most of these works consider the so - called exogenous lead times that is they do not depend on the system e.g. the lead times are independent of the orders or the capacity utilization of a supplier .moreover these articles studied how the variable lead times affect the control parameter , the inventory level or the costs .one can investigate the so - called endogeneous lead times that depends on the system .this is analyzed in so and zheng showing the impact of endogeneous lead times on the amplification of the order variance and has been done by simulation .recently the impact of stochastic lead times on the bullwhip effect is intensively investigated .the main aim of this article is to review papers devoted to stochastic lead times in supply chains in the context of the bullwhip effect especially those which quantify the effect .moreover we modify a model where stochastic lead times and lead time demand forecasting are considered . 
in this modelwe find an analytical expression for the bullwhip effect measure which indicates that the distribution of a lead time ( the probability of the longest lead time and its expectation and variance ) and the delay parameter of the lead time demand prediction are the main factors of the bullwhip phenomenon . in tab .[ tabpapers ] we collect all the main articles in which models on the bullwhip effect with stochastic lead times are provided ( except the famous works of chen et al . and where deterministic lead time is considered and some of them analyze the effect using simulation ) ..articles on the impact of lead time on the bullwhip effect [ cols="^,^,^,^",options="header " , ]the main conclusion from our research is that stochastic lead times boost the bullwhip effect .more precisely we deduce from the presented models that the effect is amplified by the increase of the expected value , variance and the probability of the largest lead time .moreover the delay parameter of the prediction of demands , the delay parameter of the prediction of lead times and the delay parameter of the prediction of lead time demands depending on the model are crucial parameters which can dampen the bullwhip effect .we must also notice that in all the presented models the bullwhip effect measure contains the term ( see th .[ thltdp1 ] , [ duc1 ] , [ duc2 ] and [ bmmt ] ) and except the model of duc et al . this term can be killed by the prediction ( going with or to ) .the future research on quantifying the bullwhip effect has to be aimed at stochastic lead times with a different structure than i.i.d . and dependence between lead times and demands .one can investigate for example ar(1 ) structure of lead times and the influence of the dependence between lead times and demands on the bullwhip effect .another challenge in bullwhip modeling is the problem of lead time forecasting and its impact on the bullwhip effect .a member of a supply chain placing an order must forecast lead time to determine an appropriate inventory level in order to fulfill its customer orders in a timely manner which implies that lead times influence orders . in turn orders can impact lead times. this feedback loop can be the most important factor causing the bullwhip effect which has to be quantified and in our opinion this seems to be the most important challenge and the most difficult problem in bullwhip modeling .another topic is the combination of methods for lead time forecasting and demand forecasting ( to predict lead time demand ) .thus the spectrum of models which have to be investigated in order to quantify and find all causes of the bullwhip effect is very wide .however , these problems do not seem to be easy to solve by providing analytical models alone .chaharsooghi , s.k . ,heydari , j. ( 2010 ) .lt variance or lt mean reduction in supply chain management : which one has a higher impact on sc performance ?_ international journal of production economics _ * 124 * , pp . 475481. chatfield , d.c . ,kim , j.g . , harrison , t.p . , hayya , j.c .the bullwhip effect - impact of stochastic lead time , information quality , and information sharing : a simulation study . _production and operations management _ * 13*(4 ) , pp .340353 .duc , t.t.h . ,luong , h.t . , kim , y.d .( 2008 ) . a measure of the bullwhip effect in supply chains with stochastic lead time ._ the international journal of advanced manufacturing technology _ * 38*(11 - 12 ) , pp. 12011212 .geary , s. , disney , s.m . 
, towill , d.r . on bullwhip in supply chains - historical review , present practice and expected future impact . _ international journal of production economics _ * 101 * , pp . 2 - 18 . li , c. , liu , s. ( 2013 ) . a robust optimization approach to reduce the bullwhip effect of supply chains with vendor order placement lead time delays in an uncertain environment . _ applied mathematical modelling _ * 37 * , pp . 707 - 718 . kim , j.g . , chatfield , d. , harrison , t.p . , hayya , j.c . ( 2006 ) . quantifying the bullwhip effect in a supply chain with stochastic lead time . _ european journal of operational research _ * 173*(2 ) , pp . 617 - 636 . so , k.c . , zheng , x. ( 2003 ) . impact of supplier s lead time and forecast demand updating on retailer s order quantity variability in a two - level supply chain . _ international journal of production economics _ * 86 * , pp . 169 - 179 .
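as a complement to the analytical measures surveyed above , the variance amplification is easy to probe by simulation . the sketch below is hypothetical and deliberately simplified ( it is not the model analysed in this paper ) : i.i.d . normal demand , i.i.d . lead times , and an order - up - to policy whose lead - time demand forecast is a moving average over the last m periods . it estimates the ratio var(orders)/var(demand) and shows how lead - time variability inflates it :

```python
import numpy as np

rng = np.random.default_rng(42)

def bullwhip_ratio(T=100_000, mu_d=20.0, sd_d=4.0, lead_times=(1, 2, 3), m=5, z=1.65):
    # single-echelon order-up-to policy; demand and lead time are both forecast by
    # moving averages over the last m periods.  returns var(orders) / var(demand).
    d = rng.normal(mu_d, sd_d, T)                    # i.i.d. customer demand
    L = rng.choice(lead_times, T).astype(float)      # i.i.d. observed lead times
    orders = np.zeros(T)
    prev_S = None
    for t in range(m, T):
        d_hat = d[t - m:t].mean()                    # demand forecast
        L_hat = L[t - m:t].mean()                    # lead-time forecast
        ltd_hat = L_hat * d_hat                      # forecast of lead-time demand
        sigma_hat = np.sqrt(L_hat) * d[t - m:t].std(ddof=1)
        S = ltd_hat + z * sigma_hat                  # order-up-to level
        if prev_S is not None:
            orders[t] = d[t - 1] + S - prev_S        # replenish last demand + level change
        prev_S = S
    return orders[m + 1:].var() / d.var()

print(bullwhip_ratio())                    # stochastic lead times: ratio well above 1
print(bullwhip_ratio(lead_times=(2,)))     # same mean lead time, deterministic: smaller ratio
```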
|
in this article we want to review the research state on the bullwhip effect in supply chains with stochastic lead times and contribute to quantifying the bullwhip effect . we analyze the models quantifying the bullwhip effect in supply chains with stochastic lead times and find advantages and disadvantages of their approaches to the bullwhip problem . using real data we confirm that real lead times are stochastic and can be modeled by a sequence of independent identically distributed random variables . moreover we modify a model where stochastic lead times and lead time demand forecasting are considered and give an analytical expression for the bullwhip effect measure which indicates that the distribution of a lead time and the delay parameter of the lead time demand prediction are the main factors of the bullwhip phenomenon . moreover we analyze a recent paper of michna and nielsen , adding simulation results .
|
it is well known that quantum mechanics does not allow to predict the result of individual measurements but only the probabilities associated to each of the possible results .the origin of this randomness has been be subject to much debate since the early days of the quantum theory .whereas no completely satisfactory solution to this dilemma is currently accepted , there are some results which assert that a theory explaining the statistical nature of the quantum mechanics should present some `` weird '' properties . for example , because of the violations of bell s inequalities , it is commonly argued that it not possible to find a theory `` completing '' the quantum mechanics keeping the causal structure of the einstein s relativity ( nonlocality ) .another result originally due to kocher and specker ( ks ) claims that in such a more fundamental theory the value of some observable can not depend only on the state of the system , but also on the particular apparatus used to measure it ( contextuality ) .more explicitly , this kind of arguments rely on the following hypotheses : 1 . _realism_. without entering into very philosophical issues , and quite focused on our context , realism is essentially the belief there exist a physical reality independent of observers and measurement processes . in other words ,the properties of physical systems have predefined values before its measurement , or even in the absence of it ; measurements just reveal these values ._ noncontextuality_. it asserts that the predefined values assumed to exist by realism are independent of how the observer manages to measure them . in particular , in the framework of the quantum mechanics , it implies that the value of a quantum observable is independent of which other compatible observables are measured along with it .locality_. it is the statement that the value of an observable measured at some point of the spacetime does not depend on whether or not another observable is measured in another point causally disconnected ( i.e. space - like separated ) from the first one . in practical terms localityis a special case of noncontextuality , as local observables are a particular case of compatible observables .different versions of the bell - ks theorem claim that quantum mechanics can not satisfy 1 and 2 ( quantum contextuality ) or 1 and 3 ( quantum nonlocality ) .however , there is another very fundamental property not satisfied in these results : 1 . _absence of incompatible statistics_. we shall say that a theory presents `` incompatible statistics '' if it is able to assign concrete values ( or bounds ) to the statistics of joint measurements of two or more noncommuting quantum observables .it is the main goal of this work to point out that bell - ks - like arguments do not exclude completely the possible realistic and noncontextual or local character of quantum mechanics .what they exclude is the existence of realistic and noncontextual or local alternative theories accounting for the results of quantum experiments , and assigning values or ( nontrivial ) bounds to statistics of joint measurements of two ( or more ) noncommutative observables ; i.e. not satisfying 4 above .however , if one requires those models to be unable to make statistical statements on a joint measurement of two noncommutative observables [ i.e. assumptions 1&2(or 3)&4 above ] , as the quantum mechanics does , none of them , up to our knowledge , is in contradiction with quantum mechanics . 
to explain why the presence of `` incompatible statistics '' is tacitly assumed in the bell - ks theorem, we shall focus our discussions on proofs assuming noncontextuality . since locality is a particular case of noncontextualitythere is no loss of generality .specifically , in the subsequent sections we analyze two simple proofs of `` quantum contextuality '' .the arguments given there should convince the reader that a similar line of reasoning can be applied to other more complicated proofs . a comment on the relation of our conclusions with the locality assumption and with proofs involving inequalities shall be given later on .in a recent paper cabello et al . have given a simple proof of quantum contextuality which is specially convenient for our purposes .in this proof , they consider five boxes which can be either empty or full .thus , the notation is adopted to represent the probability that the box is full `` 1 '' , and the box is empty `` 0 '' , and likewise for both boxes full and both empty .then , consider that the boxes are prepared in some state such that the following two equations are satisfied by assuming that the result of finding the boxes empty or full is predetermined and independent of which boxes are opened ( i.e. noncontextuality of results ) it is easy to conclude that however , this implication fails to hold for quantum systems .the counterexample is given by a three - level system in the state .the role of opening the boxes is played by measuring the projectors onto the five states : , , , , and , whereas empty and full are equivalent to obtain the result 0 or 1 , respectively .since the states and are orthogonal , the respective projectors and are compatible , and the joint probability to obtain the result 0 for and 1 for is well defined and given by .the same happens for the other four probabilities in , , and , and we obtain , however , which contradicts .this contradiction seems to indicate that `` the value a quantum observable is not predetermined and independent of which other compatible observables are measured along with it '' . in other words , apparently the quantum mechanics does not satisfy the two aforementioned properties simultaneously .now , after a careful checking , the reader will not have any problem to conclude that if and are satisfied , and the results are predetermined and independent of which boxes are opened , not only the condition is fulfilled but also must be satisfied as well .thus , the conditions and state the impossibility of some results of any pair of boxes .however , in the quantum counterexample only statements about outcomes of the measurement pairs , , , and can be made , for the rest of combinations quantum mechanics can not say anything as the corresponding projectors are not orthogonal .hence , this is an example of noncontextual and realistic model presenting `` incompatible statistics '' when trying to reproduce the results of a quantum experiment .note that , in order to avoid quantum incompatible joint statistics here , we would need to consider a set of five compatible quantum measurements , which means to consider five orthogonal projectors .since the space has dimension 3 , this is not possible .even so , no violations of eq . 
can be obtained for orthogonal projections because the theory in essentially classical in that case , as there is no noncommuting object involved .therefore , a acuter interpretation of the violation of condition eq .is that `` under the assumption that it is possible to assign concrete values on the results of some incompatible quantum measurements , the values of a quantum observable can not be not predetermined and independent of which other compatible observables are measured along with it '' . in other words , nothing about the contextuality and realism of quantum mechanics seems to be concluded from the violation of eq ., unless we assume that it is possible to assign concrete statistics to incompatible quantum measurements .the joint hypothesis 1&2 is not violated , what is violated is 1&2&4 .it may be thought that this way to understand the violation of eq .is something particular to this model , which might not apply in other proofs of quantum contextuality ; in fact , there are many proofs of this .then , we may pose the following question : is it possible to prove quantum contextuality avoiding predictions involving quantum incompatible observables ?, the answer seems to be negative . to illustrate why that is the case, we shall analyze another proof based on an argument of ks type .in mermin suggested a nice and pedagogical proof of the bell - ks theorem ( see also ) .the proof is formulated with an arrangement of three qubits , so that the total space is eight - dimensional .mermin considers ten observables which , for illustrative purposes , can be sited along the intersection points of a star , fig .the observables only take the values 1 or , and by we denote the ( predetermined value of the ) result of the measurement for the observable at the point labeled by , considered to be independent of which other observables are measured along with it . by hypothesiswe consider that the following equations are satisfied : these correspond to the multiplication of the results of a measurement of the four observables along each non - horizontal line of the star . now , since the measurement results are supposed to be predetermined and independent of which other observables are also measured , we can multiply the four equations and , because , we obtain therefore , we conclude that if eqs . are satisfied , the product of results of a joint measurement of the four observables along the horizonal line of the star has to be 1 .consider now a quantum observable given by the self - adjoint operator .we define a `` valuation '' function that assigns a numerical value to the observable with the aim to be its value before ( and after ) its measurement . so that takes values on the spectrum of .now , there is a very natural condition to be imposed on this function ; if is a set of mutually commuting observables , and some functional relation of the form is satisfied , then =0 ] for a commuting set can be motivated because they share the same spectral basis .for instance , consider that , then is a quantum observable with eigenvalue 0 . because the commutation , the spectral basis of is also the spectral basis of each of and the eigenvalues of are given simply by adding sequences of eigenvalues of .since , by assumption , all of these sums have to be 0 , and the valuation function only takes values on the spectrum of , we conclude that .coming back to eqs . 
, suppose now that , where the choice of observables is given in fig .all of these observables take values or , and the observables sharing the same line in the star are commuting . precisely from this commutation it is immediate to check that eqs . are satisfied , i.e. the multiplication of the observables on each nonhorizontal line gives the identity , and then the same relation is fulfilled for its predetermined values given the valuation function .therefore , if the values of these quantum observables are predetermined and independent of which other observables are measured along with it , eq .must be also satisfied .however this is not true , because the product of the observables on the horizontal line is , and they are commuting : thus , one could conclude again , as in the previous section , that `` the value a quantum observable is not predetermined and independent of which other compatible observables are measured along with it '' . nevertheless , again, sensu stricto , this conclusion should be stated as `` under the assumption that it is possible to assign concrete values on the results of some incompatible quantum measurements , the values of a quantum observable can not be not predetermined and independent of which other compatible observables are measured along with it '' .the reason for this is given in the arrangement of results in fig .1(c ) . from eqs .and this combination of results is impossible because condition implies whereas condition imposes , so that in the the notation of the previous section , .however , quantumly this combination can not be considered to be impossible or possible with some probability because it would imply to make simultaneous measurements of two or more non - commuting observables , see fig .despite we have analyzed just two versions of the bell - ks theorem ( an analysis of all of them is very far from the scope of this paper ) , we aimed at illustrating sufficiently well why it is not feasible to find a proof of quantum contextuality without the tacit assumption that there exist concrete assigned values for the joint statistics of some quantum incompatible events .namely , the proofs of contextuality are based on several `` test '' equations that must be satisfied for a realistic and noncontextual theory [ e.g. eqs . and eqs . ] . in order to find a contradiction with some of these test equationsit is necessary that the set of quantum measurements includes some incompatible events ( despite that only joint measurements of compatible events are required to check these equations ) , otherwise the situation reduces to an essentially classical theory in the common spectral basis .however , if there are incompatible measurements , by substituting in the test equations some particular values of two or more of these incompatible measurement ( as many as needed ) , it is always possible to find to a situation which violates the test equations but that the quantum mechanics can not falsify because it involves the joint measurement of two or more incompatible observables .thus , the presence of `` incompatible statistics '' can be seen a loophole to escape from the standard conclusions of the bell - ks theorem .it is worth to comment the relation of the previous results with the local realism as tested by bell inequalities .first of all , as the reader might have already noted , the hardy like and mermin s ten observable proof are very convenient as both can be recast to test local realism . 
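the concrete pauli products referred to above did not survive the extraction of this text , but a standard assignment for mermin s star ( six single - qubit operators and four three - qubit products ) reproduces exactly the behaviour described : all observables on a common line commute , every non - horizontal line multiplies to the identity , and the horizontal line multiplies to minus the identity . the following python / numpy fragment is an illustrative verification :

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# a standard assignment of the ten observables of mermin's star (three qubits)
obs = {
    "x1": kron3(X, I2, I2), "x2": kron3(I2, X, I2), "x3": kron3(I2, I2, X),
    "y1": kron3(Y, I2, I2), "y2": kron3(I2, Y, I2), "y3": kron3(I2, I2, Y),
    "x1y2y3": kron3(X, Y, Y), "y1x2y3": kron3(Y, X, Y),
    "y1y2x3": kron3(Y, Y, X), "x1x2x3": kron3(X, X, X),
}

lines = [                      # four non-horizontal lines followed by the horizontal one
    ["x1", "y2", "y3", "x1y2y3"],
    ["y1", "x2", "y3", "y1x2y3"],
    ["y1", "y2", "x3", "y1y2x3"],
    ["x1", "x2", "x3", "x1x2x3"],
    ["x1y2y3", "y1x2y3", "y1y2x3", "x1x2x3"],
]

for names in lines:
    P = np.eye(8, dtype=complex)
    for nm in names:
        P = P @ obs[nm]
    commuting = all(np.allclose(obs[a] @ obs[b], obs[b] @ obs[a])
                    for a in names for b in names)
    sign = "+1" if np.allclose(P, np.eye(8)) else ("-1" if np.allclose(P, -np.eye(8)) else "?")
    print(names, "pairwise commuting:", commuting, "product:", sign)
# the first four products are +1 while the horizontal line gives -1, which is the
# quantum behaviour that contradicts the noncontextual value assignment
```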
with the substitution of the noncontextuality assumption by the locality one , see and for each of them respectively .thus , the apparent violation of the local realism can be again reinterpreted as a byproduct of the tacit assumption that the considered local models assign probabilities to incompatible events in quantum mechanics .a similar thing happens with proofs based on inequalities .for instance , if , , and are random variables taking values , it can be proved that : which is the so - named chsh inequality , and where denotes the average over some ensemble .quantum mechanics violates this inequality for spin observables of two particles , , and , where stands for the pauli matrix in the direction .the observables are always compatible with the observables , but and must be incompatible to obtain a violation of , and the same applies to and . from this fact we can derive a bound from eq . which involves the simultaneous measurement of incompatible observables .to that aim it is convenient to rewrite in terms of probabilities . by expanding the correlations functions and because of the dichotomic character of the results , we obtain the compact expression , where denotes the probability to obtain the same result in the simultaneous measurement of and .thus , eq . reads then , by taking into account that for whatever observables , we can conclude that hence , we have obtained a statement about the probability , , which inevitably requires the simultaneous measurement of noncommuting observables .this inequality explicitly manifests the implicit presence of `` incompatible statistics '' in the chsh inequality . in summary ,arguments with inequalities also presents statement about incompatible statistics .in this work we have analyzed several versions of the bell - ks theorem with the goal of explaining why there exist no conflict with the assumptions of realism and noncontextuality ( or locality ) , unless we assume the existence of assigned values or bounds on joint probabilities of quantum incompatible measurements . from a different point of view, the bell - ks theorem can be reinterpreted as the absence of a theory providing predictions to quantum incompatible measurements respecting the properties of realism and noncontextuality or locality .therefore , if one accepts the belief that quantum incompatible events are indeed incompatible , it does not seem to be any problem to assume quantum mechanics to be realistic and noncontextual ( or local ) , because any trial to find some contradiction seems to inevitably require the assumption that there exist concrete values or bound for joint measurements of incompatible quantum observables .in this regard , although in the present work a mathematical proof precluding a quantum contradiction with realism and noncontextuality while keeping absence of incompatible statistics has not be given , we hope to have provided sufficient arguments to support that such a contradiction , if possible , needs an alternative line of reasoning different from the existent ones .we are grateful to alfredo luis for illuminating discussions . financial support by the spanish mineco grant fis2012 - 33152 and the cam research consortium quitemad+ grant s2013/ice-2801 is acknowledged .see for example : a. peres , j. phys .a : math . gen . * 24 * , l175 ( 1991 ) ; j. r. zimba and r. penrose , stud .* 24 * , 697 ( 1993 ) ; m. kernaghan , j. phys . a : math .gen * 27 * , l829 ( 1994 ) , a. cabello , j. estebaranz and g. 
garcía - alcaine , phys . a * 212 * , 183 ( 1996 ) ; a. cabello and g. garcía - alcaine , j. phys . a : math . gen . * 29 * , 1025 ( 1996 ) and references therein . during the completion of this work , we have become aware of the work `` m. żukowski and c. brukner , j. phys . a : math . theor . * 47 * , 424009 ( 2014 ) '' , where the authors point out that the violation of bell s inequalities requires a premise equivalent to the existence of joint probabilities of all conceivable measurements . the result of eq . above provides a nontrivial bound to the value of one of these probabilities .
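as a quick numerical companion to the discussion of the chsh inequality in the text , the quantum value of the chsh combination can be evaluated directly for the singlet state and the usual optimal measurement angles . this is an illustrative python / numpy sketch ; the maximum modulus it reproduces is the tsirelson bound :

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    # dichotomic spin observable along an angle in the x-z plane (eigenvalues +/- 1)
    return np.cos(theta) * Z + np.sin(theta) * X

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet state

def E(theta_a, theta_b):
    # quantum correlation <psi| (a.sigma) x (b.sigma) |psi>
    O = np.kron(spin(theta_a), spin(theta_b))
    return np.real(psi.conj() @ O @ psi)

a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)   # about -2.828, i.e. |S| = 2*sqrt(2) > 2, violating the classical bound
```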
|
we analyze a possible loophole to the conclusion of the bell - ks theorem that quantum mechanics is not compatible with any realistic and noncontextual or local theory . we emphasize that the models discarded by bell - ks - like arguments possess a property not shared by quantum mechanics , i.e. the capability to make non - trivial statements about the joint statistics of quantum incompatible observables . by ruling out this possibility , nothing seems to prevent a realistic , noncontextual or local view of quantum mechanics .
|
electromagnetic radiation arising from charged particle motion at scales outside the quantum - mechanical limit can be described completely from maxwell s equations and the distribution of charges and their accelerations .most university courses begin with these fundamental relations , proceed to intermediate results such as the linard - wiechert potentials and/or the larmor formula for the power radiated from an accelerated charge , and use these to derive the properties of classical named radiation processes such as synchrotron , transition , and vavilov - cherenkov radiation where an ( at least semi- ) analytic solution can be found .this approach leads to a greater understanding of both radiation phenomenology , such as relativistic beaming , and of those physical situations where the rather special assumptions on the particle motion required to find an analytic solution apply .the focus on classical named radiation processes however can leave the impression that these are the fundamental mechanisms of electromagnetically radiating systems , whereas they are really short - hand for a type of charged particle acceleration which results in a particular radiation field .because of this focus , there is a tendency among physicists to ascribe the radiation from complex systems to a combination of these classical processes .this tendency can lead to confusion however even when a physical situation deviates only slightly from the classical idealised cases .for instance , consider the titles of the following papers : `` synchrotron radiation of charged particles passing through the boundary between two media '' , `` on the erenkov threshold associated with synchrotron radiation in a dielectric medium '' , and `` erenkov radiation from an electron traveling in a circle through a dielectric medium '' .the titles could equally have referred to ` transition radiation ' , ` vavilov - cherenkov radiation ' , and ` synchrotron radiation ' respectively . indeed ,numerous papers exist which note that the same fundamental physics can explain multiple mechanisms , for instance schwinger , tsai and erber s `` classical and quantum theory of synergic synchrotron - erenkov radiation '' .however , the problem of which mechanism to attribute to what process can be avoided entirely by going ` back to basics ' and formulating a general description of radiation processes according to the well - known mantra `` electromagnetic radiation comes from accelerated charges '' .the goal of this paper is therefore to develop a new methodology the ` endpoint formulation ' by which the calculation of the radiated fields from accelerated particles can be performed completely generally via a method which is intuitively understandable at an undergraduate level . for generality , our formulation must be equally suited to that of a simple system allowing an analytic solution as to a complex one requiring a numerical solution , and allow not just the calculation of the total radiated power , but also the time- and frequency - dependent electric field strengths .we thus proceed as follows . in section [ endpoint_derivation ] , we begin with expressions for the electric field from the well - known linard - wiechert potentials , and use these to develop our formulation , which is based on the radiation from the instantaneous acceleration of a particle from / to rest ( an ` endpoint ' ) . given the radiation from a single endpoint , we describe how to use this to calculate the radiation from an arbitrary complex physical situation . 
in section[ compare ] , we use the endpoint formulation in specific applications to numerically reproduce the well - known results from idealised classical phenomena such as synchrotron and transition radiation , and tamm s description of vavilov - cherenkov radiation from finite particle tracks .section [ discussion ] discusses the use and range of applicability of our endpoint formulation . in section [ applications ] , we return to the original motivation for this work from the point of view of the authors , and demonstrate how this formulation resolves outstanding questions relating to the calculation of radiation from high - energy particles cascades in the moon and the earth s atmosphere .the explanations also serve to illustrate how the application of classical radiation emission mechanisms to complex physical situations has led to incorrect and misunderstood conclusions .we wish to describe electromagnetic radiation in terms of particle acceleration . rather than writing down a general function for the charged particle distribution and its time derivatives , and then deriving results for specific cases of that charge distribution , here we adopt a ` bottom - up ' approach .thus we begin by describing the radiation from a simple single radiating unit , that being the instantaneous acceleration of a charged particle either to or from rest , and later we will show how to combine such basic units into more complex physical situations .radiation from an instantaneous particle acceleration is known best through larmor s formula and its relativistic generalisation , which gives the total power radiated per unit frequency and solid angle . in the interest of clarity , and because we wish to preserve phase information , we proceed below to re - derive the emitted radiation in terms of the electric - field in the time- and frequency - domains . by beginning with the linard - wiechert potentials only, we hope to emphasise the generality of our result .electric field components due to particle motion and acceleration can be readily separated using the linard - wiechert potentials , which are derived directly from maxwell s equations in the relativistic case , see e.g. jackson . using an arbitrary produces the result for eq .[ lweqn1 ] from zas , halzen , and stanev in the case where the relative permeability , , is unity . ] , and reproduced below : {\rm ret } \nonumber \\\vec{a}(\vec{x},t ) & = & \left [ \frac{e \vec{\beta}}{(1 - n \vec{\beta } \cdot \hat{r } ) } \right]_{\rm ret } \label{lwpot}\end{aligned}\ ] ] where is the distance from the point of emission to an observer , a unit vector in the direction of the observer , ( is the velocity vector of the particle ) , and is the medium refractive index .the subscript ` ret ' denotes evaluation at the retarded time . using eq .[ lwpot ] , it is possible to calculate the total static and vector potentials from a distribution of source charges , by summing the contributions from individual charges in the distribution .these can then be used to calculate the corresponding electric and magnetic fields .this alternative approach is used for instance by alvarez - muiz et al . in the ` zhaires ' code . in our methodologyhowever , we calculate the electric fields directly , since from the linard - wiechert potentials , the electric field in a dielectric , non - magnetic medium due to a particle of charge ( in c.g.s . units ) can be expressed ( see e.g. 
jackson ) as follows : {\rm ret } \nonumber \\ & + & \frac{q}{c } \left [ \frac{\hat{r } \times [ ( \hat{r } - n \vec{\beta } ) \times \dot{\vec{\beta}}]}{(1 - n \vec{\beta } \cdot \hat{r})^3 r } \right]_{\rm ret } \label{lweqn1}\end{aligned}\ ] ] with the time - derivative of , and the usual relativistic factor of . the first term is the near - field term , since the strength of the resulting fields falls as in the case of , it reduces to coulomb s law .the second term is the radiation term , with the familiar dependence .the well - known maxim ` radiation comes from accelerated charges ' is seen easily by the dependence of this term on . in most practical applications, the near - field term presents only a minor correction to the observed fields . from here on we formulate our methodology purely from the radiated field term only .thus the following expressions for the electric fields will be those arising solely from the particle acceleration . the applicability of this approximation is discussed in sec .[ discussion ] .the most simple acceleration event is the instantaneous acceleration of a particle at rest at time to a velocity , i.e. , or equivalently the deceleration of such a particle from velocity to rest , i.e. .such events can be termed , respectively , ` starting points ' and ` stopping points ' , ` acceleration ' and ` deceleration ' , or ` creation ' and ` destruction ' events .we define the electric field resulting from these events as , where the acceleration vector can be either parallel ( ) or anti - parallel ( ) to the velocity vector , corresponding respectively to acceleration ( at a starting point ) or deceleration ( at a stopping point ) .since changes only in magnitude , we write ( a unit vector ) , and use similar notation for and the time - derivatives : .thus only the scalar components will need to be expressed as functions of time .we proceed to derive from the rhs of eq .[ lweqn2 ] in terms of the ` lab - time ' ( observer time ) in both the time- and frequency - domains .similar derivations in both domains in the case of linear particle tracks ( effectively two endpoints see sec . [ cherenkov ] ) appear also in alvarez - muiz , romero - wolf , and zas , for the case of `` erenkov radiation '' : note that in the following no assumption on the nature of the radiation need to be made .the expression for the radiated component of the electric field ( from eq .[ lweqn1 ] ) for the instantaneous particle acceleration described above is : }{(1 - n \vec{\beta } \cdot \hat{r})^3 r } \right]_{\rm ret } \label{lweqn2}\end{aligned}\ ] ] where we have removed the term from eq .[ lweqn1 ] since .we begin the frequency - domain derivation by taking the fourier - transform of eq .[ lweqn2 ] converted to the retarded time using and : a conceptual and mathematical difficulty to overcome is that at the time of acceleration , , and hence , is undefined .this can be dealt with by letting the acceleration last a finite ( but small ) time interval , then taking the limit as .writing , the acceleration takes place over the interval , during which we have , , and .thus the frequency - domain integral becomes : \right ) \ , { \mathrm{d}t^{\prime \prime}}\end{aligned}\ ] ] this somewhat difficult integral can be greatly simplified by applying the limit , in which case the integral and limit eventually evaluate to the rather simple form : \label{endpoint_eqn_f}\end{aligned}\ ] ] where we have written . 
recall that the ` ' is positive when the acceleration is parallel to the motion ( acceleration from rest ) , and negative when the acceleration is anti - parallel to the motion ( acceleration to rest ) . for the time - domain derivation, we again consider the radiated component of eq .[ lweqn1 ] .we can calculate the time - integral of the electric field for one starting point or stopping point , taking into account the conversion from retarded emission time to observer - time as and , via : }{(1 - n \vec{\beta } \cdot \hat{r})^3 r } \right]_{\rm ret } { \mathrm{d}t}\nonumber \\ & = & \frac{q}{c } \int_{\delta { t^{\prime } } } \frac{\hat{r } \times [ ( \hat{r } - n \vec{\beta } ) \times \dot{\vec{\beta}}]}{(1 - n \vec{\beta } \cdot \hat{r})^2 r}\ { \mathrm{d}t^{\prime}}\nonumber \\ & = & \frac{q}{c } \int_{{t^{\prime}}_0}^{{t^{\prime}}_1 } \frac{\mathrm{d}}{{\mathrm{d}t^{\prime}}}\left ( \frac{\hat{r } \times [ \hat{r } \times \vec{\beta}]}{(1 - n \vec{\beta } \cdot \hat{r } ) r } \right ) { \mathrm{d}t^{\prime}}\nonumber \\ & = & \pm \frac{q}{c } \left ( \frac{\hat{r } \times [ \hat{r } \times \vec{\beta}^*]}{(1 - n \vec{\beta}^ * \cdot \hat{r } ) r } \right ) \end{aligned}\ ] ] here , denotes the observer - time window corresponding to the retarded - time window , which encompasses the acceleration process . for a starting point ( sign ) , the particle is at rest at the time and has velocity at .the opposite is the case for a stopping point ( sign ) . since the acceleration is instantaneous , the distance from the particle to the observer at the acceleration time is constant , and the time at which an observer would view the radiation emitted at time is given by .the time window in eq .[ time_derivation ] is therefore chosen to satisfy .while the electric field as a function of time becomes infinite in the case of instantaneous acceleration , the time - integrated electric field is finite and independent of the specific choice of .consequently , one can calculate the time - averaged electric field over the time - scale as }{(1 - n \vec{\beta}^ * \cdot \hat{r } ) r } \right).\ ] ] an adequate choice of is dictated by the time resolution of interest .if is chosen significantly longer than the time - scale over which the acceleration process occurs which is in particular the case for the instantaneous acceleration considered here the details of the acceleration process are of no importance . at first glance , the results given in eqs .[ endpoint_eqn_f ] and [ endpoint_eqn_t ] for a radiating endpoint may appear as yet another special case of particle motion with very limited application .however , observe that in arriving at eqs .[ endpoint_eqn_f ] and [ endpoint_eqn_t ] , we have made no assumptions about the macroscopic motion of the particle only that at a given instant , the particle becomes accelerated .as we will see , validating this assumption is really a question of describing the particle motion with sufficient accuracy for the frequency - range / time - resolution of interest , rather than being a limitation of the endpoint approach . 
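Because the key expressions above arrive garbled in this extraction, a short numerical sketch may help fix the ideas. It assumes, consistent with the structure of Eqs. [endpoint_eqn_f] and [endpoint_eqn_t] as far as they can be read here, that the frequency-domain field of a single endpoint is proportional to ±(q/c)(1/R) · r̂×(r̂×β*)/(1 − n β*·r̂) with a phase referenced to the observer (retarded) time, and that the time-domain field averaged over a bin Δt carries the same geometric factor with a 1/Δt prefactor. The overall constants, sign conventions, and Fourier conventions in this sketch are assumptions, not statements of the exact formulae; all quantities are in arbitrary consistent units.

```python
import numpy as np

def endpoint_field_freq(q, beta, x_obs, x_end, t0, omega, n=1.0, c=1.0, sign=+1):
    """Frequency-domain radiation field of one endpoint.
    sign=+1: starting endpoint (acceleration from rest);
    sign=-1: stopping endpoint (deceleration to rest).
    Normalisation and phase conventions are schematic assumptions."""
    r_vec = np.asarray(x_obs, float) - np.asarray(x_end, float)
    R = np.linalg.norm(r_vec)
    r_hat = r_vec / R
    geom = np.cross(r_hat, np.cross(r_hat, beta)) / (1.0 - n * np.dot(beta, r_hat))
    phase = np.exp(1j * omega * (t0 + n * R / c))   # phase referenced to the retarded/observer time
    return sign * (q / c) * phase * geom / R

def endpoint_field_time(q, beta, x_obs, x_end, dt, n=1.0, c=1.0, sign=+1):
    """Time-domain field averaged over an observer-time bin dt."""
    r_vec = np.asarray(x_obs, float) - np.asarray(x_end, float)
    R = np.linalg.norm(r_vec)
    r_hat = r_vec / R
    geom = np.cross(r_hat, np.cross(r_hat, beta)) / (1.0 - n * np.dot(beta, r_hat))
    return sign * q / (c * dt) * geom / R
```

The later sketches in this section reuse the hypothetical `endpoint_field_freq` helper defined here.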
in following sections, we will show how arbitrary particle motion can be described in terms of such endpoints .however , before proceeding to more complex situations , it is worthwhile examining the radiation from the most simple acceleration event , a single endpoint .the radiation pattern from a single endpoint is exactly that corresponding to a once - off acceleration event .a relevant physical situation would be the -decay of a heavy element in vacuum , where the motion of the heavy nucleus can be neglected , and the emitted travels with constant velocity to infinity .there are quite a few interesting features of even this simple situation which are worthwhile to explore in greater depth . for most applications ,it is preferable to use the vectorial notation given in eqs .[ endpoint_eqn_f ] and [ endpoint_eqn_t ] to describe the radiation from a single endpoint . however , for a single event , the radiation is cylindrically symmetric about the acceleration / velocity axis , so it is common to express these equations using an observer s position described by a distance and angle to the acceleration vector ( ) .this angular dependence is seen easily from the lhs of fig .[ synch_diagram ] . for this case ,the magnitude of the electric field vector in eqs .[ endpoint_eqn_f ] and [ endpoint_eqn_t ] respectively becomes : and it is taken as given that the unit electric field vector points away from the acceleration axis for and towards it for . at all times the angle is defined to be positive in the direction of positive velocity , irrespective of the acceleration .thus under the transformation , , eqs . [ endpoint_eqn_f ] and [ endpoint_eqn_t ] are invariant , since . to illustrate , eqs .[ sincos_endpoint_eqn_f]/[sincos_endpoint_eqn_t ] have been plotted in a vacuum and dielectric for varying in fig .[ single_endpoint_fig ] .firstly , note that for a single endpoint , the magnitude of the radiation in eqs .[ sincos_endpoint_eqn_f ] , [ sincos_endpoint_eqn_t ] has _ no _ frequency - dependence .this may seem counter - intuitive , since almost all radiation processes become characterised by their particular frequency - dependence .such frequency - dependence can only be produced however by the particle acceleration appearing differently on different wave - length scales , while a point - like acceleration looks identical on all scales , so that the resulting radiation could not possibly have any dependence on the wavelength / frequency . only in the quantum - mechanical ( extremely - high - frequency regime see sec . [ discussion ] )will there be a frequency - dependence in the radiation from a single endpoint , since the particle will no longer appear point - like .secondly , observe that there is a singularity in the emitted electric field about this is the ` cherenkov ' singularity , which occurs at the cherenkov angle . 
here, the electric field strength becomes undefined .this is , of course , unphysical , since we do not observe infinite electric fields in nature .nonetheless , both eqs .[ sincos_endpoint_eqn_f ] , [ sincos_endpoint_eqn_t ] and reality can happily coexist since an observer will always observe the particle traversing some finite observation angle .writing , the divergent term in eqs .[ sincos_endpoint_eqn_f ] and [ sincos_endpoint_eqn_t ] can be expanded in the vicinity of as follows : the first term on the rhs , which diverges as , is odd about , while the second ( even ) term is finite .therefore , for any real measurement , an integral of the field about will have the divergent component cancel , leaving a finite result .in addition , any real medium will have a frequency - dependent refractive index , so that infinite field strengths will only be observed over an infinitely small bandwidth .finally , note that away from the singularity , there is a broad angular dependence which depends primarily on and .there is no emission in the exact forward direction for any values of and , though for highly - relativistic particles in vacuum , the radiation pattern rises extremely rapidly away from , producing the characteristic forward ` beaming ' expected .also note that for mildly- and sub - relativistic particles , the emitted radiation at all angles changes with the particle energy , whilst in the ultra - relativistic regime , only extremely near to does the radiation pattern change with energy .we have derived both vectorial ( eq . [ endpoint_eqn_f ] and eq .[ endpoint_eqn_t ] , in terms of , , , , and ) and scalar ( eq . [ sincos_endpoint_eqn_f ] and eq .[ sincos_endpoint_eqn_t ] , in terms of , , , and ) equations for the radiation from an endpoint . for the sake of brevity , in this section we use only the scalar notation of eq .[ sincos_endpoint_eqn_f ] and eq .[ sincos_endpoint_eqn_t ] to describe the situation .despite eq . [ sincos_endpoint_eqn_f ] and eq . [ sincos_endpoint_eqn_t ] describing the radiation resulting from a particle accelerating from / to rest , they are , in fact , more general than this .this is because an arbitrary acceleration of a particle can be viewed as a superposition of deceleration and acceleration events , which will not cancel if either , , or differ between the two endpoints . for curved particle motion at constant speed, the angle from the acceleration vector to the observer will be different for coincident endpoints , while for gradually accelerating / decelerating particles , the values of will be different for simultaneous endpoints . in either case , contributions from starting and stopping points will not cancel , and radiation will occur .conversely , if a simple linear motion with constant velocity is described piece - wise as a series of starting and stopping endpoints , the terms will cancel completely the particle will not radiate .superposition of endpoints in this way is sometimes viewed as destroying the ` old ' particle and creating a ` new ' one since this formulation is applicable only to the radiated component , static fields ( which fall as ) can be ignored , so that bringing a particle to rest ( ` stopping ' it ) is equivalent to destroying it , and accelerating a particle from rest ( ` starting ' it ) is equivalent to creating it , and _ vice versa_. 
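Before turning to composite motion, the single-endpoint behaviour just described (frequency independence, forward beaming, and the divergence at the Cherenkov angle) can be visualised by sweeping the scalar magnitude over observation angle. The shape used below, |sin θ|/|1 − nβ cos θ|, is what the scalar forms of Eqs. [sincos_endpoint_eqn_f]/[sincos_endpoint_eqn_t] suggest; the proportionality constant is omitted since it does not affect the shape.

```python
import numpy as np

def endpoint_pattern(theta, beta, n=1.0):
    """Angular shape of a single endpoint, ~ |sin(theta)| / |1 - n*beta*cos(theta)|."""
    return np.abs(np.sin(theta)) / np.abs(1.0 - n * beta * np.cos(theta))

theta = np.linspace(1e-3, np.pi - 1e-3, 4000)
vacuum_pattern = endpoint_pattern(theta, beta=0.999, n=1.0)   # strong forward beaming
medium_pattern = endpoint_pattern(theta, beta=0.999, n=1.5)   # diverges near the Cherenkov angle
theta_C = np.arccos(1.0 / (1.5 * 0.999))                      # Cherenkov angle when n*beta > 1
# note that neither shape depends on frequency, as discussed in the text
```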
an arbitrary change in particle velocity can be dealt with by combining two simultaneous , coincident endpoints , the first to ` stop ' the particle by bringing it from its old velocity to rest , the second to ` start ' the particle by accelerating it to its new velocity .multiple particles / events can be treated by adding the contributions with appropriate , , , , and . in the case of a smoothly - varying velocity , the values used at the endpoints should be representative of the average velocity between endpoints , which will tend towards the true value of as the number of endpoints used becomes large .any propagation effects between the source and the observer e.g. absorption in a medium , or transmission through an interface should be applied to the ( spherically - diverging ) radiation from each endpoint .this can be simply done , since the relevant parameters ( e.g. angle of incidence for transmission ) will be uniquely defined for each such endpoint .note that for ray - tracing methods , the rays will be diverging , and transmission problems should be handled accordingly . to illustrate , we have plotted the emitted radiation in four elementary situations in a vacuum and dielectric in fig .[ vac_di_fig ] : a single endpoint representing an electron accelerating from rest ( ` acceleration ' ) ; the deflection of an energetic electron through ( ` deflection ' ) ; the deceleration of a fast electron ( ` slow - down ' ) ; and a reversal of direction in a mildly relativistic electron with no change in speed ( ` reversal ' ) . note in the three highly relativistic cases the characteristic beaming in the forward direction for the vacuum case , and about the cherenkov angle in the dielectric . for velocity reversal ,significant peaks are observed at the cherenkov angle since , while in the vacuum case , no appreciable beaming is evident and the emission is broad , which is characteristic of ( non - relativistic ) dipole radiation .the simple examples presented in fig . [ vac_di_fig ] and eqs . [ sincos_endpoint_eqn_f ] , [ sincos_endpoint_eqn_t ] themselves deal only with point - like acceleration events , while in most situations particle motion will be smooth ; however , this is not a limitation in practice .every numerical simulation necessarily describes particle motion as a series of uniform motions joined by instantaneous acceleration events , for which either of eqs .[ endpoint_eqn_f ] or [ endpoint_eqn_t ] will calculate the emitted radiation _ exactly_.the key point is that the degree to which the radiation calculated from the addition of endpoint contributions resembles the true radiation is limited only by the degree to which the simulated motion resembles the true motion .usually this means that a particle simulator must be accurate to within a small fraction of the wavelengths of interest for a discussion of this effect in practice , see for example the discussion in fig . 3 and appendix a of ref . or section 3 of ref .it is not a concern of this paper . 
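As a concrete illustration of this piece-wise description, the sketch below turns a sampled trajectory (positions at discrete times, as a particle-tracking simulation would provide) into start/stop endpoint pairs and sums their frequency-domain contributions. It reuses the hypothetical `endpoint_field_freq` helper from the earlier sketch, and the segment velocities are derived from consecutive positions so that the endpoints are mutually consistent, a point taken up in the following paragraphs.

```python
import numpy as np
# uses endpoint_field_freq from the earlier sketch (assumed importable here)

def sampled_track_field(q, times, positions, x_obs, omega, n=1.0, c=1.0):
    """Frequency-domain field of a trajectory given as positions[i] at times[i].
    Each straight segment contributes a starting endpoint at its first sample
    and a stopping endpoint at its second; as written, the very first and very
    last events model a particle that begins and ends at rest (see the
    discussion of initial/final endpoints below)."""
    E = np.zeros(3, dtype=complex)
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        # segment-averaged velocity implied by the sampled positions,
        # guaranteeing consistency between the two endpoints of the segment
        beta = (np.asarray(positions[i + 1], float) - np.asarray(positions[i], float)) / (c * dt)
        E += endpoint_field_freq(q, beta, x_obs, positions[i],     times[i],     omega, n, c, sign=+1)
        E += endpoint_field_freq(q, beta, x_obs, positions[i + 1], times[i + 1], omega, n, c, sign=-1)
    return E
```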
in our experience of applying the endpoint formalism to complex physical situations , two important cases where the formalism can be mis - applied have come to our attention .we discuss each below .the first case concerns the interpretation of the initial and final endpoints used to describe particle motion .if the initial endpoint is a starting / acceleration endpoint , this models the situation of a particle sitting at rest until suddenly accelerated , in which case the calculated radiation pattern will include a large contribution from this sudden acceleration . therefore , this will be the correct choice if , in the physical situation being modelled , the particle genuinely does begin from rest .if it does not , the large initial contribution will be artificial and incorrect .if , on the other hand , the initial point is a stopping / deceleration endpoint , the implied motion is that of a particle moving with uniform velocity for an infinite time until the time of the initial endpoint . given that such infinite uniform motion does not generally occur in nature , the usual interpretation for this choice is that of a particle beginning the calculation with a non - zero velocity , and that whatever motion it undertook before that point is not of interest to the calculation .the choice of final endpoint , obviously , has similar implications .for instance , in the case of synchrotron ( curvature ) radiation in sec .[ synch_sec ] , the initial point must be a stopping endpoint , and the final point a starting endpoint , since the radiation of interest is only that from the curved motion of the particle .however , for the radiation from a finite particle track in sec .[ cherenkov ] , the situation of interest really is that of a particle which accelerates from and decelerates to rest , and hence the initial and final endpoints are starting and stopping endpoints respectively .the second case involves ensuring that the velocity used at a starting endpoint and a subsequent stopping endpoint is consistent with the implied motion of the particle : if the particle is accelerated at a starting endpoint at and arrives at a stopping endpoint at , then the condition must be satisfied .furthermore , note that must be identical at both the starting and stopping endpoints .if changed between the starting and subsequent stopping point , this would imply an additional acceleration , and the radiation associated with this acceleration would not be accounted for , leading to incorrect results .this problem arises for an accelerating particle when the instantaneous particle positions and ` true ' velocities are known ( or simulated ) at discrete times , so that in general the velocities will not point towards the next known position . in such a case , one must realise that the velocities used in the endpoint treatment are representative of the time - averaged true velocity between endpoints . therefore, the correct treatment is to re - normalise each instantaneous ( in both direction and magnitude ) to the appropriate to fulfill the above - mentioned condition .this will become important in the following section ( sec . [ synch_sec ] ) in the case of synchrotron radiation .it is instructive to recreate classical radiating systems and reproduce the classical results using our endpoint formulation .we do this below for the cases of synchrotron , vavilov - cherenkov , and transition radiation . , the angle from the velocity vector ; ( b ) schematic diagram of the contributions from two terms in the sum of eq . 
[ synch_eqn ] .[ synch_diagram],title="fig:",scaledwidth=15.0% ] , the angle from the velocity vector ; ( b ) schematic diagram of the contributions from two terms in the sum of eq .[ synch_eqn ] .[ synch_diagram],title="fig:",scaledwidth=30.0% ] synchrotron radiation arises from a relativistic particle undergoing infinite helical motion ( a superposition of circular motion in a 2d plane and linear motion perpendicular to the plane ) in a vacuum ( ) , as is typically induced by the presence of a uniform magnetic field . here, we treat the case of a particle of velocity executing a single circular loop of radius in the plane only in a vacuum .this motion is viewed by an observer at a very large distance so that the unit - vector in the observer diretion can be assumed to remain constant throughout the motion .such motion can be represented by a series of starting and stopping points , schematically in fig .[ synch_diagram ] , and mathematically as follows : where the calculation of each velocity vector , time , and distance to the observer is a matter of simple geometry .while the term can be assumed constant , also changes the relative phase - factors between emission at different endpoints .note that every starting term is balanced at any time by a simultaneous stopping term at the same position .the reason the terms do not cancel is due to the direction of the velocity vectors differing between simultaneous starting and stopping terms .note also that there is only one term with ( a stopping event ) and one with ( a starting event ) .this represents the physical situation of a particle moving in some direction from , executing the loop described , then continuing on to in the original direction , as described in more detail in sec .[ firstlastdiscussion ] .this is necessary so that the radiation modelled is due to the curvature of the particle ( that is , synchrotron radiation ) , rather than any sudden and large initial and final accelerations to bring the particle from / to rest , which would tend to dominate .finally , also note that the magnitude of the vectors used in the calculation must be slightly decreased from the ` true ' value , since the ( straight line ) distance between endpoints is slightly shorter than the distance along a circular arc ; similarly , the vectors will not be quite tangential . the necessity of this is also discussed in sec .[ firstlastdiscussion ] .we present numerical evaluations of eq .[ synch_eqn ] in fig .[ synch_fig ] for a loop of radius m and true velocity ( ) , equivalent to an mev electron moving perpendicular to a gauss field .the observer is assumed to lie in the very - far - field in the plane of the loop .for the plot in the time ( frequency ) domain , we present both a direct calculation , and the results from taking a fourier transform from the calculation in the frequency ( time ) domain .all the characteristic features are reproduced perfectly : a steep spectral fall below the cyclotron frequency ( ) , and a slow rise in power until an exponential cut - off above the critical frequency ghz . in the time - domain , a sharp pulse of characteristic width is seen . 
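A sketch of how such a single-loop calculation can be assembled from endpoints follows; it again reuses the hypothetical `endpoint_field_freq` helper. The loop is broken into N chords, the chord speed is slightly reduced from the true speed as noted above, and, to avoid the artificial from-rest/to-rest contributions discussed above, only the simultaneous stop/start pairs at the vertices are summed. This amounts to assuming the incoming and outgoing straight sections are collinear with the first and last chords; that simplification, together with the radius, speed, and observer position used below, are illustration choices of this sketch rather than the values used for the figure in the text.

```python
import numpy as np
# uses endpoint_field_freq from the earlier sketch (assumed importable here)

def loop_field(q, R_loop, beta_true, x_obs, omega, N=1000, c=1.0):
    """One circular loop in vacuum (n = 1) built from N chords.  Radiation
    comes only from the simultaneous stopping/starting endpoint pairs at the
    vertices, i.e. purely from the direction changes (curvature)."""
    dphi = 2.0 * np.pi / N
    dt = R_loop * dphi / (beta_true * c)            # time to traverse one arc segment
    chord = 2.0 * R_loop * np.sin(dphi / 2.0)       # chord is shorter than the arc
    beta_seg = chord / (c * dt)                     # hence slightly below beta_true
    verts = [R_loop * np.array([np.cos(i * dphi), np.sin(i * dphi), 0.0])
             for i in range(N + 1)]
    chords = [beta_seg * (verts[i + 1] - verts[i]) / chord for i in range(N)]
    E = np.zeros(3, dtype=complex)
    for i in range(1, N):
        t_i = i * dt
        E += endpoint_field_freq(q, chords[i - 1], x_obs, verts[i], t_i, omega, sign=-1)  # stop old chord
        E += endpoint_field_freq(q, chords[i],     x_obs, verts[i], t_i, omega, sign=+1)  # start new chord
    return E

# illustration only: far-field observer in the plane of the loop;
# increase N when probing frequencies whose wavelength approaches the chord length
x_obs = np.array([1.0e6, 0.0, 0.0])
spectrum = [np.linalg.norm(loop_field(1.0, 1.0, 0.999, x_obs, w))
            for w in np.logspace(0, 3, 40)]
```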
that the results calculated by fourier transformdo not exactly match the direct calculations is due to the difficulty in generating sufficient data to make an accurate transform .however , the correspondence is obvious .from here on therefore , we deal only with calculations in the frequency - domain , and take it for granted that one can transmute time - trace data to spectral data and _ vice versa _ accurately and as needed .perhaps the most familiar analytic result on synchrotron radiation is the normalised , angular - integrated power spectrum for ultra - relativistic particles .this makes a useful comparison with our code and , by extension , the endpoint formalism .the common analytic result writes the power spectrum as the product of a normalisation constant and a dimensionless function , where is the ratio of the frequency to the critical synchrotron frequency : {2.25 x^2 } ) .\label{analytic_synch}\end{aligned}\ ] ] here , is a modified bessel function of the second kind .the ultra - relativistic approximation of in deriving this result means that it is only applicable in the frequency regime far above the cyclotron frequency of , i.e. . to compare this analytic result with the endpoint theory calculation, we calculate the radiated far - fields as above , but for all angles relative to the plane of gyration .these fields are then converted to a radiated power and integrated over a solid angle , and plotted against in fig .[ synch_comparison_fig ] here , we do not include this normalisation factor , equal to . ] .estimates using both and endpoints were made , to illustrate when numerical effects become important .the difference between the analytic and endpoint method results is also shown , defined as .the difference in estimates at high frequencies is due to numerical inaccuracy : this clearly reduces as the number of endpoints increases and better describes the curved track . at low frequencies ,the increasing disagreement with the analytic result is due to the ultra - relativistic approximation breaking down , and here measures the limitations in the theory , which will break down completely as . in this paper , we describe physical situations in terms of endpoints , whereas in numerical codes , the physical situation is usually described in terms of particle tracks .such a track - based description defines the total charge distribution in position and time in terms of charges moving from position to position .this implicitly defines a velocity ; given the type of particle , the energy , momentum , and -factor are also defined .if the code outputs the positions ` sufficiently ' accurately , then the implied velocity will also be ` sufficiently ' close to the true velocity at each point .it is obvious that the radiation from such a particle track can be constructed from two endpoint contributions one accelerating / creating a particle at from rest to velocity , the other decelerating / destroying the particle at . 
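In code, such a track is just two endpoint calls; the sketch below again reuses the hypothetical `endpoint_field_freq` helper, with the stopping time fixed by the track length so that the two endpoints are mutually consistent. Sweeping the observer direction shows the signal building up near the Cherenkov angle in a dielectric, where the two contributions add nearly in phase, echoing the discussion that follows.

```python
import numpy as np
# uses endpoint_field_freq from the earlier sketch (assumed importable here)

def finite_track_field(q, beta_vec, x1, t1, x2, x_obs, omega, n=1.0, c=1.0):
    """Finite uniform-velocity track: a starting endpoint at (x1, t1) and a
    stopping endpoint at (x2, t2), with t2 determined by the track length."""
    beta_vec = np.asarray(beta_vec, float)
    track = np.asarray(x2, float) - np.asarray(x1, float)
    t2 = t1 + np.linalg.norm(track) / (np.linalg.norm(beta_vec) * c)
    E  = endpoint_field_freq(q, beta_vec, x_obs, x1, t1, omega, n, c, sign=+1)
    E += endpoint_field_freq(q, beta_vec, x_obs, x2, t2, omega, n, c, sign=-1)
    return E
```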
in the very - far - field of both events, the corresponding parameters can be written , , , ; we also write .the track - length is often expressed in terms of the time interval via .the entire particle track is considered as being sufficiently far from the observer so that the approximation holds .is approached , since second - order terms in and become important .it is exactly this approximation that removed the vavilov - cherenkov radiation component from tamm s formula for the radiation from a finite particle track viewed at large distances , leaving only the ` bremsstrahlung ' contributions from the endpoints .however , this approximation and hence equation [ particle_track ] is still valid for angles satisfying , where there is essentially no vavilov - cherenkov component . ] again , it is more common to begin with the expression in eq .[ sincos_endpoint_eqn_f ] . recalling , we find the radiation from a particle track to be : note that eq .[ particle_track ] is ( to within a factor of , due to a different definition of the fourier transform ) the ` vavilov - cherenkov radiation formula ' of eq . 12 in zas , halzen , and stanev , with and .that is , although the zhs code is commonly understood to calculate ` the vavilov - cherenkov radiation ' from a cascade in a dense medium , what it actually calculates is simply ` the radiation ' due to particle acceleration .when the particles in the cascade are all travelling in the same direction in a medium with refractive index significantly different from , the radiation just so happens to very closely resemble the classical notion of vavilov - cherenkov radiation .the vavilov - cherenkov condition is plainly evident by letting the phase term that is , the observation angle tends towards the vavilov - cherenkov angle , where in which case eq . [ particle_track ] reduces to : the product , so that the radiated intensity is proportional to the apparent tracklength . thus to a far - field observer near the cherenkov angle ,the radiation seen is consistent with emission per unit tracklength , although our description makes it clear that this is not the case .we conclude our discussion on vavilov - cherenkov radiation by noting that ` true ' vavilov - cherenkov radiation , which is emitted in the absence of particle acceleration in a dielectric medium , can not be dealt with by this methodology .this comes by virtue of the fact we deal only with the ` radiation ' component of the linard - wiechert potentials , whereas contributions from unaccelerated motion must come from the ` nearfield ' term .thus in the classical treatment of vavilov - cherenkov radiation by frank and tamm in which an infinite particle track is assumed , the near - field must also be infinite .thus it should come as no surprise that a radiation - based far - field treatment can not explain this phenomenon . and the associated discussion .[ tr_fig],scaledwidth=45.0% ] transition radiation arises from a relativistic particle transitioning between two media with different refractive indices .the radiated energy was first calculated by ginzburg and frank in the case of a sharp boundary and a particle moving for an infinite time with uniform velocity parallel to the boundary surface normal .j. 
bray ( private communication ) has noted that transition radiation can be explained as a particle being destroyed on the incoming side of the boundary , then being created on the outgoing side ; this picture is consistent with ginzburg s ` mirror - charge ' explanation of transition radiation , where the radiation in vacuum from a particle entering an medium is explained by the sum of two charge contributions ( real and mirror charges ) which appear to disappear ( or mutually annihilate ) upon reaching the boundary . in terms of endpoint - theory , there exists a starting and a stopping event which are simultaneous and co - located ( or rather , infinitesimally separated ) .the contributions from these events do not cancel because 1 : the events occur in different media , and 2 : an observer must be on one or the other side of the boundary , and thus the radiation from one of the two events will have to be transmitted through the boundary layer . the observed radiation will in fact be the sum of three contributions : a ` direct ' contribution from the event in the observer s medium , a ` reflected ' contribution off the boundary from the event in the observer s medium , and a ` transmitted ' contribution from the event in the non - observer medium .this situation is shown in fig .[ tr_fig ] for an observer in the ` incoming ' medium ( medium ) with refractive index since the separation of the two endpoints is infinitely small , any such observer will be in the far field , and there will be no phase - offset between the three contributions . using the endpoint formulation , the total field in medium is then : the first two terms arise from the particle in medium ` stopping ' at the boundary ; while is the usual direction towards the observer , is the apparent observer direction seen in reflection from the boundary .the third term originates from the ` starting ' event in medium , with the apparent direction of the observer from the perspective of medium . and are reflection and transmission coefficients , which vary according to the geometry defined by and and the relative refractive indices . in all terms , the velocity should be that of the true velocity , since there is no possibility of acceleration between the two endpoints .the ` ' in the latter two terms is a reminder that field directions after reflection / transmission should be used .the field in medium can be expressed by switching and in eq .[ transition_eq ] , and recalculating the vectors and the transmission / reflection coefficients appropriately .this ` three - contribution ' formulation is the same as that noted by ginzburg and tsytovich for the case of a particle normally incident at the boundary between two infinite uniform ( but otherwise arbitrary ) media . in the case of normal incidence to a boundary , the geometry becomes substantially simpler . in this case ,the observer direction is definable by the angle from the surface normal in the observer s medium ( thus ) , and all reflection / transmission will occur in the ` parallel ' direction .note that is distinct from the angle . 
denoting the refractive index from the observer s medium as , that from the other medium as , and using ( ) to indicate a ` ' ( ` ' ) sign for observations in medium ( the incoming medium ) and a ` ' ( ` ' ) sign for observations in medium ( the outgoing medium ) , the scalar component of eq .[ transition_eq ] can be written thus : where observe that the reflection and transmission coefficients are those appropriate for a point source in the case of , this is the same as that for a plane wave , while for transmission , the plane - wave result gets multiplied by ( this can be thought of as accounting for the change in divergence of rays upon transmission ) .note that the distance to the observer is constant for all contributions , since the two endpoints are infinitely close together , and that there is no explicit relative phase since the event times are also simultaneous ( implicit phases can and do arise however , as we will see below ) .for this situation , the radiated spectral energy density predicted by the endpoint formalism becomes : where the fore - factor of accounts for the energy radiated at negative frequencies .the frequency - dependence of the rhs of eq .[ sed_eq ] is contained in the implicit frequency - dependence of and .for the same physical situation , ginzburg and tsytovich define medium by its relative permittivity and permeability , and medium via . using the same notation for and as for eq .[ tr_ep_eq2 ] , and using for , the values in the observer s medium and for , the values for the other ( non - observer ) medium , these authors derive the radiated spectral energy distribution ( for angular frequency ) as : [ 1 \pm \frac{v}{c } \sqrt{\epsilon^{\dag } \mu^{\dag } - \epsilon \mu \sin^2 \xi}]\right|^2}. \nonumber\end{aligned}\ ] ] an alternative expression eq .2.45e of ref . is also given by ginzburg and tsytovich , which is suggestively close in form to our eq .[ tr_ep_eq2 ] potentials '' , thanking v. v. kocharovskii and vl .v. kocharovskii for this point .since one or more endpoints imply a sudden jump in the potentials , it is likely that the two derivations are very similar . ] . to compare the result given by the endpoint formulation ( eqs .[ expanded_ep_tr_eq]-[sed_eq ] ) with that of ginzburg and tsytovich s eq . [ gb_tr_eq ] , we plot both results in fig .[ sed_fig ] for the case ( i.e. , ) . analysing fig. [ sed_fig ] , we observe that the loci overlap completely the endpoint formulation produces exactly the same result as ginzburg and tsytovich s ( nine - page ) derivation .the detailed angular - dependence of the spectrum arises from the angular - dependence of the three individual endpoint contributions , the behaviour of the reflection and transmission coefficients , and the interference between terms , which can be constructive or destructive . to illustrate , we plot in fig .[ component_fig ] the spectral energy density which would result from considering only one of the three terms direct , transmitted , or reflected in eq .[ transition_eq ] , as would be calculated by substituting the field contribution from that term only directly into eq .[ sed_eq ] .regions of both constructive and destructive interference appear , and there are regimes in which each term dominates . 
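The superposition structure of the three contributions can be written compactly, though only schematically. In the sketch below, the apparent observation angles and the (possibly complex) point-source reflection and transmission coefficients must be supplied by the caller, since their detailed forms, and the handling of the regime where no real transmitted angle exists, are exactly the subtleties discussed in the surrounding text; the signs and overall normalisation used here are likewise assumptions of the sketch.

```python
import numpy as np

def transition_radiation_scalar(q, beta, theta_direct, theta_reflected, theta_transmitted,
                                n_obs, n_other, refl_coeff, trans_coeff):
    """Schematic sum of the three contributions seen by an observer on one side
    of the boundary: direct and reflected terms from the 'stopping' endpoint in
    the observer's medium, plus a transmitted term from the 'starting' endpoint
    in the other medium.  Angles and (possibly complex) coefficients are
    caller-supplied; overall constants are omitted."""
    def single(n, theta, sign):
        # scalar single-endpoint magnitude, ~ sin(theta) / (1 - n*beta*cos(theta))
        return sign * np.sin(theta) / (1.0 - n * beta * np.cos(theta))
    E  = single(n_obs,   theta_direct,      -1.0)               # direct, stopping endpoint
    E += refl_coeff  * single(n_obs,   theta_reflected,  -1.0)  # reflected off the boundary
    E += trans_coeff * single(n_other, theta_transmitted, +1.0) # transmitted from the other side
    return q * E
```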
in particular , note that it is the direct contribution which causes the large peak about the cherenkov angle in medium ( the cherenkov condition is not met in medium ) .this is another example where a peak at the cherenkov angle does not imply the existence of ` true ' vavilov - cherenkov radiation ( see discussion in sec .[ cherenkov ] ) .interestingly , while the endpoint from which each contribution is derived changes across the boundary ( e.g. the direct contribution comes from the stopping endpoint in medium , and from the starting endpoint in medium ) , the magnitude of each component is in fact continuous across the boundary .a complication which bears discussing is that in the regime of medium , the ` incident angle ' of the transmitted component can not be defined , since all incident angles from medium map to the range in medium .this is why , in eqs .[ tr_ep_eq2 ] and [ coef_eq ] , we have not used snell s law to define some nor : in the regime in medium , would be greater than one , and purely imaginary , which does not make intuitive sense .the resolution to this dilemma is that the reflection and transmission coefficients result from solving continuity equations for the fields across a boundary .solutions to these equations exist even when the solution is not that for incoming and outgoing plane waves in which snell s law is usually defined .it is important in the endpoint formulation therefore to allow both reflection and transmission coefficients , and the contributions from individual endpoints , to be complex numbers .this is especially true when the refractive indices themselves can not be treated as purely real , as is the case for many applications of transition radiation on metallic targets .finally , note that any and all frequency - dependence must come from the frequency - dependence of the refractive indices .the endpoint formalism described above provides a simple , accurate , and intuitive method for calculating the radiation resulting from particle acceleration . using it ,the radiated electromagnetic fields due to particle acceleration can be calculated in either the time- or frequency - domain for arbitrary particle motion .the domain in which to perform an endpoint - calculation should be the same as that of the desired result .while the process of fast - fourier transforming between time- and frequency - domains is usually relatively quick , such transforms require an adequate number of points in the first domain to produce an accurate result in the second .this is especially true when a signal is localised in one domain ( e.g. a short time - domain pulse , or a narrow - bandwidth signal ) , since then it will necessarily be spread over a great range in the other .usually , if both the time- and frequency - domains are of interest , it will be computationally quicker to perform two direct calculations than to generate excess data points in one domain and use a fast - fourier transform to convert to the other .such was the case for the example of synchrotron radiation presented in sec .[ synch_sec ] .the only exception to this rule is when dispersive effects ( changing refractive index with frequency ) become important , in which case frequency - domain calculations would be more practical . like any method using a distribution of sources, the accuracy of the endpoint method will reflect the accuracy with which the distribution of endpoints reflects the true particle motion on scales of the smallest wavelength / highest time resolution of interest . 
with reasonable awareness of these issueshowever , our endpoint methodology can be used to calculate the radiation in some very complex physical situations , such as those described in sec .[ applications ] . in emphasising the utility of the endpoint formulation, we should also mention its limitations , the most obvious of which is its classical foundation in maxwell s equations : it breaks down in any quantum - mechanical limit . specifically, it can not treat radiation processes involving only a single photon , nor the radiation of extremely energetic photons where the wavelength is of the order of the de broglie wavelength of the radiating particle(s ) .such limitations however are common to all classical methods of treating radiation and are not increased by our approach .the second limitation is that we have ignored the ` nearfield ' term from eq .[ lweqn1 ] .this does _ not _ mean that our formulation can not calculate radiation in the near - field of a source distribution .since each endpoint is point - like , any observer is always in the far - field of any particular endpoint .thus a near - field calculation requires only taking the trouble to re - calculate the direction to the observer from each endpoint individually . only in certain special circumstances , such as the case of vavilov - cherenkov radiation from non - accelerated systems as discussed in sec .[ cherenkov ] , will the near - field term provide a significant contribution to the observed electric fields .in general , this nearfield term will only become important when a large part of the charge distribution passes very close to the detectors , and for most experiments it will represent at most a minor correction only . our last note is to emphasise that this paper is by no means the first to use ( explicitly or implicitly ) an endpoint - like treatment to solve for various radiation processes .the best - known endpoint - based treatment is the larmor formula for the power radiated by an accelerated charge , which is commonly used in derivations of other radiation processes ( again , see jackson ) .also , as discussed in the introduction , there are numerous examples in the literature where multiple classical radiation processes have been described using the same fundamental underlying physics .what we have done here is to explicitly state that _ all _ radiation from particle acceleration can be described in terms of a superposition of instantaneous accelerations ( endpoints ) , and give a general methodology for applying this method to an arbitrary problem .strong pulses of coherent radiation are expected from extremely high energy ( shower energy ev ) particle cascades in dense media .the mechanism for producing the radiation is the askaryan effect , whereby a total negative charge excess arises from the entrainment of medium electrons through e.g. 
compton scattering , and the loss of cascade positrons via annihilation in flight .the radiation from the excess electrons travelling super - luminally through the medium will be coherent at wavelengths larger than the physical size of the cascade .the emission is the basis of the ` lunar technique ' , a detection method by which the radiation is observed from ground - based radio - telescopes .several current experiments utilise the technique , which has been proposed to detect both cosmic - ray and neutrino interactions .the emission from the askaryan effect is considered to be coherent vavilov - cherenkov radiation , since this is the mode of radiation upon which askaryan placed greatest emphasis in his papers , and the emission occurs in the case of charged particles moving super - luminally in a dielectric .if indeed this is the case , this would then lead to a formation - zone suppression of the radiation from near - surface cascades , such as those produced by cosmic rays , which interact immediately upon hitting the moon .the reasoning is as follows : consider such a near - surface cascade , induced by a cosmic - ray interaction near the regolith ( dielectric)-vacuum boundary .it has been both predicted and observed ( qualitatively by danos _et al . _ , and as a coherent pulse from single electron bunches by takahashi _et al . _ ) that charged particles moving in a vacuum near a dielectric boundary generate vavilov - cherenkov radiation in the dielectric .analogously , as the distance between the cascade and the surface tends to zero , the radiation emitted into the vacuum will approach that from a cascade in the vacuum itself ( the ` formation - zone ' effect , first considered in this context by gorham _et al . _if indeed the emission from the askaryan effect is vavilov - cherenkov radiation , which is generated by the passage of a particle through a dielectric , then since vacuum is not a dielectric , the emitted power into the vacuum from cascades nearing the boundary must tend towards zero .thus the askaryan emission from cosmic rays will be highly suppressed if the emission is from vavilov - cherenkov radiation . however , the emission from the askaryan effect is not in general vavilov - cherenkov radiation , because it arises from finite particle tracks viewed at a large distance .the emission is therefore of the same character as that produced in the ` tamm problem ' of calculating the ` vavilov - cherenkov radiation ' from a finite particle track in a dielectric medium when viewed at a large distance .however , it has been pointed out by zrelov and ruika that tamm s 1939 result for the radiation in such a problem originates from the acceleration / deceleration at the beginning and end of the track .thus neither tamm s approximate result , nor the majority of the radiation emitted , is truly vavilov - cherenkov radiation .given that the radiation from the askaryan effect arises from the coherent superposition of radiation from many finite particle tracks , the majority of detectable radiation produced by the askaryan effect ( in a dense g/ medium ) itself is not coherent vavilov - cherenkov radiation at all , but rather coherent radiation from particle acceleration .macroscopically , the coherent radiation from microscopic accelerations and decelerations is correctly viewed as coming from the time - variation of net charge .note that a shock wave at the cherenkov angle will still be observed ( this is also seen in transition radiation , as previously discussed in sec . 
[ tr_section ] ) . while askaryan also mentioned the possibility of coherent transition radiation and ` bremsstrahlung ' ( radiation from particle acceleration ) in his first ( 1961 ) paper , the majority of the paper refers to vavilov - cherenkov radiation and coherent radiation from a moving charge excess , rather than to an accelerated charge excess .thus the primary reason why the zero - emission argument is incorrect is that the radiation due to the askaryan effect does not resemble what is commonly considered to be ` true ' vavilov - cherenkov radiation as described in the frank - tamm picture , which presents a negligible contribution in the askaryan effect and indeed will tend towards zero as a particle cascade develops closer to a vacuum - interface . instead of the above arguments, the use of our endpoint formulation much more easily resolves the issue : charged particles are accelerated and decelerated , ergo , the system radiates , and the effect of the nearby surface from the point of view of a far - field observer is no worse or more profound than for any transmission problem .a second example is the case of radio emission from extensive air showers . when an energetic primary particle interacts in the upper atmosphere , it produces a cascade of secondary particles which can reach ground level .radiation at frequencies of a few tens to a few hundreds of mhz from the electron / positron component of these cascades has been both predicted and observed , with up until recently only fair agreement between predictions and measurements . the emitted radiation is often understood in terms of one or more classical radiation mechanisms , and it can be unclear as to what extent the mechanisms are different ways of explaining the same phenomena , or are truly separate effects . the transverse current model describes the effect of the magnetic field as causing a macroscopic flow of charge ( a transverse current ) due to the different drift directions of electrons and positrons .the time - variation of this current in the course of the air shower evolution produces radiation polarized in the direction denoted by the lorentz force .a modern implementation of the transverse current model , the mgmr model , complements the transverse current emission with additional radiation components , in particular the emission from a relativistically moving dipole and from a time - varying charge excess , the latter of which essentially corresponds to the askaryan effect .difficulties can , however , arise in the separation of these phenomenological `` mechanisms '' . 
for example, a component similar to vavilov - cherenkov radiation would appear even in case of charge - neutral particle showers in the presence of a magnetic field , because the magnetic field would induce a sufficient spatial separation of the positive and negative charges for the electron and positron contributions not to cancel at the frequencies of interest .in contrast to such macroscopic descriptions , microscopic monte carlo models calculate the radio emission as a superposition of radiation from individual charges being deflected in the geomagnetic field .this approach , although originally inspired by the notion of ` geosynchrotron ' radiation , does in fact not need the assumption of any specific emission mechanism .monte carlo codes for the calculation of radio emission from extensive air showers have been realized by different authors in various time - domain implementations .it recently turned out , however , that all of these implementations ( and others ) in fact treated the emission physics inconsistently , thereby neglecting radio emission produced by the variation of the number of charged particles during the course of the air shower evolution . with the endpoint formalism described here, a new and fully consistent implementation of a microscopic modelling approach has been realized in the reas3 code .the universality of the endpoint formalism ensures that the radio emission from the motion of the charged particles is predicted in all of its complexity . in case of the reas3 code ,this becomes evident since both the `` transverse current '' radiation polarized in the direction of the lorentz force and the radially polarized `` charge excess '' emission is reproduced automatically and in good agreement with macroscopic calculations .the importance of a consistent treatment taking into account also the radiation due to the variation of the number of charged particles within an air shower is obvious when comparing results obtained with reas3 and results obtained by the former implementation in reas2.59 .this is illustrated for a specific example in fig .[ reas3_vs_reas2_pulse ] for the radio emission received by an observer 200 m south of the core of a vertical extensive air shower with a primary particle energy of .the pulse shapes , the pulse amplitudes and the frequency spectra differ significantly . for a detailed comparison of reas3 and reas2we kindly refer the reader to .the next step in improving the simulations will be the inclusion of the refractive index of the atmosphere , which is slightly different from unity and varies with atmospheric density .we have presented an ` endpoint ' methodology for modelling the electromagnetic radiation produced by the acceleration of charged particles .the approach is universally applicable and is especially well - suited for numerical implementation .its universality has been illustrated by reproducing prototypical radiation processes such as synchrotron radiation , tamm s description of vavilov - cherenkov radiation , and transition radiation in the frequency- and time - domains .the method s true strength , however , lies in modelling more complex ( in other words `` realistic '' ) situations in which such individual prototypical radiation mechanisms can no longer be easily disentangled . as demonstrated in sec .[ applications ] , the ` endpoint ' methodology can for example be used to solve outstanding problems in the field of high - energy particle astrophysics . 
In conclusion, we would like to point out that we believe the 'endpoint' approach to be an important way of viewing radiation processes, useful both at an undergraduate student level and for career researchers. Except perhaps for those researchers working constantly with fundamental electromagnetic theory, we hope this methodology will increase the reader's understanding of radiative processes by providing a simple and unified approach.

Sokolov, A.Kh. Mussa, and Yu.G. Pavlenko, Sov. Phys. J. 20, 599 (1977).
Pogorzelski, C. Yeh, and K.F. Casey, J. Appl. Phys. 45, 5251 (1974).
A. Erteza and J.J. Newman, J. Appl. Phys. 33, 1864 (1962).
J. Schwinger, W. Tsai, and T. Erber, Annals of Physics 96, 303 (1976).
James, R.D. Ekers, J. Álvarez-Muñiz, J.D. Bray, R.A. McFadden, C.J. Phillips, R.J. Protheroe, and P. Roberts, Phys. Rev. D 81, 042003 (2010).
O. Scholten et al., Phys. Rev. Lett. 103, 191301 (2009).
T. Jaeger, R.L. Mutel, and K.G. Gayley, Astropart. Phys. 34, 293 (2010).
M. Danos, S. Geschwind, H. Lashinsky, and A. van Trier, Phys. Rev. 92, 828 (1953).
T. Takahashi, Y. Shibata, K. Ishi, M. Ikezawa, M. Oyamada, and Y. Kondo, Phys. Rev. E 62, 8606 (2000).
Gorham, K.M. Liewer, C.J. Naudet, D.P. Saltzberg, and D.R. Williams, arXiv:astro-ph/0102435 (2001).
T. Huege, M. Ludwig, O. Scholten, and K.D. de Vries, "The convergence of EAS radio emission models and a detailed comparison of REAS3 and MGMR simulations," presented at 'Acoustic and Radio EeV Neutrino Detection Activities' (ARENA), Nantes, France (2010), Nucl. Instrum. Meth. A, in press, doi:10.1016/j.nima.2010.11.041.
|
We present a new methodology for calculating the electromagnetic radiation from accelerated charged particles. Our formulation, the 'endpoint formulation', combines numerous results developed in the literature in relation to radiation arising from particle acceleration into a complete, and completely general, treatment. We do this by describing particle motion via a series of discrete, instantaneous acceleration events, or 'endpoints', with each such event being treated as a source of emission. This method implicitly allows for particle creation/destruction, and is suited to direct numerical implementation in either the time- or frequency-domains. In this paper, we demonstrate the complete generality of our method for calculating the radiated field from charged particle acceleration, and show how it reduces to the classical named radiation processes such as synchrotron, Tamm's description of Vavilov-Cherenkov, and transition radiation under appropriate limits. Using this formulation, we are immediately able to answer outstanding questions regarding the phenomenology of radio emission from ultra-high-energy particle interactions in both the Earth's atmosphere and the Moon. In particular, our formulation makes it apparent that the dominant emission component of the Askaryan effect (coherent radio-wave radiation from high-energy particle cascades in dense media) comes from coherent 'bremsstrahlung' from particle acceleration, rather than coherent Vavilov-Cherenkov radiation.
|
[ chp3intro ] a common technique for eliciting consumption in studies of substance abuse is the time-line follow-back (tlfb) method, in which one asks subjects to report daily consumption retrospectively over the preceding week, month or other designated period. in smoking cessation research, for example, tlfb is one important method for measuring cigarette consumption and defining periods of quit and lapse. although tlfb is a practical approach to quantifying average smoking behavior [ ], tlfb data can harbor substantial errors as measures of daily consumption [ ]. tlfb questionnaires request exact daily cigarette counts, which smokers are unlikely to remember, particularly after several days have passed. moreover, some smokers may understate consumption to avoid the social stigma attached to excessive smoking or an inability to quit [ ]. thus, smoking cessation studies typically require validation of tlfb reports of zero consumption by biochemical measurement of exhaled carbon monoxide or nicotine metabolites from saliva or blood. a second concern is that histograms of tlfb-derived daily cigarette counts commonly exhibit spikes at multiples of 20, 10 or even 5 cigarettes. this phenomenon, known as ``digit preference'' or ``heaping,'' is thought to reflect a tendency to report consumption in terms of packs (each pack in the us contains 20 cigarettes) or half or quarter packs. the heaps presumably arise because many smokers do not remember precisely how many cigarettes they smoked and therefore report their count rounded off to a nearby convenient number. it has also been hypothesized that some smokers consume exactly an integral number of packs per day as a self-rationing strategy [ ], but evidence so far suggests that such behavior, if it exists, causes only a small fraction of the observed heaping [ ]. indeed, it has been observed that the distribution of biochemical residues of smoking is smooth, suggesting that heaping is a phenomenon of reporting rather than consumption. recall bias and heaping bias in self-reported longitudinal cigarette counts potentially affect estimates of both means and treatment effects. moreover, heaping may lead to underestimation of within-subject variability, thanks to smokers who regularly report one pack rather than a precise count that varies around some mean in the vicinity of 20. if a large enough fraction of subjects in a study are of this kind, estimates of both within-subject and between-subject variability can be distorted. although there has been substantial research on statistical modeling of heaping and digit preference in a range of disciplines [ heitjan and rubin, and others ], the only such application in smoking cessation research is an earlier analysis that described a latent-variable rounding model for heaped univariate tlfb cigarette count data. its authors postulated that the reported cigarette count is a function of the unobserved true count and a latent heaping behavior variable. the latter can take one of four values, representing exact reporting, rounding to the nearest 5, rounding to the nearest 10, and rounding to the nearest 20. except for ``exact'' reporters (i.e.
, those who report counts not divisible by 5 ) , one obtains at best partial information on the true count and the heaping behavior .they analyzed univariate count data from a smoking cessation clinical trial , assuming a zero - inflated negative binomial distribution for the true underlying counts together with an ordered categorical logistic selection model for heaping behavior given true count .the analysis of has three important limitations : first , they included only data from the last day of eight weeks of treatment , ignoring the 55 preceding days .second , they assumed without empirical verification that reported counts not divisible by 5 were accurate .and third , they assumed that the preference for counts ending in 0 or 5 actually represented rounding rather than some other form of reporting error .that is , a declared count of 20 cigarettes was taken to mean that the true count was somewhere between 10 and 30 cigarettes , and was merely misreported as 20 . in the absence of more accurate data on the true , underlying count ,attempts to model heaping must rely on some such assumptions .precise assessment of smoking behavior has taken on increasing importance as researchers explore the value of reducing consumption as a way to lessen the harms of smoking [ , ] and to improve the chance of ultimately quitting [ , ] .the advent of the inexpensive hand - held electronic diary ( ed ) that allows the instantaneous recording of _ ad libitum _ smoking has created the possibility of making much more accurate measurements .such evaluation is an instance of _ ecological momentary assessment _ [ ema ; ] , in that it generates records of events logged as they occur in real - life settings . in , researchers asked 236 participants in a smoking cessation study to use a specially programmed ed to record each cigarette as it was smoked over a 16-day pre - quit period ; moreover , the ed periodically prompted the smokers to record any cigarettes they had missed . at days 3 , 8 and 15, subjects visited the clinic to complete a tlfb assessment of daily smoking since the preceding visit ( 2 , 5 or 7 days previously ) , stating how many cigarettes they had smoked each day .the study found that while the tlfb data contained the expected heaps at multiples of 10 and 20 , the ema data had practically none .average smoking rates from the two methods were moderately correlated ( ) , but the within - subject correlation of daily consumption between tlfb and ema was modest ( ) .self - report tlfb consumption was on average higher than ema ( by 2.5 cigarettes ) , but on 32% of days , subjects recorded more cigarettes by ema than they later recalled by tlfb .these data provide us with an opportunity unprecedented , so far as we know to study the relationship between self - reports of daily cigarette consumption by tlfb and ema . to describe this relationship, we develop a statistical model with two components : the first is a regression that predicts the patient s notional `` remembered '' cigarette count ( a latent factor ) from the ema count .the second is a regression that predicts the rounding behavior described as in with an ordinal logistic regression from the remembered count and fully observed predictors .the models include random subject effects that describe the propensities of the subjects to mis - remember their actual consumption ( in the first component ) and to report the remembered consumption with a characteristic degree of accuracy ( in the second ) . 
assuming that ema represents the true count, the first component of the model allows us to examine the recall bias resulting from mis-remembering, while the second component describes the heaped reporting errors. [ method1 ] let denote the observed heaped tlfb consumption for subject on day , , , and let denote the vector of tlfb data for subject . let be the ema consumption on subject , day , and let be the vector of ema data for subject . we furthermore let be a vector of baseline predictors for subject , with representing predictors of recall and predictors of heaping. these predictor sets may overlap. the first part of our model assumes that for each day and subject there is a notional remembered cigarette count, denoted [ . we assume it is distributed as poisson conditionally on a random effect , the ema smoking pattern and the covariate vector , with mean as specified in ( [ recall ] ). the parameters and represent the effects of ema consumption and baseline predictors, respectively, on the latent remembered count. the random effect , which we assume normally distributed with mean and variance , represents heterogeneity among subjects. we note that there are no zero values of the ema count in the shiffman data, which are from a pre-quit study in which subjects were encouraged to smoke as normal. thus, we can include the log ema count as a predictor. in more general contexts where zero ema counts are possible, one can adjust the model in simple ways to avoid this problem. moreover, when excessive counts occur in the tlfb data, one can fit a zero-inflated count model, as in , for the remembered count. following , we assume that a latent rounding indicator [ dictates the degree of rounding to be applied to the notional remembered count. specifically, we let the indicator take one of four possible values: the first implies reporting the exact count, the second implies rounding to the nearest multiple of 5, the third implies rounding to the nearest multiple of 10, and the fourth implies rounding to the nearest multiple of 20. we assume that the probability distribution of the heaping indicator depends on , a subject-level random effect that is independent of , and a baseline predictor vector . specifically, we propose the proportional odds model ( [ misreport ] ) for the conditional distribution of the heaping indicator. here , and is the inverse logit function. the parameters refer to the successive intercepts of the logistic regressions, refers to its slope with respect to the remembered count, and refers to its slopes with respect to the vector of heaping predictors. the random effect describes between-subject differences in heaping propensity not otherwise accounted for in the model. as in , the model links the observed to the latent and via the coarsening function: for example, at time , subject with and reports , whereas , , and . figure [ heapdiag3 ] illustrates this heaping mechanism as a function of the underlying count and the rounding behavior. a coarsened outcome may arise from possibly several pairs. we denote the set of such pairs as . for example, a reported consumption of may represent a precise unrounded value [ or rounding across a range of nearby values [ .
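to make the two-stage structure concrete, the sketch below simulates a single reported count: a poisson "remembered" count whose log-mean is linear in log ema plus a subject random effect, a proportional-odds draw of the rounding level, and the coarsening to the reported value. this is an illustrative python sketch rather than the authors' code; the parameter values are placeholders of the same order as those quoted later for the simulation study, and the baseline covariate terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def coarsen(w, g):
    """reported count from remembered count w and rounding level g
    (g = 0: exact, 1: nearest 5, 2: nearest 10, 3: nearest 20)."""
    base = (1, 5, 10, 20)[g]
    return int(base * np.round(w / base))

def heaping_probs(w, alphas, gamma_w):
    """proportional-odds probabilities p(g = 0, ..., 3 | remembered count w);
    cumulative logits p(g >= k) = expit(alphas[k-1] + gamma_w * w)."""
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    cum = np.array([1.0] + [expit(a + gamma_w * w) for a in alphas] + [0.0])
    return -np.diff(cum)

def simulate_report(x_ema, beta0, beta1, sigma_b, alphas, gamma_w, sigma_u):
    """one subject-day under the two-stage model (no baseline covariates)."""
    b = rng.normal(0.0, sigma_b)                      # recall random effect
    mu = np.exp(beta0 + beta1 * np.log(x_ema) + b)    # mean remembered count
    w = rng.poisson(mu)                               # latent remembered count
    u = rng.normal(0.0, sigma_u)                      # heaping random effect
    g = rng.choice(4, p=heaping_probs(w, [a + u for a in alphas], gamma_w))
    return coarsen(w, g), w, g

# illustrative parameter values (placeholders, not fitted estimates)
y, w, g = simulate_report(x_ema=22, beta0=2.4, beta1=0.26, sigma_b=0.3,
                          alphas=[-1.5, -5.3, -10.1], gamma_w=0.11, sigma_u=1.6)
print(f"ema count 22 -> remembered {w}, rounding level {g}, reported {y}")
```

by construction, histograms of many such simulated reports show spikes at multiples of 5, 10 and 20 while the underlying remembered counts remain smooth, which is the qualitative pattern the model is meant to capture.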
for subject , the probability of the observed at time is the sum of the probabilities of the pairs that would give rise to it .the density of reported consumption given the random effects can therefore be expressed as we estimate the model by a bayesian approach that employs importance sampling [ , ] to avoid iterative simulation of parameters .the steps are as follows : we first compute the posterior mode and information using a quasi - newton method with finite - difference derivatives [ ] .we then approximate the posterior with a multivariate density with mean equal to the posterior mode and dispersion equal to the inverse of the posterior information matrix at the mode .next , we draw a large number ( 4000 ) of samples from this proposal distribution , at each draw computing the importance ratio of the true posterior density to the proposal density .we then use sampling - importance resampling ( sir ) to improve the approximation of the posterior [ ] .we evaluate posterior moments by averaging functions of the simulated parameter draws with the importance ratios as weights .the choice of a with a small number of degrees of freedom as the importance density is intended to balance the convergence of the mc integrals and the efficiency of the simulation . letting , the likelihood contribution from subject is \\[-8pt ] & & \hspace*{100pt}{}\times f(b_i)f(u_i)\ , db_i \,du_i;\nonumber\end{aligned}\ ] ] we approximate the integral in ( [ marginallik ] ) by gaussian quadrature . we choose proper but vague priors for the parameters , which we assume are a priori independent ( except for , as noted below ) .the parameter in the poisson mixed model ( [ recall ] ) , representing the slope of the latent recall on the ema recorded consumption , is given a normal prior , whereas the priors of the other regression parameters in both model parts are set to subject to the constraint .we assign the random - effect variances inverse - gamma priors with mean and sd both equal to 1 , a reasonably vague specification [ ] .we obtain the posterior mode and information using sas proc nlmixed , and implement bayesian importance sampling in ` r ` .[ modelchecking ] with heaped data , the unavailability of simple graphical diagnostics such as residual plots complicates model evaluation .we therefore resort to examination of repeated draws of latent quantities from their posterior distributions , in the spirit of bayesian posterior predictive checks [ , , ] .specifically , we evaluate the adequacy of model assumptions using imputed values of the latent recall , which we compare to its implied marginal distribution under the model .imputations of latent and are ultimately based on the posterior density of the model parameter given the observed data . , sampling univariate values , used an acceptance - rejection procedure to draw quantities analogous to our and from a confined bivariate normal distribution . 
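the estimation strategy described above (posterior mode and information, a heavy-tailed proposal centred at the mode, importance weights, and sampling-importance resampling) can be sketched generically. the paper obtained the mode with sas proc nlmixed and did the resampling in r; the python sketch below is only an outline, assuming a user-supplied log-posterior function together with its mode and inverse information, and uses a multivariate t proposal with a small number of degrees of freedom.

```python
import numpy as np

def sir_draws(log_post, mode, cov, n_prop=4000, n_keep=1000, df=4, seed=0):
    """sampling-importance resampling with a multivariate t proposal.

    log_post : callable returning the log posterior density at a parameter vector
    mode, cov: posterior mode and inverse observed information (centre and scale)
    returns resampled draws and the normalised importance weights."""
    rng = np.random.default_rng(seed)
    d = len(mode)
    chol = np.linalg.cholesky(cov)

    # multivariate t_df draws: scaled normals divided by sqrt(chi2_df / df)
    z = rng.standard_normal((n_prop, d))
    scale = np.sqrt(df / rng.chisquare(df, size=n_prop))
    draws = mode + (z @ chol.T) * scale[:, None]

    # proposal log density up to an additive constant (constants cancel in weights)
    quad = (z ** 2).sum(axis=1) * scale ** 2
    log_q = -0.5 * (df + d) * np.log1p(quad / df)

    log_w = np.array([log_post(th) for th in draws]) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    keep = rng.choice(n_prop, size=n_keep, replace=True, p=w)
    return draws[keep], w

# usage (names hypothetical):
#   draws, w = sir_draws(my_log_posterior, mode_hat, inv_info)
#   post_mean = draws.mean(axis=0)
```

the heavy-tailed t proposal is the standard safeguard here: it keeps the importance weights from degenerating when the posterior has thicker tails than its normal approximation at the mode.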
in our model , the correlation within and vectors poses a challenge to simulation .note , however , that given the subject - specific effects and , the components of and are independent .thus , we can readily simulate from the joint posterior of .for each simulated and the observed data , the posterior distribution of is because the values of and together determine , we have that where is an indicator function .accordingly , thus , given random effects and , the imputation of is obtained by independent draws of , , which can be implemented as an acceptance - rejection procedure .we therefore impute the data as follows : make independent draws , from by sir .given , for , independently draw and .for , given and , for , draw as poisson with mean ( [ recall ] ) . then given , and ,draw misreporting type from ( [ misreport ] ) . if , discard and repeat this step until . to assess model fit , we plot histograms of the imputed latent count .implausible patterns in these histograms , such as peaks or troughs at multiples of 5 , suggest incorrect modeling of the heaping .we can also base discrepancy diagnostics specifically on the fractions of reported consumptions that are divisible by 5 .to examine the performance of our approach , we conducted simulations replicating the structure of the shiffman data with nonvisit - day observations per subject .each data set consisted of subjects , and for simplicity we do not consider baseline covariates. for each subject we first set as an observed ema count vector from the data and generated a random effect .we then generated values as independent poisson deviates with conditional mean ( [ recall ] ) . with , , when and ema count , the mean latent recall is 23.2 , and when it is 25.8 . with the random effect distributed as designated above , the marginal mean recalls for and are 24.3 and 27.0 , respectively .next we generated the latent heaping behavior indicator from ( [ misreport ] ) .we set the parameters to their estimates from the shiffman data : the intercepts , , were , and , respectively , and the slope was .we simulated the random effect . under thissetting , when and , the probability of exact reporting is 28.3% , and the probabilities of rounding to the nearest multiples of 5 , 10 and 20 are 66.3% , 5.4% and 0.04% , respectively . when the latent count , these probabilities are 7.8% , 71.2% , 20.8% and 0.2% , respectively .the simulated latent and determined as illustrated in figure [ heapdiag3 ] .these parameter values allow for considerable discrepancy between remembered and recorded consumption . to examine our methods when the latent recall and ema match more closely , we conducted a second simulation under parameter values that gave better agreement . in this scenario, we assumed and with .thus , when , the expected precise recall , and the marginal mean recalls are 20.5 and 30.8 for ema counts of 20 and 30 , respectively .we set the parameters in the heaping behavior models at , , and for , , and , respectively , and . 
in this case, the probabilities of reporting exactly and to the nearest multiples of 5, 10 and 20 for a true count of 22 are 29.6%, 62.3%, 7.1% and 1%, respectively.

table [ simulation ] (summaries over 100 simulated data sets; row and column labels are reconstructed from the surrounding text, the original symbols having been lost in extraction):

case 1 parameter | true value | mean estimate | sd | bias | rmse | 95% ci coverage (%)
latent recall, intercept | 2.36 | 2.36 | 0.07 | 0.002 | 0.07 | 95
latent recall, slope on log ema | 0.26 | 0.26 | 0.02 | 0.001 | 0.02 | 93
latent recall, random effect | 0.30 | 0.30 | 0.02 | 0.001 | 0.02 | 95
heaping, intercept 1 | -1.49 | -1.53 | 0.56 | -0.04 | 0.56 | 94
heaping, intercept 2 | -5.28 | -5.31 | 0.66 | -0.03 | 0.66 | 98
heaping, intercept 3 | -10.14 | -9.99 | 2.55 | 0.15 | 2.54 | 80
heaping, slope on remembered count | 0.11 | 0.11 | 0.02 | 0.002 | 0.02 | 96
heaping, random effect | 2.67 | 2.61 | 0.29 | -0.06 | 0.29 | 98

case 2 parameter | true value | mean estimate | sd | bias | rmse | 95% ci coverage (%)
latent recall, intercept | 0.0 | -0.01 | 0.09 | -0.01 | 0.09 | 94
latent recall, slope on log ema | 1.0 | 1.00 | 0.03 | 0.005 | 0.03 | 94
latent recall, random effect | 0.22 | 0.22 | 0.02 | -0.001 | 0.02 | 97
heaping, intercept 1 | -1.07 | -1.08 | 0.43 | -0.007 | 0.43 | 98
heaping, intercept 2 | -4.37 | -4.36 | 0.60 | 0.007 | 0.59 | 94
heaping, intercept 3 | -6.52 | -6.43 | 0.66 | 0.09 | 0.67 | 94
heaping, slope on remembered count | 0.088 | 0.090 | 0.02 | 0.002 | 0.02 | 95
heaping, random effect | 2.44 | 2.41 | 0.27 | -0.02 | 0.27 | 95

table [ simulation ] presents summaries of 100 simulations of the parameter estimates. under both scenarios, the mles of the fixed-effect coefficients fell near the true values on average, with no more than 0.5% bias for the parameters in the recall model and no more than 2.7% bias for those in the heaping model. the random-effects variance estimates are also accurate, with bias less than 1%. the coverage probabilities of nominal 95% confidence intervals range from 93% to 98%, except for the third heaping intercept in case 1, where coverage is only 80%. the poor coverage rate for this parameter is a consequence of instability in the inverse hessian matrix; it can be improved by creating parametric bootstrap confidence intervals (table [ simulation2 ]).

table [ simulation2 ] (case 1 repeated with parametric bootstrap confidence intervals; same layout as table [ simulation ]):

parameter | true value | mean estimate | sd | bias | rmse | 95% ci coverage (%)
latent recall, intercept | 2.36 | 2.36 | 0.08 | -0.003 | 0.08 | 90
latent recall, slope on log ema | 0.26 | 0.26 | 0.02 | 0.001 | 0.02 | 90
latent recall, random effect | 0.30 | 0.30 | 0.02 | -0.001 | 0.02 | 95
heaping, intercept 1 | -1.49 | -1.61 | 0.55 | -0.12 | 0.56 | 94
heaping, intercept 2 | -5.28 | -5.42 | 0.69 | -0.14 | 0.70 | 96
heaping, intercept 3 | -10.14 | -10.61 | 3.56 | -0.47 | 3.58 | 87
heaping, slope on remembered count | 0.11 | 0.11 | 0.02 | 0.005 | 0.02 | 95
heaping, random effect | 2.67 | 2.64 | 0.32 | -0.03 | 0.32 | 92

the simulation shows good performance of the mles, and, as the sample size is large, we expect the bayesian estimates to behave similarly. moreover, the maximization part of the mle calculation can help identify multimodality of the likelihood, should it occur, and singularity of the hessian that we use in the bayesian sampling. we applied the method of section [ sec2 ] to the shiffman data, with the aim of evaluating our posited two-stage process as an explanation for the discrepancy between actual and reported consumption.
to focus on the link between the self - report and true count , our first analysis included only log ema count in ( [ recall ] ) and a visit day indicator in ( [ misreport ] ) .the latter is important because it seems reasonable that distance in time from the event would be a strong predictor of heaping coarseness .our second analysis expanded the recall model to include a range of baseline characteristics : demographics ( age , sex , race and education ) ; addiction ; measures of nicotine dependence [ the fagerstrm test for nicotine dependence ( ftnd ) and the nicotine dependence syndrome scale ( ndss ) ] ; and ema compliance measured as the daily percentage of missed prompts .age , education , ftnd and ema compliance are considered as quantitative variables , sex and race are binary indicators , and addiction is a categorical variable taking three levels ( possible , probable and definite ) .they are the first variables that a smoking researcher would think to investigate , and could potentially affect remembered count or heaping probability .the two measures of nicotine dependence ftnd and ndss showed only a modest correlation , with spearman in our data .so we considered both in the model .the data set and programming code are included in the supplementary materials [ ] .we evaluated model fit by creating multiple draws from the posterior predictive distribution of latent quantities as discussed in section [ sec3 ] .lack of smoothness in the histogram of the imputed latent count would suggest an inadequate heaping model .we evaluated goodness of fit for the model that includes log ema count in ( [ recall ] ) and a visit day indicator in ( [ misreport ] ) .the top row in figure [ ppc ] displays the histograms of tlfb cigarette consumption at days 3 ( a visit day ) , 9 and 14 .the spikes at 10 , 15 , 20 , 25 , 30 , etc .are characteristic of self - reported cigarette counts [ ] .as many as 70% of subjects reported cigarette smoking in multiples of 5 for nonvisit - day consumption , whereas for the visit day ( day 3 ) that number is only 48% .only of the counts on the visit day ended in .the next three rows represent independent draws of the latent count .the spikes at multiples of 20 , 10 or 5 have disappeared .compared to the self - reported count , the percentage of subjects whose exact counts are divisible by 5 ( or 10 or 20 ) is smaller and consistent across time .averaged over three imputations , the fraction of counts ending in multiples of is 27% , 25% and 23% on days 3 , 9 and 14 , respectively , and 15% , 14% and 12% end in multiples of .these checks indicate that our model offers a plausible explanation for the heaping . 
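the posterior predictive checks just described rest on the acceptance-rejection imputation of the latent pair (remembered count, rounding level) given a reported count, as set out in section [ modelchecking ]. a minimal python sketch of that step is given below; it is conditional on a subject's random effects (folded here into the poisson mean and the heaping intercepts), and the parameter values are illustrative rather than the fitted ones.

```python
import numpy as np

rng = np.random.default_rng(2)

def heaping_probs(w, alphas, gamma_w):
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    cum = np.array([1.0] + [expit(a + gamma_w * w) for a in alphas] + [0.0])
    return -np.diff(cum)

def coarsen(w, g):
    base = (1, 5, 10, 20)[g]
    return int(base * np.round(w / base))

def impute_latent(y, mu, alphas, gamma_w, max_tries=100000):
    """draw (remembered count, rounding level) compatible with the report y:
    propose from the model, accept only pairs that coarsen back to y."""
    for _ in range(max_tries):
        w = rng.poisson(mu)
        g = rng.choice(4, p=heaping_probs(w, alphas, gamma_w))
        if coarsen(w, g) == y:
            return w, g
    raise RuntimeError("no compatible (w, g) found; check the parameter values")

# what remembered counts are plausible behind a reported pack a day?
draws = np.array([impute_latent(20, mu=23.0, alphas=[-1.5, -5.3, -10.1],
                                gamma_w=0.11)[0] for _ in range(2000)])
print("5th/50th/95th percentiles of imputed remembered counts:",
      np.percentile(draws, [5, 50, 95]))
```

pooling such draws across subjects and days, with parameters and random effects drawn from the posterior, gives the smooth imputed histograms used for the checks above.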
in order to assess the impact of the assumed correlation structure, we fit the model as proposed in ( [ recall ] ) and ( [ misreport ] ) and also a model that excludes random effects. posterior modes and 95% credible intervals (cis) appear in tables [ posterior1 ] and [ posterior2 ].

[ table [ posterior1 ] — model with log ema count as the only predictor; only the posterior modes survive extraction, the 95% cis having been lost: latent recall — intercept 2.32, log(ema) 0.27; remaining entries (labels lost) 0.09, -1.06, -2.94, -4.17, 0.07, -1.29. ]

the estimates in both the remembered count model that characterizes the latent recall process and the heaping behavior model are sensitive to the assumption of random effects. the bayesian information criterion (bic) of the model with two random effects is 14,705 when including ema as the only predictor and 14,059 when including ema and the baseline patient characteristic predictors. the bics for the corresponding models excluding random effects are 18,340 and 16,641, respectively. thus, the evidence is overwhelming that the mixed model is preferable. furthermore, we included the patient characteristic predictors as covariates in both the remembered count model and the heaping process model, but this model ( ) is less favorable compared to the model with the covariates in just the latent remembered count model. none of these predictors is significant in the heaping process model (results not shown).

[ table [ posterior2 ] — model with baseline covariates in the recall component; posterior modes (95% cis lost in extraction): intercept 2.34; log(ema) 0.25; addicted, possible vs. definite 0.07; probable vs. definite -0.01; ftnd 0.06; ndss 0.08; ema compliance 0.13; age 0.002; race (black vs. white) -0.14; sex (male vs. female) 0.16; education -0.001; remaining entries (labels lost) 0.06, -1.14, -3.15, -4.54, 0.07, -1.26. ]

the 95% ci of the log(ema) coefficient lies above zero, indicating that remembered consumption is positively associated with recorded ema consumption. in addition, the baseline patient characteristics ftnd, ndss, race and gender have significant effects on the recall process. for fixed ema count, the following characteristics are associated with greater remembered smoking: higher nicotine dependence (measured by both ftnd and ndss), white ethnicity (compared to black) and male sex. figure [ expw ] displays the estimated curve of the mean remembered count against the ema count. a natural hypothesis is that the estimated latent mean agrees with ema, which would be reflected in the poisson model by an estimated intercept of 0 and slope of 1; one might call this a model of unbiased memory. to the contrary, figure [ expw ] shows that the fitted mean curve diverges substantially from the identity line, with the lighter smokers on average overestimating their consumption and the heavier smokers underestimating consumption. the mean remembered consumption agrees with the true count roughly in the range 22-26 cigarettes, or slightly more than a pack per day. [ figure [ expw ] is drawn for reference covariate values: high school education, addicted, race, sex, and mean values of the quantitative predictors ftnd, age and ema noncompliance. ] figure [ heapprob ] shows the estimated heaping probability as a function of remembered cigarette consumption for visit and nonvisit days. the possibility of rounded-off reporting increases rapidly as the remembered count increases, although surprisingly the probability of rounding to the nearest 20 is not large for either type of day.
when the perception of smoking is more than two packs , say , 41 cigarettes , the chance of heaped reporting rises to more than 84% , of which 37% is attributed to half - pack rounding .the results confirm that the degree of heaping is much smaller on visit days .for example , only 51% of subjects round off the visit - day count when reporting 41 cigarettes , and among those 39% round off to the nearest multiple of 5 .we have developed a model to describe the process whereby exact longitudinal measurements become distorted by retrospective recall .our approach uses latent processes to explain the data as a result of mis - remembering and rounding : a model of the latent exact value describes subject - level recall and allows for association over time and with baseline predictors , while a misreporting model describes the dependence of heaping coarseness on the latent value and other predictors .random effects represent individual propensities in recall and heaping ; in our data , inferences depend strongly on the inclusion of these random effects .the data suggest that both mis - remembering and heaping contribute substantially to the distortion of cigarette counts .the curve of mean remembered count as a function of ema count departs markedly from the line , with lighter smokers overstating consumption and heavier smokers understating consumption .the remembered smoking coincides with the accurate ema count at around 24 cigarettes , suggesting that the popularity of reporting one pack per day is partially a result of the general heaping behavior rather than a particular affinity for remembering a pack a day .the curves of heaping probabilities suggest that exact reporting is uncommon and practically disappears beyond about 40 cigarettes / day . nevertheless , it is interesting just how much of the misreporting is due to mis - remembering .the remembered cigarette consumption depends not only on true consumption , but also on the subject s sex , race and degree of nicotine dependence .the interpretation of our model components as representing memory and rounding depends on the assumption that ema data are exact .of course , even ema data are subject to errors , as smokers may neglect to record cigarettes both at the time of smoking and later .yet good correspondence with smoking biomarkers strongly supports the use of ema over tlfb as a proxy for the truth [ ] .we have implemented our model with a combination of standard numerical methods including gaussian quadrature , quasi - newton optimization and sampling - importance resampling .our experience suggests that with the model as specified , and incorporating a modest numbers of predictors , the method is robust and efficient .increasing the number of random effects would increase the time demands ( from the numerical integration ) and raise the possibility of numerical instability ( from possible errors in integration ) . for more extensive models , sophisticated approaches based on mcmc samplingwould be necessary .our model allows for the inclusion of covariates to better explain the discrepancy between smokers self - perceived behaviors and reality .it also provides a basis for predicting true counts ( effectively the ema data ) from reported tlfb counts .this would be a valuable activity in the large number of studies that do not collect ema data . 
to predict true counts from the recalled counts , we first need to estimate the parameters in the model using a subset of the primary study or an external independent study that collects both tlfb count and accurate ema count .then we can impute the true count together with the latent remembered count and heaped reporting behavior . specifically , the posterior distribution of is where is the density function of the true count .imputation follows similar steps as described in section [ modelchecking ] with set equal to the maximum likelihood estimates .the methods developed here also can have application in a wide variety of settings in social and medical science involving self - reported data for example , assessing sexual risk behavior , trial drug consumption , eating episodes and financial expenditures .we are grateful to two associate editors and a referee , whose perceptive comments and suggestions greatly improved the paper .
|
studies of smoking behavior commonly use the _ time - line follow - back _ ( _ tlfb _ ) method , or periodic retrospective recall , to gather data on daily cigarette consumption . tlfb is considered adequate for identifying periods of abstinence and lapse but not for measurement of daily cigarette consumption , thanks to substantial recall and digit preference biases . with the development of the hand - held electronic diary ( ed ) , it has become possible to collect cigarette consumption data using _ ecological momentary assessment _ ( _ ema _ ) , or the instantaneous recording of each cigarette as it is smoked . ema data , because they do not rely on retrospective recall , are thought to more accurately measure cigarette consumption . in this article we present an analysis of consumption data collected simultaneously by both methods from 236 active smokers in the pre - quit phase of a smoking cessation study . we define a statistical model that describes the genesis of the tlfb records as a two - stage process of mis - remembering and rounding , including fixed and random effects at each stage . we use bayesian methods to estimate the model , and we evaluate its adequacy by studying histograms of imputed values of the latent remembered cigarette count . our analysis suggests that both mis - remembering and heaping contribute substantially to the distortion of self - reported cigarette counts . higher nicotine dependence , white ethnicity and male sex are associated with greater remembered smoking given the ema count . the model is potentially useful in other applications where it is desirable to understand the process by which subjects remember and report true observations . , , + .
|
the ability to travel , trade commodities , and share information around the world with unprecedented efficiency is a defining feature of the modern globalized economy . among the different means of transport, ocean shipping stands out as the most energy efficient mode of long - distance transport for large quantities of goods ( rodrigue _ et al ._ 2006 ) . according to estimates , as much as 90% of world trade is hauled by ships ( international maritime organization 2006 ) . in 2006 , 7.4 billion tons of goods were loaded at the world s ports .the trade volume currently exceeds 30 trillion ton - miles and is growing at a rate faster than the global economy ( united nations conference on trade and development 2007 ) .the worldwide maritime network also plays a crucial role in today s spread of invasive species .two major pathways for marine bioinvasion are discharged water from ships ballast tanks ( ruiz _ et al . _ 2000 ) and hull fouling ( drake & lodge 2007 ) . even terrestrial species such as insectsare sometimes inadvertently transported in shipping containers ( lounibos 2002 ) . in several parts of the world, invasive species have caused dramatic levels of species extinction and landscape alteration , thus damaging ecosystems and creating hazards for human livelihoods , health , and local economies ( mack _ et al .the financial loss due to bioinvasion is estimated to be $ 120 billion per year in the united states alone ( pimentel _ et al ._ 2005 ) . despite affecting everybody s daily lives, the shipping industry is far less in the public eye than other sectors of the global transport infrastructure .accordingly , it has also received little attention in the recent literature on complex networks ( wei _ et al ._ 2007 , hu & zhu 2009 ) .this neglect is surprising considering the current interest in networks ( albert & barabasi 2002 , newman 2003a , gross & blasius 2008 ) , especially airport ( barrat _ et al ._ 2004 , guimer & amaral 2004 , hufnagel _ et al ._ 2004 , guimer _ et al ._ 2005 ) , road ( buhl _ et al ._ 2006 , barthelemy & flammini 2008 ) and train networks ( latora & marchiori 2002 , sen _ et al . _ 2003 ) . in the spirit of current network research , we take here a large - scale perspective on the global cargo ship network ( gcsn ) as a complex system defined as the network of ports that are connected by links if ship traffic passes between them . similar research in the past had to make strong assumptions about flows on hypothetical networks with connections between all pairs of ports in order to approximate ship movements ( drake & lodge 2004 , tatem _ et al .by contrast , our analysis is based on comprehensive data of real ship journeys allowing us to construct the actual network .we show that it has a small - world topology where the combined cargo capacity of ships calling at a given port ( measured in gross tonnage ) follows a heavy - tailed distribution .this capacity scales superlinearly with the number of directly connected ports .we identify the most central ports in the network and find several groups of highly interconnected ports showing the importance of regional geopolitical and trading blocks . a high - level description of the complete network , however , does not yet fully capture the network s complexity . 
unlike previously studied transportation networks , the gcsn has a multi - layered structure .there are , broadly speaking , three classes of cargo ships container ships , bulk dry carriers , and oil tankers that span distinct subnetworks .ships in different categories tend to call at different ports and travel in distinct patterns .we analyze the trajectories of individual ships in the gcsn and develop techniques to extract quantitative information about characteristic movement types . with these methodswe can quantify that container ships sail along more predictable , frequently repeating routes than oil tankers or bulk dry carriers .we compare the empirical data with theoretical traffic flows calculated by the gravity model .simulation results , based on the full gcsn data or the gravity model differ significantly in a population - dynamic model for the spread of invasive species between the world s ports .predictions based on the real network are thus more informative for international policy decisions concerning the stability of worldwide trade and for reducing the risks of bioinvasion .an analysis of global ship movements requires detailed knowledge of ships arrival and departure times at their ports of call .such data have become available in recent years .starting in 2001 , ships and ports have begun installing automatic identification system ( ais ) equipment .ais transmitters on board of the ships automatically report the arrival and departure times to the port authorities .this technology is primarily used to avoid collisions and increase port security , but arrival and departure records are also made available by lloyd s register fairplay for commercial purposes as part of its sea - web data base ( www.sea-web.com ) .ais devices have not been installed in all ships and ports yet , and therefore there are some gaps in the data .still , all major ports and the largest ships are included , thus the data base represents the majority of cargo transported on ships .our study is based on sea - web s arrival and departure records in the calendar year 2007 as well as sea - web s comprehensive data on the ships physical characteristics .we restrict our study to cargo ships bigger than gt ( gross tonnage ) which make up 93% of the world s total capacity for ship cargo transport . from these we select all shipsfor which ais data are available , taken as representative of the global traffic and long - distance trade between the ports equipped with ais receivers ( for details see electronic supplementary material ) .for each ship we obtain a trajectory from the data base , i.e. a list of ports visited by the ship sorted by date . in 2007 , there were nonstop journeys linking distinct pairs of arrival and departure ports .the complete set of trajectories , each path representing the shortest route at sea and colored by the number of journeys passing through it , is shown in fig .[ full_netw]a .each trajectory can be interpreted as a small directed network where the nodes are ports linked together if the ship traveled directly between the ports .larger networks can be defined by merging trajectories of different ships . in this articlewe aggregate trajectories in four different ways : the combined network of all available trajectories , and the subnetworks of container ships ( ships ) , bulk dry carriers ( ) and oil tankers ( ) .these three subnetworks combined cover 74% of the gcsn s total gross tonnage . 
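as a concrete illustration of this aggregation step, the python/networkx sketch below builds a weighted, directed port network from a set of ship trajectories, weighting each link by the summed gross tonnage of the ships that sailed it (the weighting convention spelled out in the next paragraph) and keeping a parallel count of journeys. the port names, routes and tonnages in the example are invented.

```python
import networkx as nx

def build_cargo_network(trajectories, gross_tonnage):
    """aggregate ship trajectories into a weighted, directed port network.

    trajectories  : dict ship id -> list of ports in the order they were called at
    gross_tonnage : dict ship id -> gross tonnage of that ship
    the weight of a link i -> j is the summed gt over all journeys made on it."""
    g = nx.DiGraph()
    for ship, ports in trajectories.items():
        gt = gross_tonnage[ship]
        for a, b in zip(ports[:-1], ports[1:]):
            if a == b:
                continue
            if g.has_edge(a, b):
                g[a][b]["weight"] += gt
                g[a][b]["journeys"] += 1
            else:
                g.add_edge(a, b, weight=gt, journeys=1)
    return g

# toy example with three ships (ports, routes and tonnages are invented)
trajs = {"ship_a": ["rotterdam", "antwerp", "hamburg", "rotterdam"],
         "ship_b": ["singapore", "hong kong", "shanghai"],
         "ship_c": ["rotterdam", "antwerp", "rotterdam", "antwerp"]}
gt = {"ship_a": 50000, "ship_b": 90000, "ship_c": 30000}
gcsn = build_cargo_network(trajs, gt)
print(gcsn["rotterdam"]["antwerp"])   # {'weight': 110000, 'journeys': 3}
```

the same routine, applied to one ship or one ship class at a time, yields the per-trajectory networks and the three subnetworks discussed in the text.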
in all four networks ,we assign a weight to the link from port to equal to the sum of the available space on all ships that have traveled on the link during 2007 measured in gt .if a ship made the journey from to more than once , its capacity contributes multiple times to .the directed network of the entire cargo fleet is noticeably asymmetric , with 59% of all linked pairs of ports being connected only in one direction .still , the vast majority of ports ( 935 out of 951 ) belongs to one single strongly connected component , i.e. for any two ports in this component there are routes in both directions , though possibly visiting different intermediate ports .the routes are intriguingly short : only few steps in the network are needed to get from one port to another .the shortest path length between two ports is the minimum number of nonstop connections one must take to travel between origin and destination . in the gcsn , the average over all pairs of ports is extremely small , .even the maximum shortest path between any two ports ( e.g. from skagway , alaska , to the small italian island of lampedusa ) , is only of length .in fact , the majority of all possible origin - destination pairs ( 52% ) can already be connected by two steps or less . comparing these findings to those reported for the worldwide airport network ( wan )shows interesting differences and similarities .the high asymmetry of the gcsn has not been found in the wan , indicating that ship traffic is structurally very different from aviation . rather than being formed by the accumulation of back and forth trips , ship traffic seems to be governed by an optimal arrangement of unidirectional , often circular routes .this optimality also shows in the gcsn s small shortest path lengths . in comparison , in the wan , the average and maximum shortest path lengths are and respectively ( guimer _ et al ._ 2005 ) , i.e. about twice as long as in the gcsn .similar to the wan , the gcsn is highly clustered : if a port is linked to ports and , there is a high probability that there is also a connection from to .we calculated a clustering coefficient ( watts & strogatz 1998 ) for directed networks and find whereas random networks with the same number of nodes and links only yield on average .degree dependent clustering coefficients reveal that clustering decreases with node degree ( see electronic supplementary material ) .therefore , the gcsn like the wan can be regarded as a small - world network possessing short path lengths despite substantial clustering ( watts & strogatz 1998 ) . however , the average degree of the gcsn , i.e. the average number of links arriving at and departing from a given port ( in- plus out - degree ) , , is notably higher than in the wan where ( barrat _ et al ._ 2004 ) . in the light of the network size ( the wan consists of 3880 nodes ) , this difference becomes even more pronounced , indicating that the gcsn is much more densely connected .this redundancy of links gives the network high structural robustness to the loss of routes for keeping up trade .the degree distribution shows that most ports have few connections , but there are some ports linked to hundreds of other ports ( fig .[ degree_distribution]a ) .similar right - skewed degree distributions have been observed in many real - world networks ( barabasi & albert 1999 ) . 
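the small-world quantities discussed above can be computed directly with networkx; the sketch below restricts path-based measures to the largest strongly connected component, as is done for the real network. it is run here on a synthetic random graph as a stand-in for the gcsn, and networkx's directed clustering (fagiolo's definition) need not coincide exactly with the coefficient used in the paper.

```python
import networkx as nx

def summarize_network(g):
    """basic small-world statistics for a directed port network."""
    scc = max(nx.strongly_connected_components(g), key=len)
    core = g.subgraph(scc).copy()
    one_way = sum(1 for u, v in g.edges() if not g.has_edge(v, u))
    pairs = one_way + (g.number_of_edges() - one_way) // 2
    return {
        "ports": g.number_of_nodes(),
        "links": g.number_of_edges(),
        "ports in largest strongly connected component": len(scc),
        "mean degree (in + out)": 2 * g.number_of_edges() / g.number_of_nodes(),
        "mean shortest path": nx.average_shortest_path_length(core),
        "maximum shortest path": nx.diameter(core),
        "clustering coefficient": nx.average_clustering(g),
        "share of linked pairs connected in one direction only": one_way / pairs,
    }

# illustration on a synthetic directed graph (a stand-in for the real data)
demo = nx.gnp_random_graph(200, 0.08, directed=True, seed=1)
for key, val in summarize_network(demo).items():
    print(f"{key}: {val:.3g}" if isinstance(val, float) else f"{key}: {val}")
```

comparing the same statistics against randomized graphs with the same number of nodes and links is what underlies the statement that the observed clustering is far above the random expectation.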
while the gcsn s degree distribution is not exactly scale - free , the distribution of link weights , , follows approximately a power law with ( 95% ci for linear regression , fig .[ degree_distribution]b , see also electronic supplementary material ) . by averaging the sums of the link weights arriving at and departing from port , we obtain the node strength ( barrat _ et al .the strength distribution can also be approximated by a power law with , meaning that a small number of ports handle huge amounts of cargo ( fig .[ degree_distribution]c ) . the determination of power law relationships by line fitting has been strongly criticised ( e.g. newman 2005 , clauset _ et al . _ 2009 ) , therefore we analysed the distributions with model selection by akaike weights ( burnham & anderson 1998 ) .our results confirm that a power law is a better fit than an exponential or a log - normal distribution for and , but not ( see electronic supplementary material ) .these findings agree well with the concept of hubs - spokes networks ( notteboom 2004 ) that were proposed for cargo traffic , for example in asia ( robinson 1998 ) .there are a few large , highly connected ports through which all smaller ports transact their trade .this scale - free property makes the ship trade network prone to the spreading and persistence of bioinvasive organisms ( e.g. pastor - satorras & vespignani 2001 ) .the average nearest neighbors s degrees , a measure of network assortativity , additionally underline the hubs - spokes property of cargo ship traffic ( see electronic supplementary material ) . strengths and degrees of the portsare related according to the scaling relation ( 95% ci for sma regression , warton _hence , the strength of a port grows generally faster than its degree ( fig .[ degree_distribution]d ) . in other words ,highly connected ports not only have many links , but their links also have a higher than average weight .this observation agrees with the fact that busy ports are better equipped to handle large ships with large amounts of cargo .a similar result , , was found for airports ( barrat _ et al ._ 2004 ) , which may hint at a general pattern in transportation networks . in the light of bioinvasion , these results underline empirical findings that big ports are more heavily invaded because of increased propagule pressure by ballast water of more and larger ships ( mack _ et al . _ 2000 , williamson 1996 , see e.g. cohen & carlton 1998 ) .a further indication of the importance of a node is its betweenness centrality ( freeman 1979 , newman 2004 ) .the betweenness of a port is the number of topologically shortest directed paths in the network that pass through this port . in fig .[ full_netw]b we plot and list the most central ports . generally speaking , centrality and degreeare strongly correlated ( pearson s correlation coefficient : ) , but in individual cases other factors can also play a role .the panama and suez canal , for instance , are shortcuts to avoid long passages around south america and africa .other ports have a high centrality because they are visited by a large number of ships ( e.g. shanghai ) whereas others gain their status primarily by being connected to many different ports ( e.g. antwerp ) .to compare the movements of cargo ships of different types , separate networks were generated for each of the three main ship types : container ships , bulk dry carriers , and oil tankers . 
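node strength, betweenness centrality and the strength-degree scaling can be obtained along the following lines. this sketch uses an ordinary log-log least-squares fit rather than the sma regression quoted above, and it is run on a synthetic weighted graph, so the fitted exponent comes out near one rather than above it; it only illustrates the mechanics.

```python
import numpy as np
import networkx as nx

def strength_degree_scaling(g):
    """node strength (summed link weights), betweenness centrality, and the
    exponent beta of the scaling s ~ k**beta from an ordinary log-log fit."""
    deg = dict(g.degree())                     # in-degree + out-degree
    stren = dict(g.degree(weight="weight"))    # in-strength + out-strength
    btw = nx.betweenness_centrality(g)         # shortest-path betweenness

    nodes = [n for n in g if deg[n] > 0 and stren[n] > 0]
    beta, _ = np.polyfit(np.log([deg[n] for n in nodes]),
                         np.log([stren[n] for n in nodes]), 1)
    hubs = sorted(btw, key=btw.get, reverse=True)[:5]
    return beta, hubs

# synthetic weighted directed graph standing in for the gcsn
rng = np.random.default_rng(3)
demo = nx.gnp_random_graph(300, 0.05, directed=True, seed=3)
for u, v in demo.edges():
    demo[u][v]["weight"] = float(rng.lognormal(mean=10.0, sigma=1.0))
beta, hubs = strength_degree_scaling(demo)
print(f"fitted scaling exponent: {beta:.2f}; highest-betweenness nodes: {hubs}")
```

on the real data, where busy ports attract disproportionately heavy links, the fitted exponent exceeds one, which is the superlinear scaling reported in the text.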
applying the network parameters introduced in the previous section to these three subnetworks reveals some broad - scale differences ( see table [ tab : trajs ] ) .the network of container ships is densely clustered , , has a rather low mean degree , , and a large mean number of journeys ( i.e. number of times any ship passes ) per link , . the bulk dry carrier network , on the other hand , is less clustered , has a higher mean degree , and fewer journeys per link ( , , ) . for the oil tankers ,we find intermediate values ( , , ) .note that the mean degrees of the subnetworks are substantially smaller than that of the full gcsn , indicating that different ship types use essentially the same ports but different connections .a similar tendency appears in the scaling of the link weight distributions ( fig .[ degree_distribution]b ) . can be approximated as power laws for each network , but with different exponents .the container ships have the smallest exponent ( ) and bulk dry carriers the largest ( ) with oil tankers in between ( ) .in contrast , the exponents for the distribution of node strength are nearly identical in all three subnetworks , , and , respectively .these numbers give a first indication that different ship types move in distinctive patterns .container ships typically follow set schedules visiting several ports in a fixed sequence along their way , thus providing regular services .bulk dry carriers , by contrast , appear less predictable as they frequently change their routes on short notice depending on the current supply and demand of the goods they carry . the larger variety of origins and destinations in the bulk dry carrier network ( ports , compared to for container ships ) explains the higher average degree and the smaller number of journeys for a given link .oil tankers also follow short - term market trends , but , because they can only load oil and oil products , the number of possible destinations ( ) is more limited than for bulk dry carriers .these differences are also underlined by the betweenness centralities of the three network layers ( see electronic supplementary material ) .while some ports rank highly in all categories ( e.g. suez canal , shanghai ) , others are specialized on certain ship types .for example , the german port of wilhelmshaven ranks tenth in terms of its world - wide betweenness for oil tankers , but is only 241st for bulk dry carriers and 324th for container ships .we can gain further insight into the roles of the ports by examining their community structure .communities are groups of ports with many links within the groups but few links between different groups .we calculated these communities for the three subnetworks with a modularity optimization method for directed networks ( leicht & newman 2008 ) and found that they differ significantly from modularities of corresponding erds - renyi graphs ( fig .[ modularity ] , guimer _ et al .the network of container trade shows 12 communities ( fig .[ modularity]a ) .the largest ones are located ( 1 ) on the arabian , asian , and south african coasts , ( 2 ) on the north american east coast and in the caribbean , ( 3 ) in the mediterranean , the black sea , and on the european west coast , ( 4 ) in northern europe , and ( 5 ) in the far east and on the american west coast . the transport of bulk dry goods reveals 7 groups ( fig .[ modularity]b ) . some can be interpreted as geographic entities ( e.g. 
north american east coast , trans - pacific trade ) while others are dispersed on multiple continents .especially interesting is the community structure of the oil transportation network which shows 6 groups ( fig .[ modularity]c ) : ( 1 ) the european , north and west african market ( 2 ) a large community comprising asia , south africa and australia , ( 3 ) three groups for the atlantic market with trade between venezuela , the gulf of mexico , the american east coast and northern europe , and ( 4 ) the american pacific coast .it should be noted that the network includes the transport of crude oil as well as commerce with already refined oil products so that oil producing regions do not appear as separate communities .this may be due to the limit in the detectability of smaller communities by modularity optimization ( fortunato & barthlemy 2007 ) , but does not affect the relevance of the revealed ship traffic communities . because of the , by definition , higher transport intensity within communities , bioinvasive spread is expected to be heavier between ports of the same community .however , in fig .[ modularity ] it becomes clear that there are no strict geographical barriers between communities .thus , spread between communities is very likely to occur even on small spatial scales by shipping or ocean currents between close - by ports that belong to different communities . despite the differences between the three main cargo fleets ,there is one unifying feature : their motif distribution ( milo _ et al ._ 2002 ) . like most previous studies, we focus here on the occurrence of three - node motifs and present their normalized score , a measure for their abundance in a network ( fig .[ motif ] ) .strikingly , the three fleets have practically the same motif distribution .in fact , the scores closely resemble those found in the world wide web and different social networks which were conjectured to form a superfamily of networks ( milo _ et al .this superfamily displays many transitive triplet interactions ( i.e. if and , then ) ; for example , the overrepresented motif 13 in fig . [ motif ] , has six such interactions .intransitive motifs , like motif 6 , are comparably infrequent .the abundance of transitive interactions in the ship networks indicates that cargo can be transported both directly between ports as well as via several intermediate ports .thus , the high clustering and redundancy of links ( robustness to link failures ) appears not only in the gcsn but also in the three subnetworks .the similarity of the motif distributions to other humanly optimized networks underlines that cargo trade , like social networks and the world wide web , depends crucially on human interactions and information exchange . 
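the three-node motif statistics can be approximated with networkx's triad census, comparing the observed counts of each directed triad type against degree-preserving rewirings of the same graph. this is only a sketch of the procedure of milo et al.: the rewiring below relies on networkx's directed_edge_swap (available in networkx 3.x), the random ensemble is small, and it is run on a synthetic graph rather than the ship data.

```python
import numpy as np
import networkx as nx

def motif_zscores(g, n_random=10, swaps_per_edge=10, seed=0):
    """z-scores of the 16 directed triad counts against degree-preserving
    rewirings; connected three-node motifs are the types other than
    '003', '012' and '102'."""
    rng = np.random.default_rng(seed)
    observed = nx.triadic_census(g)
    random_counts = {k: [] for k in observed}
    for _ in range(n_random):
        r = g.copy()
        nswap = swaps_per_edge * r.number_of_edges()
        nx.directed_edge_swap(r, nswap=nswap, max_tries=100 * nswap,
                              seed=int(rng.integers(10**9)))
        census = nx.triadic_census(r)
        for k, v in census.items():
            random_counts[k].append(v)
    z = {}
    for k, obs in observed.items():
        mu, sd = np.mean(random_counts[k]), np.std(random_counts[k])
        z[k] = (obs - mu) / sd if sd > 0 else 0.0
    return z

demo = nx.gnp_random_graph(100, 0.06, directed=True, seed=4)
scores = motif_zscores(demo)
print({k: round(v, 1) for k, v in scores.items() if abs(v) > 1})
```

normalizing such z-scores across all connected triad types gives profiles of the kind that are compared between the three fleets, the world wide web and social networks in fig. [ motif ].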
while advantageous for the robustness of trade , the clustering of links as triplets also has an unwanted side effect : in general , the more clustered a network , the more vulnerable it becomes to the global spread of alien species , even for low invasion probabilities ( newman 2003b ) .going beyond the network perspective , the data base also provides information about the movement characteristics per individual ship ( table [ tab : trajs ] ) .the average number of distinct ports per ship does not differ much between different ship classes , but container ships call much more frequently at ports than bulk dry carriers and oil tankers .this difference is explained by the characteristics and operational mode of these ships .normally , container ships are fast ( between 20 and 25 knots ) and spend less time ( days on average in our data ) in the port for cargo operations .by contrast , bulk dry carriers and oil tankers move more slowly ( between 13 and 17 knots ) and stay longer in the ports ( on average days for bulk dry carriers , days for oil tankers ) . the speed at sea and of cargo handling , however , is not the only operational difference .the topology of the trajectories also differs substantially .characteristic sample trajectories for each ship type are presented in fig .[ distributions_p]a - c . the container ship ( fig .[ distributions_p]a ) travels on some of the links several times during the study period whereas the bulk dry carrier ( fig .[ distributions_p]b ) passes almost every link exactly once . the oil tanker ( fig .[ distributions_p]c ) commutes a few times between some ports , but by and large also serves most links only once .we can express these trends in terms of a `` regularity index '' that quantifies how much the frequency with which each link is used deviates from a random network .consider the trajectory of a ship calling times at distinct ports and travelling on distinct links .we compare the mean number of journeys per link to the average link usage in an ensemble of randomized trajectories with the same number of nodes and port calls . to quantify the difference between real and random trajectories we calculate the score ( where is the standard deviation of in the random ensemble ) . if , the real trajectory is indistinguishable from a random walk , whereas larger values of indicate that the movement is more regular .figures [ distributions_p]d - f present the distributions of the regularity index for the different fleets . for container ships , is distributed broadly around , thus supporting our earlier observation that most container ships provide regular services between ports along their way .trajectories of bulk dry carriers and oil tankers , on the other hand , appear essentially random with the vast majority of ships near .in this article , we view global ship movements as a network based on detailed arrival and departure records . 
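written out, the regularity index is z = (f - <f_rand>) / sigma(f_rand), where f is the ship's mean number of journeys per distinct link and the reference ensemble consists of randomized trajectories over the same ports with the same number of port calls. the python sketch below implements one plausible reading of that ensemble (a uniformly random call sequence without immediate repeats); the liner-like and tramp-like example routes are invented.

```python
import numpy as np

def regularity_index(trajectory, n_random=500, seed=0):
    """z-score of a ship's mean journeys per distinct link relative to
    randomized trajectories over the same ports and number of calls."""
    rng = np.random.default_rng(seed)
    ports = sorted(set(trajectory))
    n_calls = len(trajectory)

    def mean_link_use(seq):
        links = list(zip(seq[:-1], seq[1:]))
        return len(links) / len(set(links))

    def random_trajectory():
        seq = [rng.choice(ports)]
        while len(seq) < n_calls:
            nxt = rng.choice(ports)
            if nxt != seq[-1]:            # no journey from a port to itself
                seq.append(nxt)
        return seq

    f_real = mean_link_use(trajectory)
    f_rand = np.array([mean_link_use(random_trajectory()) for _ in range(n_random)])
    return (f_real - f_rand.mean()) / f_rand.std()

# a liner-like route repeating a fixed loop versus a one-off wandering route
liner = ["a", "b", "c", "d"] * 6
tramp = list("abcdefghij") + list("cajdhfbige")
print(f"liner z = {regularity_index(liner):.1f}, "
      f"tramp z = {regularity_index(tramp):.1f}")
```

a route that repeats the same loop many times scores a large positive z, while a route that uses almost every link only once stays near zero, mirroring the container-ship versus bulk-carrier contrast described above.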
until recently , surveys of seaborne trade had to rely on far less data : only the total number of arrivals at some major ports were publicly accessible , but not the ships actual paths ( zachcial & heideloff 2001 ) .missing information about the frequency of journeys , thus , had to be replaced by plausible assumptions , the gravity model being the most popular choice .it posits that trips are , in general , more likely between nearby ports than between ports far apart .if is the distance between ports and , the decline in mutual interaction is expressed in terms of a distance deterrence function .the number of journeys from to then takes the form , where is the total number of departures from port and the number of arrivals at ( haynes & fotheringham 1984 ) .the coefficients and are needed to ensure and .how well can the gravity model approximate real ship traffic ?we choose a truncated power law for the deterrence function , .the strongest correlation between model and data is obtained for and km ( see electronic supplementary material ) . at first sight , the agreement between data and model appears indeed impressive .the predicted distribution of travelled distances ( fig .[ gravity]a ) fits the data far better than a simpler non - spatial model that preserves the total number of journeys , but assumes completely random origins and destinations. a closer look at the gravity model , however , reveals its limitations . in fig .[ gravity]b we count how often links with an observed number of journeys are predicted to be passed times .ideally all data points would align along the diagonal , but we find that the data are substantially scattered . although the parameters and were chosen to minimize the scatter , the correlation between data and model is only moderate ( kendall s ) . in some cases ,the prediction is off by several thousand journeys per year .recent studies have used the gravity model to pinpoint the ports and routes central to the spread of invasive species ( drake & lodge 2004 , tatem _ et al .the model s shortcomings pose the question how reliable such predictions are . for this purpose, we investigated a dynamic model of ship - mediated bioinvasion where the weights of the links are either the observed traffic flows or the flows of the gravity model .we follow previous epidemiological studies ( rvachev & longini 1985 , flahault _ et al ._ 1988 , hufnagel _ et al ._ 2004 , colizza _ et al ._ 2006 ) in viewing the spread on the network as a metapopulation process where the population dynamics on the nodes are coupled by transport on the links . in our model, ships can transport a surviving population of an invasive species with only a small probability on each journey between two successively visited ports .the transported population is only a tiny fraction of the population at the port of origin . immediately after arriving at a new port , the species experiences strong demographic fluctuations which lead in most cases to the death of the imported population .if however the new immigrants beat the odds of this `` ecological roulette '' ( carlton & geller 1993 ) and establish , the population grows rapidly following the stochastic logistic equation with growth rate and gaussian white noise . for details of the model, we refer to the electronic supplementary material . starting from a single port at carrying capacity , we model contacts between ports as poisson processes with rates ( empirical data ) or ( gravity model ) . 
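a doubly constrained gravity model of this kind can be fitted by the usual balancing procedure (iterative proportional fitting) once a deterrence function is chosen. the sketch below assumes a truncated power law f(d) = d**(-gamma) * exp(-d / kappa); the exponent and cutoff in the example are arbitrary placeholders rather than the values fitted in the study, and the four-port distance matrix is invented.

```python
import numpy as np

def gravity_flows(o_dep, i_arr, dist, gamma=1.0, kappa=2500.0, n_iter=200):
    """doubly constrained gravity model f_ij = a_i * b_j * o_i * i_j * f(d_ij):
    balancing factors are chosen so that row sums match the departures o_dep
    and column sums match the arrivals i_arr (iterative proportional fitting).
    here the products a_i * o_i and b_j * i_j are updated directly."""
    d = np.where(dist > 0, dist, np.inf)            # exclude self-links
    f = d ** (-gamma) * np.exp(-d / kappa)
    row = o_dep.astype(float).copy()                # plays the role of a_i * o_i
    col = i_arr.astype(float).copy()                # plays the role of b_j * i_j
    for _ in range(n_iter):
        row = o_dep / np.maximum((f * col).sum(axis=1), 1e-12)
        col = i_arr / np.maximum((f * row[:, None]).sum(axis=0), 1e-12)
    return row[:, None] * col[None, :] * f

# toy example: four ports, distances in km, departures/arrivals in journeys
dist = np.array([[0, 300, 8000, 9000],
                 [300, 0, 7800, 8800],
                 [8000, 7800, 0, 600],
                 [9000, 8800, 600, 0]], dtype=float)
o_dep = np.array([120.0, 80.0, 150.0, 50.0])
i_arr = np.array([100.0, 90.0, 140.0, 70.0])
flows = gravity_flows(o_dep, i_arr, dist)
print(np.round(flows, 1))
print("row sums:", np.round(flows.sum(axis=1), 1),
      "column sums:", np.round(flows.sum(axis=0), 1))
```

by construction the model reproduces the marginal totals exactly, which is why its failures show up only in the link-by-link comparison with the observed journeys.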
as shown in fig .[ metapop]a , the gravity model systematically overestimates the spreading rate , and the difference can become particularly pronounced for ports which are well - connected , but not among the central hubs in the network ( fig .[ metapop]b ) . comparing typical sequences of infected ports , we find that the invasions driven by the real traffic flows tend to be initially confined to smaller regional ports , whereas in the gravity model the invasions quickly reach the hubs. the total out- and in - flows at the ship journeys origin and departure ports , respectively , are indeed more strongly positively correlated in reality than in the model ( vs. ) .the gravity model thus erases too many details of a hierarchical structure present in the real network .that the gravity model eliminates most correlations , is also plausible from simple analytic arguments , see electronic supplementary material for details .the absence of strong correlations makes the gravity model a suitable null hypothesis if the correlations in the real network are unknown , but several recent studies have shown that correlations play an important role in spreading processes on networks ( e.g. newman 2002 , bogua & pastor - satorras 2002 ) . hence , if the correlations are known , they should not be ignored . while we observed that the spreading rates for the ais data were consistently slower than for the gravity model even when different parameters or population models were considered , the time scale of the invasion is much less predictable .the assumption that only a small fraction of invaders succeed outside their native habitat appears realistic ( mack _ et al .furthermore , the parameters in our model were adjusted so that the per - ship - call probability of initiating invasion is approximately , a rule - of - thumb value stated by drake & lodge ( 2004 ) . still , too littleis empirically known to pin down individual parameters with sufficient accuracy to give more than a qualitative impression .it is especially difficult to predict how a potential invader reacts to the environmental conditions at a specific location .growth rates certainly differ greatly between ports depending on factors such as temperature or salinity , with respect to the habitat requirements of the invading organisms .our results should , therefore , be regarded as one of many different conceivable scenarios .a more detailed study of bioinvasion risks based on the gcsn is currently underway ( seebens & blasius 2009 ) .we have presented a study of ship movements based on ais records . viewing the ports as nodes in a network linked by ship journeys, we found that global cargo shipping , like many other complex networks investigated in recent years , possesses the small world property as well as broad degree and weight distributions .other features , like the importance of canals and the grouping of ports into regional clusters , are more specific to the shipping industry . an important characteristic of the networkare differences in the movement patterns of different ship types .bulk dry carriers and oil tankers tend to move in a less regular manner between ports than container ships .this is an important result regarding the spread of invasive species because bulk dry carriers and oil tankers often sail empty and therefore exchange large quantities of ballast water . 
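for completeness, the kind of metapopulation simulation used above can be sketched as follows: journeys on each link occur as poisson events at the given rates, each journey independently has a small chance of carrying a small fraction of the source population, and each port's population follows a noisy logistic equation. this python sketch uses a simple euler-maruyama discretisation and invented rates and parameter values; it is not the authors' implementation, whose parameters and noise model are specified in the electronic supplementary material.

```python
import numpy as np

def simulate_invasion(rates, t_max=20.0, dt=0.01, p_intro=0.005, frac=1e-3,
                      r=1.0, sigma=0.3, carrying=1.0, start=0, seed=0):
    """stochastic metapopulation sketch of species spread on a shipping network.

    rates[i, j] : journeys per unit time from port i to port j
    returns the first time each port exceeds half its carrying capacity."""
    rng = np.random.default_rng(seed)
    n = rates.shape[0]
    pop = np.zeros(n)
    pop[start] = carrying
    first_hit = np.full(n, np.inf)
    first_hit[start] = 0.0

    for step in range(int(t_max / dt)):
        t = (step + 1) * dt
        journeys = rng.poisson(rates * dt)            # journeys in this interval
        carriers = rng.binomial(journeys, p_intro)    # journeys moving propagules
        inflow = (carriers * frac * pop[:, None]).sum(axis=0)
        outflow = (carriers * frac).sum(axis=1) * pop
        pop = pop + inflow - outflow
        # logistic growth with multiplicative gaussian noise (euler-maruyama)
        noise = rng.normal(0.0, np.sqrt(dt), size=n)
        pop += r * pop * (1.0 - pop / carrying) * dt + sigma * pop * noise
        pop = np.clip(pop, 0.0, None)
        newly = (first_hit == np.inf) & (pop > 0.5 * carrying)
        first_hit[newly] = t
    return first_hit

# invented example: a chain of five ports with heavier traffic near the source
rates = np.array([[0, 40,  5,  0,  0],
                  [40, 0, 30,  5,  0],
                  [5, 30,  0, 20,  5],
                  [0,  5, 20,  0, 10],
                  [0,  0,  5, 10,  0]], dtype=float)
print("first-invasion times:", np.round(simulate_invasion(rates), 2))
```

running the same simulation with the observed journey rates and with gravity-model rates is what produces the comparison of spreading speeds shown in fig. [ metapop ].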
the gravity model , which is the traditional approach to forecasting marine biological invasions , captures some broad trends of global cargo trade , but for many applications its results are too crude .future strategies to curb invasions will have to take more details into account .the network structure presented in this article can be viewed as a first step in this direction . ship class & ships & mgt & & & & & & & & & & & whole fleet & 16363 & 664.7 & 951 & * * 76.4 & 0.49 & 2.5 & 13.57 & 1.71 & 1.02 & 10.4 & 15.6 & 31.8 & 0.63 + container ships & 3100 & 116.8 & * * 378 & 32.4 & 0.52 & 2.76 & * * 24.25 & * * 1.42 & 1.05 & 11.2 & * * 21.2 & * * 48.9 & * * 1.84 + bulk dry carriers & 5498 & 196.8 & 616 & 44.6 & 0.43 & 2.57 & 4.65 & 1.93 & 1.13 & 8.9 & 10.4 & 12.2 & 0.03 + oil tankers & 2628 & 178.4 & 505 & 33.3 & 0.44 & 2.74 & 5.07 & 1.73 & 1.01 & 9.2 & 12.9 & 17.7 & 0.19 + + we thank b. volk , k. h. holocher , a. wilkinson , j. m. drake and h. rosenthal for stimulating discussions and helpful suggestions .we also thank lloyd s register fairplay for providing their shipping data base .this work was supported by german vw - stiftung and bmbf .m. bogua and r. pastor - satorras .epidemic spreading in correlated complex networks ._ physical review e _ , 66:0 047104 , 2002. j. buhl , j. gautrais , n. reeves , r. v. sol , s. valverde , p. kuntz , and g. theraulaz .topological patterns in street networks of self - organized urban settlements .european physical journal b _ , 49:0 513522 , 2006 .v. colizza , a. barrat , m. barthlemy , and a. vespignani .the role of the airline transportation network in the prediction and predictability of global epidemics . _ proc ._ , 103:0 20152020 , 2006 .r. guimer , s. mossa , a. turtschi , and l. a. n. amaral .the worldwide air transportation network : anomalous centrality , community structure , and cities global roles ._ , 102:0 77947799 , 2005 .r. n. mack , d. simberloff , w. m. lonsdale , h. evans , m. clout , and f. a. bazzaz .biotic invasions : causes ,epidemiology , global consequences , and control ._ ecological applications _ , 10:0 689710 , 2000. m. e. j. newman . who is the best connected scientist ?a study of scientific coauthorship networks . in e. ben - naim , h. frauenfelder , and z. toroczkai , editors ,_ complex networks _ , pages 337370 .springer , berlin , 2004 .motif distributions of the three main cargo fleets . a positive ( negative )normalized score indicates that a motif is more ( less ) frequent in the real network than in random networks with the same degree sequence . for comparison, we overlay the scores of the world wide web and social networks .the agreement suggests that the ship networks fall in the same superfamily of networks ( milo _ et al .the motif distributions of the fleets are maintained even when 25% , 50% and 75% of the weakest connections are removed . ,scaledwidth=100.0% ] sample trajectories of ( a ) a container ship with a regularity index , ( b ) a bulk dry carrier , , ( c ) an oil tanker , . in the three trajectories ,the numbers and the line thickness indicate the frequency of journeys on each link .( d)-(f ) distribution of for the three main fleets.,scaledwidth=90.0% ] ( a ) histogram of port - to - port distances travelled in the gcsn ( navigable distances around continents as indicated in fig .[ full_netw ] ) .we overlay the predictions of two different models . 
the gravity model ( red ) , based on information about distances between ports and total port calls ,gives a much better fit than a simpler model ( blue ) which only fixes the total number of journeys .( b ) count of port pairs with observed and predicted journeys .the flows were calculated with the gravity model ( rounded to the nearest integer ) .some of the worst outliers are highlighted in blue . : antwerp to calais ( vs. ) . : hook of holland to europoort ( vs. ) . : calais to dover ( vs. ) . : harwich to hook of holland ( vs. ).,scaledwidth=100.0% ] results from a stochastic population model for the spread of an invasive species between ports .( a ) the invasion starts from one single , randomly chosen port .( b ) the initial port is fixed as bergen ( norway ) , an example of a well - connected port ( degree ) which is not among the central hubs .the rate of journeys from port to per year is assumed to be ( real flows from the gcsn ) or ( gravity model ) .each journey has a small probability of transporting a tiny fraction of the population from origin to destination .parameters were adjusted ( , , ) to yield a per - ship - call probability of initiating invasion of ( drake & lodge 2004 , see electronic supplementary material for details ) . plotted are the cumulative numbers of invaded ports ( population number larger than half the carrying capacity ) averaged over ( a ) , ( b ) simulation runs ( standard error equal to line thickness ) ., scaledwidth=70.0% ]
|
transportation networks play a crucial role in human mobility , the exchange of goods , and the spread of invasive species . with 90% of world trade carried by sea , the global network of merchant ships provides one of the most important modes of transportation . here we use information about the itineraries of 16,363 cargo ships during the year 2007 to construct a network of links between ports . we show that the network has several features which set it apart from other transportation networks . in particular , most ships can be classified in three categories : bulk dry carriers , container ships and oil tankers . these three categories do not only differ in the ships physical characteristics , but also in their mobility patterns and networks . container ships follow regularly repeating paths whereas bulk dry carriers and oil tankers move less predictably between ports . the network of all ship movements possesses a heavy - tailed distribution for the connectivity of ports and for the loads transported on the links with systematic differences between ship types . the data analyzed in this paper improve current assumptions based on gravity models of ship movements , an important step towards understanding patterns of global trade and bioinvasion .
|
quantum technologies are becoming reality , with huge efforts being devoted to developing scalable quantum computers and robust quantum communications , e.g. , for building a future quantum internet . in this global scenario , quantum key distribution ( qkd ) is certainly one of the most advanced areas , with intense research activities directed towards practical implementations .qkd represents a set of strategies that , integrating both quantum and classical communication , allow two authorized remote users ( alice and bob ) to generate a random sequence of bits ; this is then used as an encryption key in a one - time pad protocol , therefore providing an unconditionally secure ( information - theoretic ) private communication between the remote users .the effectiveness of qkd relies on the ground rule of encoding classical information in non - orthogonal quantum states , that are then transmitted through a noisy quantum channel controlled by the eavesdropper ( eve ) .this is also equivalent to sending the non - orthogonal part of discordant quantum states . in this way , eve s attack is bounded by fundamental laws of quantum physics : any information gained by eve creates loss and noise on the quantum channel .thanks to this trade - off , alice and bob can accurately quantify the amount of classical error correction and privacy amplification needed to reduce eve s stolen information to a negligible amount . since the first proposals to implement quantum information and computational tasks , continuous variable ( cv ) systems have attracted increasing attention .the fact of using quantum systems with continuous spectra ( infinite - dimensional hilbert spaces ) has several advantages with respect to the traditional approach based on discrete variables ( qubits ) .in particular , one can implement qkd at _ high rates _ by using highly - modulated coherent states and homodyne detections , not only in one - way schemes , but also in two - way protocols and cv strategies based on measurement - device independence ( mdi ) ideal implementations of cv - qkd provide the highest key rates , not so far from the ultimate repeaterless bound recently established in ref . . for a lossy channel with transmissivity , the maximum rate achievable by any qkd protocol ( secret - key capacity ) is equal to , with a fundamental rate - loss scaling of bits per channel use for long distances , i.e. , at high loss .the most practical one - way cv - qkd protocols , i.e. , the switching and no - switching protocols , can potentially reach an asymptotic long - distance rate of bits per use , which is half the secret key capacity . similar performance for cv - mdi - qkd in the most asymmetric configuration . in this workwe deepen the study of the secret key rates of the most known one - way cv - qkd protocols .in particular , we explicitly study their security in the presence of gaussian two - mode attacks , representing the residual eavesdropping strategy after the de finetti symmetrization over two - mode blocks . under these attacks, we derive the analytical expressions of the asymptotic key rates . 
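the rate-loss scaling quoted above can be made explicit with a short numerical comparison. the repeaterless secret-key capacity of a pure-loss channel of transmissivity eta is -log2(1 - eta), and the text states that ideal one-way cv-qkd protocols reach half of it at long distance, i.e. roughly eta/(2 ln 2) bits per use; the fibre attenuation of 0.2 db/km used below is a standard assumption, not part of the text.

```python
import numpy as np

def skc_bound(eta):
    """repeaterless secret-key capacity of a pure-loss channel,
    in bits per channel use: -log2(1 - eta)."""
    return -np.log2(1.0 - eta)

def ideal_one_way_rate(eta):
    """long-distance asymptotic rate of ideal one-way cv-qkd, quoted in the
    text as half the secret-key capacity, ~ eta / (2 ln 2) bits per use."""
    return eta / (2.0 * np.log(2.0))

# illustrative comparison, assuming standard fibre loss of 0.2 dB/km
for L_km in (10, 50, 100, 200):
    eta = 10 ** (-0.2 * L_km / 10.0)
    print(f"{L_km:4d} km  eta={eta:.4f}  capacity={skc_bound(eta):.4f}  "
          f"ideal one-way ~ {ideal_one_way_rate(eta):.4f}")
```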
with these in hands, we show that eavesdropping strategies based on correlated ancillas turn out to be _ strictly _ less effective than gaussian attacks based on uncorrelated ancillas ( single - mode attacks ) .in other words , any two - mode gaussian attack with strictly non - zero correlations improves alice and bob s key rate .coherent states in an independent and identical fashion .these are sent through a quantum channel ( eve ) and received by bob , whose measurements provide the classical variables for .eve s general eavesdropping is based on a global unitary operation , , applied to the instances of the one - way communication . * ( b ) * after random permutations , the coherence of the general attack is confined within each two - mode block . * ( c ) * within an arbitrary block , we show a gaussian two - mode attack against the protocol ( in eb representation ) .a realistic gaussian attack is simulated by two beam splitters , with transmissivity , mixing alice s signals , and , with eve s ancillary modes , and , belonging to a larger set of modes in her hands .the reduced state of modes and is gaussian with thermal noise and correlation matrix as in eq .( [ veve]).,scaledwidth=35.0% ]let us consider the communication scheme of fig .[ scheme](a ) .alice sends to bob coherent states .the amplitudes , for , are independently and identically modulated by a bivariate zero mean gaussian distribution of variance .the communication channel is under eve s control , and the output detections provide bob with classical outcomes .after uses of the channel , the parties share two correlated random sequences of symbols given by the sets and . for the sake of clarity , we consider reverse reconciliation ( rr ) , so that the key is obtained by alice inferring bob s variables . now , when bob applies homodyne __ detections , randomly switching between measurements on quadrature and , we have the switching protocol . by contrast , when bob measures both quadratures ( heterodyne detection ) , we have the no - switching protocol . herewe discuss the latter case , while we leave the analysis of the switching protocol in appendix [ app1 ] . in a general attack, eve applies a global unitary operation , which coherently process her ancillary modes with all the signals exchanged by the parties , with the ancillary outputs stored in a quantum memory .one has that bob - eve joint system is described by a quantum state in the following form is eve s total input state .the security analysis considering this general scenario is not a practically solvable problem but , in the limit of , it has been proved that one can get rid of the cross - correlations between different uses of the channel .more specifically , with no loss of generality , the security analysis can be simplified by applying symmetric random permutations on the input ( ) and output ( ) classical data - sets .note that alice and bob may arrange the signals into two - mode blocks , with .then , they can apply random permutations over the blocks rather than over the single uses of the channel .after such a symmetrization , the quantum state given in eq .( [ qc ] ) can be rewritten as the following tensor product where is large . 
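the admissible correlations (g, g') of eve's two-mode ancilla state are limited by the uncertainty principle, and this constraint is easy to check numerically. the sketch below builds the covariance matrix [[omega*I, G], [G, omega*I]] described above and tests the bona fide condition V + i*Omega >= 0 (vacuum-noise units); the example values of omega, g, g' are only illustrative.

```python
import numpy as np

def eve_cm(omega, g, gp):
    """two-mode covariance matrix of eve's correlated ancillas,
    V = [[omega*I, G], [G, omega*I]] with G = diag(g, g'), vacuum = 1 units."""
    I2, G = np.eye(2), np.diag([g, gp])
    return np.block([[omega * I2, G], [G, omega * I2]])

def is_physical(V, tol=1e-9):
    """bona fide (uncertainty-principle) check for an n-mode covariance
    matrix: V + i*Omega >= 0, with Omega the symplectic form. this is the
    condition that bounds the admissible correlation parameters."""
    n = V.shape[0] // 2
    Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    return bool(np.all(np.linalg.eigvalsh(V + 1j * Omega) >= -tol))

# for omega = 2 (thermal ancillas), correlations such as (g, g') = (1, -1)
# are allowed, while (g, g') = (2.5, 0) violates the bound
print(is_physical(eve_cm(2.0, 1.0, -1.0)))   # True
print(is_physical(eve_cm(2.0, 2.5, 0.0)))    # False
```

the single-mode (collective) attack corresponds to g = g' = 0, which is always inside the physical region.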
after this symmetrization , the initial global coherence of quantum state of eq .( [ qc ] ) is reduced to that one enclosed within each two - mode state , associated with the arbitrary block , as also depicted in fig .[ scheme ] ( b ) .the only effective coherence to consider is two - mode and this scenario can be further simplified using the extremality of gaussian states . in other words ,the previous assumptions allow us to reduce the general eavesdropping strategy to a gaussian two - mode attack within each block .in particular , we may consider the most realistic form of such an attack , where eve exploits two beam splitters to combine alice s signals with correlated ancillas prepared in an arbitrary gaussian state .see fig .[ scheme](c ) .note that this is a reduction which is often considered in practice .the security analysis of one - way cv - qkd protocols under collective ( single - mode ) gaussian attacks is typically restricted to the most practical case of entangling - cloner attacks , resulting in thermal - loss channels between alice and bob .the optimal key rate achievable over this channel has been recently upper - bounded in ref . and lower - bounded in ref .the security analysis is performed in the entanglement based ( eb ) representation , as also shown in fig .[ scheme](c ) .alice owns a source of two - mode squeezed vacuum ( tmsv ) states .these are zero - mean gaussian states with covariance matrix ( cm ) of the form{cc}% \mu\mathbf{i } & \sqrt{\mu^{2}-1}\mathbf{z}\\ \sqrt{\mu^{2}-1}\mathbf{z } & \mu\mathbf{i}% \end{array } \right ) , \label{epr}%\ ] ] where , and . in each block ,alice s input state is gaussian of the form and cm .the signal coherent states and are remotely projected on modes , and , by applying heterodyne detections on local modes and . in this way alice modulates the amplitudes and according to a zero - mean gaussian distribution with variance ( which is typically large ) .as previously mentioned , we assume a realistic gaussian two - mode attack where eve employs two identical beam - splitters with transmissivity . these are used to mix alice s input modes , and , with eve s ancillary modes , and , respectively .the latter belong to a larger set of ancillary states owned by the eavesdropper .the reduced gaussian state is completely determined by the following cm {cc}% \omega\mathbf{i } & \mathbf{g}\\ \mathbf{g } & \omega\mathbf{i}% \end{array } \right ) , \text { for } \mathbf{g}:=\left ( \begin{array } [ c]{cc}% g & 0\\ 0 & g^{\prime}% \end{array } \right ) , \label{veve}%\ ] ] where quantifies eve s thermal noise , with mean number of thermal photons .the correlations between modes and are described by the parameters and in the matrix .their values are bounded by the constraints which are imposed by the the uncertainty principle .note that from the cm of eq .( [ veve ] ) , one can recover the standard collective attack scenario ( single mode attack ) for . in the ideal case of perfect rr efficiency , the key - rate ( bit per channel use )is defined as where is the mutual information between variables and and is eve s accessible information on bob s variables ( factor accounts for the double use of the channel within each block ) . 
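the entropic quantities entering the key rate are all computed from symplectic spectra, so a small helper is worth spelling out. the sketch below implements the standard gaussian entropy formula h(nu) = ((nu+1)/2)log2((nu+1)/2) - ((nu-1)/2)log2((nu-1)/2) and obtains the symplectic eigenvalues as the moduli of the eigenvalues of i*Omega*V; this is the textbook recipe, written here only as a numerical reference for the formulas used in the text.

```python
import numpy as np

def h(nu):
    """entropic function of a symplectic eigenvalue nu >= 1 (vacuum = 1 units):
    h(nu) = ((nu+1)/2) log2((nu+1)/2) - ((nu-1)/2) log2((nu-1)/2)."""
    nu = np.asarray(nu, dtype=float)
    a = (nu + 1.0) / 2.0
    b = (nu - 1.0) / 2.0
    # b*log2(b) -> 0 as nu -> 1, so a pure state contributes zero entropy
    return a * np.log2(a) - np.where(b > 1e-12,
                                     b * np.log2(np.maximum(b, 1e-300)), 0.0)

def von_neumann_entropy(V):
    """von neumann entropy of an n-mode gaussian state with covariance matrix V,
    S = sum_k h(nu_k), where nu_k are the symplectic eigenvalues of V."""
    n = V.shape[0] // 2
    Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    nu = np.sort(np.abs(np.linalg.eigvals(1j * Omega @ V)))[::2]
    return float(np.sum(h(nu)))

# sanity checks: the vacuum is pure; a thermal state with one mean photon
# has variance 3 and entropy h(3) = 2 bits
print(von_neumann_entropy(np.eye(2)))                  # ~0
print(von_neumann_entropy(3.0 * np.eye(2)), h(3.0))    # both ~2
```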
for many uses of the channel , is bounded by the holevo information here is the entropy of eve s reduced state which is equal to the entropy of alice and bob s joint state ( because the global state of alice , bob and eve is pure ) .then , is the entropy of eve s state conditioned on bob variables and ; because these are the outcomes of a rank-1 measurement , we have that alice s conditional state has entropy .nore that , for gaussian states , the von neumann entropy can be computed via the formula where are symplectic eigenvalues and by replacing in eq .( [ keyrate1 ] ) with the holevo function of eq .( [ holevo - def ] ) , one obtains the following ideal key - rate ( in rr) a consequence of the two - mode reduction strategy , alice and bob s mutual information is given by where is the contribution from the first channel use , and from the second use .each contribution is given by the following expression where describes the quadrature variance of the average thermal state arriving at bob s side , while is the quadrature variance of bob s state after alice s heterodyne detection .using these relations in eqs .( [ iab1 ] ) and ( [ iab2 ] ) , and working in the limit of , one easily obtains we note that , as one would expect , this expression does not depend on the correlation parameters and .we now describe the general steps to obtain the holevo bound ( more details are in appendix [ app - crypto ] ) . working in the eb representation , alice and bob s joint state described by the following cm{cccc}% ( \mu+1)\mathbf{i } & & \phi\mathbf{z } & \\ & ( \mu+1)\mathbf{i } & & \phi\mathbf{z}\\ \phi\mathbf{z } & & \lambda\mathbf{i } & ( 1-\tau)\mathbf{g}\\ & \phi\mathbf{z } & ( 1-\tau)\mathbf{g } & \lambda\mathbf{i}% \end{array } \right ) , \label{vtot - text}%\ ] ] where we have set } , \label{la2}%\end{aligned}\ ] ] the symplectic spectrum is obtained from the ordinary eigenvalues of matrix with {cc}% 0 & 1\\ -1 & 0 \end{array } \right ) ~.\ ] ] in the limit of large , and after some simple algebra , we find the following symplectic eigenvalues using these eigenvalues and the expansion we find the following expression for alice and bob s von neumann entropy the next step is to apply two sequential heterodyne detections on modes and , to obtain the conditional cm describing the conditional quantum state .the corresponding cm has a complicated expression that can be found in eq .( [ vc - noswitch ] ) of appendix [ app - crypto ] .computing its symplectic eigenvalues in the limit of , we find the following conditional spectrum where we have defined the conditional entropy just reads finally , using eqs .( [ vonqqq ] ) and ( [ total - vonneumannhethet ] ) in eq .( [ holevo - def ] ) , we can write eve s holevo bound as . \label{holevo}%\ ] ] it is easy to check that eq .( [ holevo ] ) recovers the expression of the holevo bound of standard collective ( single - mode ) gaussian attacks for .it is easy to compute the secret - key rate using eq .( [ iab3 ] ) and ( [ holevo ] ) in eq .( [ key - rate - teo ] ) .after some algebra , we obtain the following expression for the rate of the no - switching protocol under realistic gaussian two - mode attacks}\nonumber\\ & + \frac{1}{2}\sum_{i=\pm}\left [ h(\bar{\nu}_{i})-h(\nu_{i})\right ] . 
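the conditional covariance matrix after bob's two heterodyne detections follows from the standard gaussian conditioning rule, which may be summarized as follows. the sketch assumes ideal heterodyne detection in vacuum-noise units, V_cond = A - C (B + I)^{-1} C^T for the kept/measured block decomposition of the total cm; combined with the entropy helper above, the holevo bound is then S(V_AB) - S(V_cond). the function and index convention below are of course just one way to organize the computation.

```python
import numpy as np

def heterodyne_condition(V, idx_keep, idx_meas):
    """gaussian conditioning under ideal heterodyne detection of the modes
    whose quadrature indices are listed in idx_meas (ordering x1,p1,x2,p2,...):
    V_cond = A - C (B + I)^{-1} C^T, with A, B, C the kept, measured and
    cross blocks of V (vacuum = 1 units). a sketch of the textbook rule."""
    A = V[np.ix_(idx_keep, idx_keep)]
    B = V[np.ix_(idx_meas, idx_meas)]
    C = V[np.ix_(idx_keep, idx_meas)]
    return A - C @ np.linalg.solve(B + np.eye(B.shape[0]), C.T)

# example: heterodyning one arm of a TMSV state of variance mu leaves the
# other arm with variance mu - (mu^2 - 1)/(mu + 1) = 1, i.e. a pure state
mu = 5.0
Z = np.diag([1.0, -1.0])
V_tmsv = np.block([[mu * np.eye(2), np.sqrt(mu**2 - 1) * Z],
                   [np.sqrt(mu**2 - 1) * Z, mu * np.eye(2)]])
print(heterodyne_condition(V_tmsv, [0, 1], [2, 3]))    # ~ identity
```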
\label{ratehet2}%\end{aligned}\ ] ] in order to prove that gaussian two - mode attacks with non - zero correlations are strictly less effective than single - mode attacks , we study the derivatives of this rate .we find the following strict inequality the details of the proof are in appendix [ app - crypto ] , while here we limit the discussion to the general ideas . to show eq .( [ r - rcoll ] ) , we first seek for critical points of the function . solving the equation on the -plane , one finds that only the origin is critical . to determine the nature of , we then compute the second - order derivatives with respect the correlation parameters and .this allows us to compute the expression of the hessian matrix and study its positive definiteness .we therefore find that corresponds to the absolute minimum of the rate in eq .( [ ratehet2 ] ) within the domain defined by eq .( [ const ] ) .finally we check that the attacks over the boundary , given by the condition , also provide key rates which are strictly larger than that under the single - mode attack . in fig . [ rate - boundaries ] we show a numerical example , which is obtained by fixing the transmissivity , the thermal noise , and plotting the rate as a function of and .we see that the secret - key rate under single - mode attack ( red dot ) is always strictly less than that the rate which is obtained by any physically - permitted two - mode attack ( which is a point in the colored surface ) .the key rates for the attacks on the boundary of this region are the blue dots . ) over the plane of the correlation parameters , and .any two - mode attack corresponds to a point in the colored surface .boundary attacks , verifying the condition , are represented by the blue points .the rate of the single - mode attack is the red spot .here we fix and . for these values, the single - mode attack provides zero key - rate .on the other hand , we see that the key rate is positive for any two - mode attack with non - zero correlations.,scaledwidth=52.0% ] the origin is therefore always an absolute minimum for . as a consequence , any correlation injected into the channels by the eavesdropper to implement the coherent attack automatically increases the key rate .in this work we have explicitly studied the security of one - way cv - qkd protocols against gaussian two - mode attacks .the approach is based on an attack - reduction strategy where the parties pack the uses of the quantum channel in two - mode blocks . then, they apply random permutations over these blocks .this allows them to get rid of any cross correlation engineered by the eavesdropper between different blocks .we solved this problem analytically , and we obtained the secret - key rates under gaussian two - mode attacks , in particular , those more realistic and based on a suitable combination of entangling cloners .we have then showed that any non - zero correlation used by the eavesdropper leads to a strictly higher key - rate than the rate obtained under gaussian single - mode attacks .this is achieved under the condition that infinite signals are exchanged ( asymptotic rate ) , therefore not considering composable or finite - size analyses .we conjecture that the use of correlations is not effective even when the size of the blocks is greater than two modes. 
it would be interesting to check if this is still true if alice adopted correlated encodings between different uses of the channel .this work has been supported by the epsrc via the ` uk quantum communications hub ' ( grant no .ep / m013472/1 ) .here we provide the calculations to prove eq .( [ r - rcoll ] ) for the no - switching protocol .let be the vectorial quadrature operator describing a general mode .the impact of the attenuation and noise on the alice s modes , and , through two identical beam splitters of transmissivity are given by the following expressions where * * and are the vectorial quadrature operators describing eve s ancillary modes , and , mixed at the beam splitters with modes and , respectively .eve s reduced state is zero - mean gaussian with cm as in eq .( [ veve ] ) , with local thermal noise and correlation parameters fulfilling the constraints of eq .( [ const ] ) .we order alice and bob s output modes as follows ; then , we use eqs .( [ b ] ) and ( [ bprime ] ) to compute the cm describing alice and bob s total state .it is simple to derive the following expression{cccc}% ( \mu+1)\mathbf{i } & & \phi\mathbf{z } & \\ & ( \mu+1)\mathbf{i } & & \phi\mathbf{z}\\ \phi\mathbf{z } & & \lambda\mathbf{i } & ( 1-\tau)\mathbf{g}\\ & \phi\mathbf{z } & ( 1-\tau)\mathbf{g } & \lambda\mathbf{i}% \end{array } \right ) , \label{vtot}%\ ] ] where is the classical gaussian modulation , while and are defined in eqs . ( [ lambda ] ) and ( [ la2 ] ) . in the no - switching protocol, bob performs heterodyne detections measuring both quadratures and . from the form of the attack, we have that the variances in and , relative to both bob s modes and , are identical and given by , with specified in eq .( [ lambda ] ) .the conditional variances , after alice s heterodyne detections , are given by accounting for the double use of the channel within the block , we derive the mutual information taking the limit of large modulation ( ) , one gets the asymptotic expression of the mutual information , given in eq .( [ iab3 ] ) of the main text , i.e., the eb representation and dilation of the two - mode channel allows us to describe the joint alice - bob - eve output state as pure . noting that this quantum state is always processed by rank- measurements , one has that the purity is also preserved on the conditional state after detection .the eavesdropper is assumed to control the quantum memory storing her ancillary modes , she is computationally unbounded , but the parties exchange an infinite number of signals , . in thisregime eve s accessible information on bob s variables is bounded by the holevo quantity .it can be obtained from the von neumann entropy of alice - bob total state , and the conditional von neumann entropy .the holevo bound is then given by we need to derive the function in terms of the relevant parameters of the protocol , , , and .we then compute the symplectic spectrum of the total cm given by eq .( [ vtot ] ) , from the absolute value of the eigenvalues of the matrix , where is the ( four modes ) symplectic form . for large , one obtains the following expressions which , together with eq .( [ s - gen ] ) and eq .( [ limh ] ) , are used to calculate the total von neumann entropy given in eq .( [ vonqqq ] ) .now , the conditional cm , providing the conditional von neumann entropy , is obtained via heterodyning bob s modes and .we apply the formula for heterodyne detection to the total cm . 
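the beam-splitter dilation used in the appendix acts on covariance matrices through a simple symplectic transformation, sketched below. the sign convention of the beam-splitter matrix is one common choice and does not affect the resulting spectra; the example only verifies the expected mixing of signal and ancilla variances, tau*mu + (1-tau)*omega, quoted in the text.

```python
import numpy as np

def beam_splitter_symplectic(tau, n_modes, i, j):
    """symplectic matrix of a beam splitter with transmissivity tau acting on
    modes i and j of an n_modes system (quadrature ordering x1,p1,x2,p2,...).
    covariance matrices transform as V -> S V S^T."""
    S = np.eye(2 * n_modes)
    t, r = np.sqrt(tau), np.sqrt(1.0 - tau)
    for q in range(2):                       # x and p quadrature of each mode
        a, b = 2 * i + q, 2 * j + q
        S[a, a], S[a, b] = t, r
        S[b, a], S[b, b] = -r, t
    return S

# example: mode 0 is the signal (thermal variance 5), mode 1 is eve's ancilla
# (variance omega = 3); after a tau = 0.7 beam splitter the transmitted mode
# has variance tau*5 + (1 - tau)*3 = 4.4
V_in = np.diag([5.0, 5.0, 3.0, 3.0])
S = beam_splitter_symplectic(0.7, 2, 0, 1)
print((S @ V_in @ S.T)[0, 0])                # 4.4
```

applying two such transformations (one per block use) to the four-mode alice-bob state, with eve's correlated two-mode ancilla in place of the vacuum inputs, reproduces numerically the total covariance matrix written above.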
after some algebra, can be written in the following form{cccc}% k & & \tilde{k } & \\ & k^{\prime } & & \tilde{k}^{\prime}\\ \tilde{k } & & k & \\ & \tilde{k}^{\prime } & & k^{\prime}% \end{array } \right ) , \label{vc - noswitch}%\ ] ] with the matrix entries defined as+(\lambda + 1)\tau,\\ \tilde{k } & : = -g(1-\tau)\tau\mu(\mu+2),\\ \tilde{\lambda } & : = \lambda-\tau,\\ k^{\prime } & : = k(g\rightarrow g^{\prime}),\\ \tilde{k}^{\prime } & : = \tilde{k}(g\rightarrow g^{\prime}).\end{aligned}\ ] ] for large , the symplectic spectrum of the conditional cm is given by eq .( [ spectrum - cond - het2 ] ) .note that this spectrum does not depend on the modulation , and for we recover the conditional eigenvalues of ref . .now , from eq .( [ spectrum - cond - het2 ] ) , we derive the conditional von neumann entropy given in eq .( [ total - vonneumannhethet ] ) .combining the computed entropies , we obtain the holevo bound in eq .( [ holevo ] ) .finally , including the mutual information of eq .( [ iabapp ] ) , we derive the asymptotic key rate^{2}}\label{rate - formula - het2}\\ & + \sum_{k=\pm}\left [ h(\bar{\nu}_{k})-h(\nu_{k})\right ] .\nonumber\end{aligned}\ ] ] more precisely , for channel use , we find as given in eq .( [ ratehet2 ] ) .from the first - order derivatives and , and solving the equation , one finds a single critical point for any and ; this is given by the origin of the correlation plane ( , ) , bounded by the constraints given by eq .( [ const ] ) .we then take the second - order derivative , with respect to and , and build the ( symmetric ) hessian matrix{cc}% \partial_{g}^{2}r & \partial_{gg^{\prime}}^{2}r\\ \partial_{g^{\prime}g}^{2}r & \partial_{g^{\prime}}^{2}r \end{array } \right ) .\label{hessian - matrix}%\ ] ] from the positive definiteness of this matrix , evaluated in the critical point , one has that is an absolute minimum .we then study the sign , in , of the determinant of the hessian matrix ( [ hessian - matrix ] ) .after some algebra one can write it in the simplified form \bar{\lambda } \omega(\omega^{2}-1 ) } \label{hhethet}%\ ] ] where we have defined , \\d_{2 } & : = \omega\left [ f\left ( \tau\bar{\lambda}^{-1}\right ) + \tau ^{2}\log_{2}\frac{\bar{\lambda}+\tau}{\bar{\lambda}+\tau-2}\right ] , \\ \bar{\lambda } & : = 1+\omega(1-\tau).\end{aligned}\ ] ] one can check that , and for any and . indeed , being both attenuation and noise positive quantities , as well as , we have we then proceed with the study of the second - order derivative at the critical point .this is the first principal minor of the hessian matrix of eq .( [ hessian - matrix ] ) .it is easy to check the following chain of inequalities therefore , the extremal point is an absolute minimum for the key rate of the no - switching protocol .by contrast , we notice that the study described above is only valid for the pairs ( ) for which it is possible to define the derivatives , i.e. , those lying within the domain bounded by the constraints of eq .( [ const ] ) . in order to complete our analysiswe check that also the points at the boundary of the domain , described by eq .( [ const ] ) , give a key rate which is larger than that one obtained for .we have studied numerically these cases , computing the rate for the pairs fulfilling the condition . 
in fig .[ rate - boundaries ] we show an example of this computation , corresponding to the case of a transmissivity and thermal noise , in shot - noise unit ( snu ) .we see that the rate for single - mode collective attack ( red spot ) lies well below the blue points , which describe the key rate for the boundary two - mode attackes .the colored region gives the values of the key rate for any non - zero correlations .clearly , similar results are obtained for any other value of and , with the area describing two - mode attacks vanishing into a point as . in that case , the only possible attack is single - mode and , according to eq .( [ const ] ) , we have .in this section , we analyze the key rate and its critical point for the switching protocol . we arrive at the same conclusion obtained for the no - switching protocol . in this case bobperforms homodyne detections on the received signals modes , by randomly switching the quadratures measured . within each block , bob can decide to apply the same homodyne detection on both modes , or measure on two distinct bases ( and ) .here we assume the former case .when bob detects both his modes in quadrature , we have}\nonumber\\ & \times\left ( \begin{array } [ c]{cccc}% 2g^{2}(1-\tau)^{2}-\tilde{\lambda}^{2 } & & g(1-\tau)\tilde{\lambda } & \\ & 1 & & \\ g(1-\tau)\tilde{\lambda } & & \tilde{\lambda}^{2 } & \\ & & & 1 \end{array } \right ) , \end{aligned}\ ] ] where .when bob detects both his modes in quadrature , we obtain}\nonumber\\ & \times\left ( \begin{array } [ c]{cccc}% 1 & & & \\ & 2g^{\prime2}(1-\tau)^{2}-\tilde{\lambda}^{2 } & & g^{\prime}(1-\tau ) \tilde{\lambda}\\ & & 1 & \\ & g^{\prime}(1-\tau)\tilde{\lambda } & & \tilde{\lambda}^{2}% \end{array } \right ) .\end{aligned}\ ] ] in the first case ( -detection ) , for large , we obtain the following symplectic spectrum which depends on the correlation parameter . in the second case ( -detection ), we have the following symplectic eigenvalues depending on correlation parameter . from eqs .( [ spectr - cond1 ] ) and ( [ spectr - cond2 ] ) , we compute two distinct conditional von neumann entropies, and to the conditional von neumann entropy , we average over these two cases , getting the expression using the total von neumann entropy of eq .( [ vonqqq ] ) , the conditional entropy of eq .( [ cond - vonneumannhethom ] ) , and the asymptotic expression of the mutual information for the switching protocol we compute the following expression of the key - rate against gaussian two - mode coherent attacks}-\frac{h\left ( \nu_{+}\right ) + h\left ( \nu _ { -}\right ) } { 2 } , \label{formula - rate - het - hom}%\ ] ] from which we can recover the standard case of single - mode collective attack setting . for the sake of completeness, here we also discuss the case where bob applies different homodyne detections ( one in , the other in ) , within each two - mode block . in this case onefinds a lower key rate because measurements have the effect of de - correlating modes and . 
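for the switching protocol the conditioning step is a homodyne rather than a heterodyne detection, and the corresponding gaussian rule is sketched below. it assumes ideal homodyne detection of a single quadrature in vacuum-noise units; repeated calls handle the sequential measurements of modes b1 and b2, and the example only checks the familiar conditional squeezing of a TMSV arm.

```python
import numpy as np

def homodyne_condition(V, idx_keep, q_meas):
    """gaussian conditioning under ideal homodyne detection of one quadrature
    (index q_meas in the ordering x1,p1,x2,p2,...):
    V_cond = A - (1/V_qq) * c c^T, where c collects the correlations between
    the kept quadratures and the measured one. a sketch of the textbook rule."""
    A = V[np.ix_(idx_keep, idx_keep)]
    c = V[np.ix_(idx_keep, [q_meas])]
    return A - (c @ c.T) / V[q_meas, q_meas]

# example: homodyning x of one arm of a TMSV state (variance mu) reduces the
# conditional x-variance of the other arm to mu - (mu^2 - 1)/mu = 1/mu,
# while the p-variance is left untouched
mu = 5.0
Z = np.diag([1.0, -1.0])
V_tmsv = np.block([[mu * np.eye(2), np.sqrt(mu**2 - 1) * Z],
                   [np.sqrt(mu**2 - 1) * Z, mu * np.eye(2)]])
print(homodyne_condition(V_tmsv, [0, 1], 2))     # diag(1/mu, mu)
```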
as a result ,any dependency on is cancelled from the conditional cm , and for one finds the following doubly degenerate eigenvalues after some algebra we obtain the following non - optimal key rate}% -\frac{h\left ( \nu_{+}\right ) + h\left ( \nu_{-}\right ) } { 2},\ ] ] which is not interesting from a practical point of view , because the parties can always choose to group instances of the protocol with the same quadrature homodyned .we then compute the first derivatives of the rate in eq .( [ formula - rate - het - hom ] ) , with respect to the correlations parameters and , obtaining the following \label{dratedg}\\ \partial_{g^{\prime}}\tilde{r } & = \frac{\zeta^{\prime}}{4}\left [ f(\nu _ { -}^{-1})+\frac{g^{\prime}}{(\omega+g^{\prime})\nu_{-}}-\frac{\nu_{+}\nu _ { -}f(\nu_{+}^{-1})}{(\omega+g^{\prime})(\omega - g)}\right ] , \label{dratedgp}%\end{aligned}\ ] ] where the function has been defined in eq .( [ func - f ] ) , and the symplectic eigenvalues are given in eqs .( [ spectrumtot1 ] ) and ( [ spectrumtot2 ] ) , while we defined and as follows note that these derivatives are properly defined within the constraints of eq .( [ const ] ) , that identify a sector of ( )- plane for which the conditions and must hold .in fact , the situation for which one has can only be obtained in , i.e. , if the attack is collective .solving the system of equations one finds that is a critical point , and that it is also unique for any and and fulfilling eq .( [ const ] ) .the second - order derivatives with respect , evaluated in , is given by \nonumber\\ & + \frac{1}{8}\left [ \frac{\sqrt{\kappa_{+}}f(\nu_{+}^{-1})}{\omega+g}% -\frac{\sqrt{\kappa_{-}}f(\nu_{-}^{-1})}{\omega - g}\right]\end{aligned}\ ] ] with the coefficients defined as follows the derivative with respect to and the mixed derivatives are given by the expressions \nonumber\\ & + \frac{1}{8}\left [ \frac{f(\nu_{+}^{-1})}{\sqrt{\kappa_{+}}(\omega + g^{\prime})}-\frac{f(\nu_{-}^{-1})}{\sqrt{\kappa_{-}}(\omega - g^{\prime}% ) } \right ] \\\partial_{g , g^{\prime}}^{2}\tilde{r } & = \partial_{g^{\prime},g}^{2}\tilde { r}\nonumber\\ & = \frac{1}{8}\left [ \frac{1}{\nu_{+}^{2}-1}-\frac{1}{\nu_{-}^{2}-1}% + \frac{f(\nu_{+}^{-1})}{\nu_{+}}-\frac{f(\nu_{-}^{-1})}{\nu_{-}}\right ] , \end{aligned}\ ] ] which evaluated in , give we then compute the determinant of the hessian in , obtaining the following expression which is always positive because we have also checked that in the limit of .finally we have verified that the second - order derivative of eq .( [ d2rp0 ] ) is positive in .in fact , for , one always has therefore , is a point of absolute minimum for the key - rate of eq .( [ formula - rate - het - hom ] ) , so that eq .( [ r - rcoll ] ) is also verified for the switching protocol .g. spedalieri _et al . _ ,spie security + defence 2015 conference on quantum information science and technology ( toulouse , france , 21 - 24 september 2015 ) .paper * 96480z*. see also https://arxiv.org/abs/1509.01113 ( 2015 ) .
|
we investigate the asymptotic security of one - way continuous variable quantum key distribution against gaussian two - mode coherent attacks . the one - way protocol is implemented by arranging the channel uses in two - mode blocks . by applying symmetric random permutations over these blocks , the security analysis is in fact reduced to study two - mode coherent attacks and , in particular , gaussian ones , due to the extremality of gaussian states . we explicitly show that the use of two - mode gaussian correlations by an eavesdropper leads to asymptotic secret key rates which are _ strictly _ larger than the rate obtained under standard single - mode gaussian attacks .
|
aliency detection , which is to predict where human looks in the image , has attracted a lot of research interests in recent years .it serves as an important pre - processing step in many problems such as image classification , image retargeting and object recognition . unlike rgb saliency detection which receives much research attention , there are not many exploration on rgbd cases .the recently emerged sensing technologies , such as time - of - flight sensor and microsoft kinect , provides excellent ability and flexibility to capture rgbd image .detecting rgbd saliency becomes essential for many applications such as 3d content surveillance , retrieval , and image recognition . in this paper , we focus on how to integrate rgb and the additional depth information for rgbd saliency detection .+ [ fig : intro ] according to how saliency is defined , saliency detection methods can be classified into two categories : top - down approach and bottom - up approach .top - down saliency detection is task - dependent that incorporates high level features to locate the salient object . on the other hand , bottom - up approachis task - free , and it utilizes low level features that are biologically motivated to estimate salient regions .most of the existing bottom - up saliency detection methods focus on designing different low - level cues to represent salient objects .the saliency maps of these low - level features are then fused to become a master saliency map .as human attention are preferentially attracted by the high contrast regions with their surrounding , contrast - based features ( like the color , edge orientation or texture contrasts ) make a crucial role to derive the salient objects .background and color compactness priors consider salient object in different perspectives .the first one leverages the fact that most of the salient objects are far from image boundaries , the latter one utilizes the color compactness of the salient object .in addition to rgb information , depth has been shown to be one of the practical cue to extract saliency .most existing approaches for 3d saliency detection either treat the depth map as an indicator to weight the rgb saliency map or consider depth cues as an independent image channel . notwithstandingthe demonstrated success of these features , whether these features complement to each other remains a question .the interaction mechanism of different saliency features is not well explored , and it is not clear how to integrate 2d saliency features with depth - induced saliency feature in a better way .linearly combining the saliency maps produced by these features can not guarantee better result ( as shown in figure [ fig : intro : g ] ) .some other more complex combination algorithms have been proposed in ._ propose a multi - layer cellular automata ( mca , a bayesian framework ) to merge different saliency maps by taking advantage of the superiority of each saliency detection methods .recently , several heuristic algorithms are designed to combine the 2d related saliency maps and depth - induced saliency map . however , as restricted by the computed saliency values , these saliency map combination methods are not able to correct wrongly estimated salient regions .for example in figure [ fig : intro ] , heuristic based algorithms ( figure [ fig : intro : d ] to [ fig : intro : f ] ) can not detect the salient object correctly . 
adopting these saliency maps for further fusion , neither simple linear fusion ( figure [ fig : intro : g ] ) nor mca integration ( figure [ fig : intro : h ] ) are able to recover the salient object .we wonder whether a good integration can address this problem by further adopting convolutional neural network technique to train a saliency map integration model . the resulted image shown in figure [ fig : intro : i ]indicates that saliency map integration is hugely influenced by the quality of the input saliency maps . based on the these observations ,we take one step back to handle more raw and flexible saliency features . in this paper, we propose a deep fusion framework for rgbd saliency detection .the proposed method takes advantage of the representation learning power of cnn to extract the hyper - feature by fusing different hand - designed saliency features to detect salient object ( as shown in figure [ fig : intro : j ] ) .we first compute several feature vectors from original rgbd image , which include local and global contrast , background prior , and color compactness .we then propose a cnn architecture to incorporate these regional feature vectors into a more representative and unified features .compared with feeding raw image pixels , these extracted saliency features are well - designed and they can guide the learning of cnn towards saliency - optimized more effectively . asthe resulted saliency map may suffer from local inconsistency and noisy false positive , we further integrate a laplacian propagation framework with the proposed cnn .this approach propagates high confidence saliency to the other regions by taking account of the color and depth consistency and the intrinsic structure of the input image , which is able to remove noisy values and produce smooth saliency map .the laplacian propagation is solved with fast convergence by the adoption of conjugate gradient and preconditioner .experimental evaluations demonstrate that , once our deep fusion framework are properly trained , it generalizes well to different datasets without any additional training and outperforms the state - of - the - art approaches .the main contributions of this paper are summarized as follows .we propose a simple yet effective deep learning model to explore the interaction mechanism of rgb and depth - induced saliency features for rgbd saliency detection .this deep model is able to generate representative and discriminative hyper - features automatically rather than hand - designing heuristical features for saliency .we adopt laplacian propagation to refine the resulted saliency map and solve it with fast convergence .different from crf model , our laplacian propagation not only considers the spatial consistency but also exploits the intrinsic structure of the input image .extensive experiments further demonstrate that this proposed laplacian propagation is able to refine the saliency maps of existing approaches , which can be widely adopted as a post processing step .we investigate the limitations of saliency map integration , and demonstrate that simple features fusion are able to obtain superior performance .in this section , we give a brief survey and review of rgb and rgbd saliency detection methods , respectively .comprehensive literature reviews on these saliency detection methods can be found in .* rgb saliency detection : * as suggested by the studies of cognitive science , bottom - up saliency is driven by low - level stimulus features . 
this concept is also adopted in computer vision to model saliency .contrast - based cues , especially color contrast , are the most widely adopted features in previous works .these contrast - based methods can be roughly classified into two categories : local and global approaches .local method calculates color , edge orientation or texture contrast of a pixel / region with respect to a local window to measure saliency . in , they develop an early local based visual saliency detection method by computing center surrounding differences across multi - scale image features to estimate saliency ._ propose to apply sparse representation on local image patches .however , based only on local contrast , these methods may highlight the boundaries of salient object and be sensitive to high frequency content .in contrast to local approach , the global approach measures salient region by estimating the contrast over the entire image ._ model saliency by computing color difference to the mean image color .et al . _ propose a histogram - based global contrast saliency method by considering the spatial weighted coherence .although these global methods achieve superior performances , they may suffer from distractions when background shares similar color to the salient object .background and color compactness priors are proposed as a complement to contrast - based methods .these methods are built on strong assumptions , which may invalid in some scenarios . as each feature has different strengths ,some works focus on designing the integration mechanism for different saliency features .et al . _ use crf to integrate three different features from both local and global point of views .et al . _ propose a hierarchical framework to integrate saliency maps in different scales , which can handle small high contrast regions well . unlike these methods that directly combine the saliency maps obtained from different saliency cues , the proposed method records low - level saliency feature in vector forms andjointly learns the interaction mechanism to become a hyper - feature with cnn .similar to the proposed method , cnn has been adopted in some other works to extract hierarchical feature representations for detecting salient regions .in contrast to most of these deep networks that take raw image pixels as input , the proposed method aims at designing a unified cnn framework to learn the interaction mechanism of different saliency cues .* rgbd saliency detection : * unlike rgb saliency detection , rgbd saliency receives less research attention .et al . _ propose an early computational model on depth - based attention by measuring disparity , flow and motion .similar to color contrast , zhang _ et al ._ design a stereoscopic visual attention algorithm based on depth and motion contrast for 3d video .et al . _ estimate saliency regions by fusing the saliency maps produced by appearance and depth cues independently .these methods either treat the depth map as an indicator to weight the rgb saliency map or consider depth map as an independent image channel for saliency detection . on the other hand , peng _ et al . _ propose a multi - stage rgbd model to combine both depth and appearance cues to detect saliency .et al . _ integrate the normalized depth prior and the surface orientation prior with rgb saliency cues directly for the rgbd saliency detection .these methods combine the depth - induced saliency map with rgb saliency map either directly or in a hierarchy way to calculate the final rgbd saliency map . 
however , these saliency map level integration is not optimal as it is restricted by the determined saliency values . on the contrary ,we incorporate different saliency cues and fuse them with cnn in feature level .as shown in figure [ fig : convnet ] , the proposed deep fusion framework for rgbd salient object detection composes of three modules .the first module generates various saliency feature vectors for each superpixel region .the second module is to extract hyper - feature representation from the obtained saliency feature vectors .the third module is the laplacian propagation framework which helps to detect a spatially consistent saliency map .given an image , we aim to represent saliency by some demonstrated effective saliency features .figure [ fig : saliency_cue ] gives an illustration on the proposed saliency feature extraction .we first segment the image into _n _ superpixels using slic method . given a rgb image , we denote the segmented _n _ regions as . for each superpixel , we denote the calculated saliency features as a vector . in the following ,we will take region ( the region that marked in orange in figure [ fig : saliency_cue ] ) as an example to show how we calculate different saliency feature vectors . different from the classical saliency detection methods that directly calculate the saliency values for each superpixel , we record the saliency features for each image region and no further operationis performed to make saliency features as raw as possible . for region , there are seven types of feature vectors : , where and represent color and depth information respectively , indicates that saliency is determined in the local scope and indicates the global scope , and represent the background and color compactness priors respectively .more specifically , the color based feature vectors are recorded in the following formula , and the depth based feature vectors are defined similarly .we compute the color - based features in color space .the local color contrast is calculated as : where is the total number of pixels in region , and a larger superpixel contributes more to the saliency . and are the mean color values of the region and . is used to control the spatial influential distance .this weight is defined as , and and are the centers of corresponding regions . in our experiment , the parameter = 0.15 is set to make the neighbors have higher influence on the calculated contrast values , while the influence of other regions are negligible .similar to the local color contrast vector , the global color contrast vector is defined as , the difference between the global contrast and local contrast lies in the spatial weight , where in the global contrast the parameter is set to 0.45 to cover the entire image . likewise , the depth contrast between region and region can be calculated as in eq .[ depth_local ] and eq .[ depth_global ] . where and are the mean depth values of the region and respectively . generally speaking , the colors of an object are compacted together whereas the colors belong to the background are widely distributed in the entire image . the element in the color compactness based feature vector is calculated as following . where the function is used to calculate the similarity of two colors and , and is defined as . 
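a compact sketch of the contrast-based feature vectors just described may help fix ideas. the spatial scales 0.15 (local) and 0.45 (global) follow the parameter choices stated in the text, while the exact per-region normalization by superpixel size is an assumption of this sketch; depth contrast vectors are obtained the same way by passing mean depths instead of colors.

```python
import numpy as np

def contrast_feature(values, centers, sizes, i, sigma):
    """contrast-based saliency feature *vector* for superpixel i (a sketch).

    values  : (N, d) mean color (e.g. Lab) or (N, 1) mean depth per superpixel
    centers : (N, 2) normalized centroids of the superpixels
    sizes   : (N,)   pixel counts of the superpixels
    sigma   : spatial scale (~0.15 for the local feature, ~0.45 for the global)
    entry j holds the spatially weighted difference between region i and j;
    the size normalization used here is an assumption, not the paper's exact one.
    """
    diff = np.linalg.norm(values - values[i], axis=1)
    spatial_w = np.exp(-np.sum((centers - centers[i]) ** 2, axis=1) / sigma ** 2)
    feature = (sizes / sizes.sum()) * spatial_w * diff
    feature[i] = 0.0
    return feature

# usage sketch for one superpixel i:
# f_lc = contrast_feature(lab_means,            cents, sizes, i, sigma=0.15)
# f_gc = contrast_feature(lab_means,            cents, sizes, i, sigma=0.45)
# f_ld = contrast_feature(depth_means[:, None], cents, sizes, i, sigma=0.15)
# f_gd = contrast_feature(depth_means[:, None], cents, sizes, i, sigma=0.45)
```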
defines the weighted mean position of color .the parameter is set to 20 in our implementation .we omit the depth compactness prior in our method since the depth map contains only dozens of depth levels and their spatial distributions can be very random .the experiment results also show that whether adding the depth compactness or not does not affect the final results too much . beside color compactnessprior , we further introduce the background prior , which leverages the fact that salient object is less possible to be arranged to close to the image boundaries .we first extract regions along the image boundary as pseudo - background regions .then the color or depth contrast to the pseudo - background regions will be calculated similar to eq .[ color_contrast_global ] and eq .[ depth_global ] . in our experiment ,the number of superpixels is set to 1024 and is set to 160 .given the obtained saliency feature vectors , we then propose a cnn architecture to automatically incorporate them into unified and representative features .we formulate saliency detection as a binary logistic regression problem , which takes a patch as input and output the probabilities of two classes .our cnn takes an input of size , and generates a prediction as saliency output . for each superpixel , all the seven saliency feature vectors are integrated into a multiple channel image as follows : \(1 ) reshape the length vector ( , , , and ) to size to form the first five channels , respectively ; \(2 ) perform zero padding to the length vector and to length and then concatenate and reshape them into size to form the sixth channel .as shown in figure [ fig : convnet ] , our network consists of three convolutional layers followed by a fully connected layer and a logistic regression output layer with sigmoid nonlinear function .following the first and second convolutional layers , we add an average pooling layer for translation invariance .we adopt the sigmoid function as the nonlinear mapping function for the three convolutional layers , while rectified linear unites ( relus ) is applied in the last two layers .dropout procedure is applied after the first fully connected layers to avoid overfitting . for simplification ,we use and to indicate the convolutional layer and the fully connected layer with output and kernel size . indicates the pooling layer with type and kernel size . and represent the sigmoid function and relus .then the architecture of our cnn can be described as .this proposed cnn was trained with back - propagation using stochastic gradient descent ( sgd ) .as saliency values are estimated for each superpixel individually , the proposed cnn in section [ title_3_2 ] may fail to retain the spatial consistency and lead to noisy output .figure [ fig : init_refine : c ] shows two examples of the saliency maps produced by our cnn for rgbd image .it indicates that our cnn omits some salient regions and wrongly detects some background regions as salient . despite these misdetected regions , most of the regions with high probability to be salient are correct , robust , and reliable .the same situation also occurs for non - salient probability in the background ( figure [ fig : init_refine : d ] ) . 
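to make the fusion network more tangible, the following is a hedged pytorch sketch of the architecture described above: three convolutional layers with sigmoid activations, average pooling after the first two, a relu fully connected layer with dropout, and a sigmoid (logistic-regression) output. the input resolution of 32x32x6 and the channel/kernel sizes are placeholders chosen only so the sketch runs end to end; they are not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SaliencyFusionNet(nn.Module):
    """sketch of the feature-fusion cnn; layer counts and nonlinearity
    placement follow the text, sizes are assumptions of this sketch."""
    def __init__(self, in_channels=6, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5), nn.Sigmoid(),  # 32 -> 28
            nn.AvgPool2d(2),                                          # 28 -> 14
            nn.Conv2d(32, 32, kernel_size=5), nn.Sigmoid(),           # 14 -> 10
            nn.AvgPool2d(2),                                          # 10 -> 5
            nn.Conv2d(32, 64, kernel_size=3), nn.Sigmoid(),           # 5 -> 3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 3, 128), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(128, 1), nn.Sigmoid(),      # probability of being salient
        )

    def forward(self, x):                         # x: (batch, 6, 32, 32)
        return self.classifier(self.features(x))

# training would follow the text: SGD with momentum 0.9, weight decay 5e-4,
# and a binary cross-entropy loss on per-superpixel salient / non-salient labels
model = SaliencyFusionNet()
print(model(torch.zeros(4, 6, 32, 32)).shape)     # torch.Size([4, 1])
```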
as a consequence ,these high confident regions are used as guidance , and they are employed in a laplacian propagation framework to obtain a more spatially consistent saliency map .the key of the laplacian propagation lies in propagating the saliency from the regions with high probability to those ambiguous regions by considering two criteria : ( 1 ) neighboring regions are more likely to have similar saliency values ; and ( 2 ) regions within the same manifold are more likely to have similar saliency values .+ + [ fig : init_refine ] given a set of superpixels of an input image and a label set , we denote the salient and non - salient probability generated by the proposed cnn as and .the superpixels in are labeled as 1 if , or as 2 if .the goal of laplacian propagation is to predict the labels of the remaining regions .let ^t} ] with if region is labeled as , otherwise .we further adopt the color and depth information to form the affinity matrix {n \times n}}$ ] : where the first term defines the color distance of superpixel region and , and the second term defines the relative depth distance .most of the elements of the affinity matrix are zero except for those neighbouring and pairs . in order to better leverage the local smoothness ,we use a two - hierarchy neighboring connection model , i.e. , each region is not only connected to its neighboring regions but also connected to the regions that share the same boundaries with its neighboring regions .we set to avoid self - reinforcement .then the laplacian propagation can be formulated to solve the following optimization functions : where parameter controls the balance between the smoothness constraint ( the first term ) and the fitting constraint ( the second term ) . is the element of the degree matrix derived from affinity matrix , and .this designed smoothness constraint not only considers local smoothness but also confines the regions within the same manifold to have the same label by constructing a smooth classifying function .this classifying function can change sufficiently slow along the coherent structure revealed by the original image .this optimization function eq .[ sal_optimization ] can be solved using an iteration algorithm as shown in , or it can be reformulated into a linear system . for efficiency , we set the derivative of the to zero and the optimal solution of eq .[ sal_optimization ] can be obtained by solving the following linear equation : where is an identity matrix and .we further adopt conjugate gradient and preconditioner to solve this linear equation for fast convergence . after propagating from the high probability salient and non - salient regions ,the final saliency map is normalized to [ 0,1 ] and it is denoted as .two examples of the proposed propagation are shown in figure [ fig : init_refine ] . those wrongly estimated regions in figure [ fig : init_refine : b ] and figure [ fig : init_refine : c ]are corrected in the final saliency maps produced by the laplacian propagation . in our implementation , parameters and adaptively determined by otsu method .+ + + + + + + + +in this section , we evaluate the proposed method on three datasets , nlpr rgbd salient dataset , njuds2000 stereo datast , and lfsd dataset . * nlpr dataset . * the nlpr rgbd salient dataset contains 1000 images captured by microsoft kinect in different indoor and outdoor scenarios .we split this dataset into two part randomly : 750 for training and 250 for testing . *njuds2000 dataset . 
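the linear system behind the laplacian propagation can be solved exactly as described, with conjugate gradient and a simple preconditioner. the sketch below uses the standard normalized-graph form f* = (I - alpha*S)^{-1} Y with S = D^{-1/2} W D^{-1/2}; the value of alpha, the jacobi preconditioner, and the choice to run one propagation from the salient seeds and one from the background seeds before taking their difference are assumptions of this sketch, since the paper's own normalization is determined adaptively.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def laplacian_propagation(W, y_salient, y_background, alpha=0.99):
    """propagate high-confidence salient / non-salient seeds over the
    superpixel graph (a sketch of the standard normalized-laplacian form).

    W            : (N, N) sparse symmetric affinity matrix (color + depth)
    y_salient    : (N,) 1 for confidently salient superpixels, else 0
    y_background : (N,) 1 for confidently non-salient superpixels, else 0
    returns a saliency score in [0, 1] per superpixel.
    """
    W = sp.csr_matrix(W)
    d = np.asarray(W.sum(axis=1)).ravel()
    Dinv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = Dinv_sqrt @ W @ Dinv_sqrt
    A = sp.eye(W.shape[0]) - alpha * S                 # system matrix

    M = sp.diags(1.0 / A.diagonal())                   # jacobi preconditioner

    f_sal, _ = cg(A, y_salient.astype(float), M=M)
    f_bg, _ = cg(A, y_background.astype(float), M=M)

    scores = f_sal - f_bg                              # salient minus background support
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
```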
*the njuds2000 dataset contains 2000 stereo images , as well as the corresponding depth maps and manually labeled groundtruth .the depth maps are generated using an optical flow method .we also split this dataset into two part randomly : 1000 for training and 1000 for testing .* lfsd dataset . *the lfsd dataset contains 100 images with depth information and manually labeled groundtruth .the depth information are captured with lytro light field camera .all the images in this dataset are for testing. * evaluation metrics . *we compute the precision - recall ( pr ) curve , mean of average precision and recall , and f - measure score to evaluate the performance of different saliency detection methods .the pr curve indicates the mean precision and recall of the saliency map at different thresholds .the f - measure is defined as , where is set to 0.3 .we use the randomly sampled 750 training images of nlpr dataset and the randomly sampled 1000 training images of njuds2000 dataset to train our deep learning framework .these randomly selected training dateset covers more than 1000 kinds of common objects under different circumstances .the remaining nlpr , njuds2000 , and lfsd datesets are used to verify the generalization of the proposed method .the proposed method is implemented using matlab .we set the momentum in our network to 0.9 and the weight decay to be 0.0005 .the learning rate of our network is gradually decreased from 1 to 0.001 . due to the `` data - hungry '' nature of cnn ,the existing training data is insufficient for training , in addition to the dropout procedure , we also employed data augmentation to enrich our training dataset . similar to , we adopted two different image augmentation operations , the first one consists of image translations and horizontal flipping and the other is to alter the intensities of the rgb channels .these data augmentations greatly enlarge our training dataset and make it possible for us to train the proposed cnn without overfitting .it took around days for our training to converge .ours + nlpr test set & 0.5141 & 0.5634 & 0.6049 & 0.6335 & 0.6519 & 0.5448 & 0.7184 & * 0.7823 * + njud test set & 0.6096 & 0.6133 & 0.6156 & 0.6791 & 0.6381 & 0.6952 & 0.7246 & * 0.7874 * + lfsd dataset & 0.6982 & 0.7311 & 0.7029 & 0.7384 & 0.7041 & 0.7567 & 0.7877 & * 0.8439 * + in this section , we compare our method with four state - of - the - art methods designed for rgb image ( s - cnn , bsca , mb+ , and legs ) , and three rgbd saliency methods designed specially for rgbd image ( lmh , acsd , and gp ) .the results of these different methods are either provided by authors or achieved using the publicly available source codes .the qualitative comparisons of different methods on different scenes are shown in figure [ fig : saliency2 ] . as can be seen in the first and fifth rows of figure [ fig : saliency2 ], the salient object has a high color contrast with the background , as thus rgb saliency methods are able to detect salient object correctly .however , when the salient object shares similar color with the background , e.g. , sixth , seventh , and eighth rows in figure [ fig : saliency2 ] , it is difficult for existing rgb models to extract saliency . 
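the evaluation metrics listed above are standard and easy to reproduce; the sketch below computes the precision-recall curve and the F-measure with beta^2 = 0.3 as stated in the text. the adaptive threshold used for the single F-measure score (twice the mean saliency, capped at 1) is a common convention assumed here, since the exact thresholding rule is not spelled out.

```python
import numpy as np

def evaluate_saliency(sal_map, gt, beta2=0.3, n_thresh=255):
    """precision-recall curve and F-measure of a saliency map against a binary
    ground-truth mask: F = (1 + beta^2) P R / (beta^2 P + R), beta^2 = 0.3."""
    sal = (sal_map - sal_map.min()) / (sal_map.max() - sal_map.min() + 1e-12)
    gt = gt.astype(bool)
    precisions, recalls = [], []
    for t in np.linspace(0.0, 1.0, n_thresh):
        pred = sal >= t
        tp = np.logical_and(pred, gt).sum()
        precisions.append(tp / (pred.sum() + 1e-12))
        recalls.append(tp / (gt.sum() + 1e-12))

    # single F-measure score at an adaptive threshold (assumed convention)
    pred = sal >= min(2.0 * sal.mean(), 1.0)
    tp = np.logical_and(pred, gt).sum()
    p = tp / (pred.sum() + 1e-12)
    r = tp / (gt.sum() + 1e-12)
    f = (1.0 + beta2) * p * r / (beta2 * p + r + 1e-12)
    return np.array(precisions), np.array(recalls), f
```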
with the help of depth information, salient objects can be easily detected by the proposed rgbd method. figure [ fig : saliency2 ] also shows that the proposed method consistently outperforms all the other rgbd saliency methods (lmh, acsd, and gp). the quantitative comparisons on the nlpr, njuds2000, and lfsd datasets are shown in figure [ fig : saliency_qut ] and table [ table : belta ]. figure [ fig : saliency_qut ] and table [ table : belta ] show that the proposed method performs favorably against the existing algorithms, with higher precision, recall, and f-measure scores on all three datasets. the nlpr dataset is challenging, as most of the salient objects share a similar color with the background. as a consequence, rgb saliency methods perform relatively worse than rgbd saliency methods in terms of precision. given the accurate depth maps of the nlpr dataset, the lmh and gp methods perform well in both precision and recall. however, they do not perform well when tested on the njuds2000 and lfsd datasets. because these two datasets provide only rough depth information (calculated from stereo images or captured with a light field camera), lmh and gp can only detect a small fraction of the salient objects (high precision but low recall). acsd works worse when the salient object lies in the same plane as the background, e.g., the third row in figure [ fig : saliency2 ], which is also reflected in its poor quantitative results on the nlpr dataset. both qualitative and quantitative results show that the proposed method performs better in terms of accuracy and robustness than the compared methods with rgbd input images.
[ table : anlysis_nlpr ] f-measure on the nlpr test set, without and with the proposed laplacian propagation (lp). columns: fundamental fusion (lf, crf, mca, cnn-f), sophisticated fusion (lf, crf, mca, cnn-f), lmh, gp, ours.
without lp & 0.393 & 0.2991 & 0.3713 & 0.4667 & 0.7020 & 0.698 & 0.7017 & 0.6921 & 0.6519 & 0.718 & -
with lp & 0.536 & 0.398 & 0.486 & 0.597 & 0.711 & 0.739 & 0.7623 & 0.737 & 0.665 & 0.7111 & 0.7823
[ table : anlysis ] f-measure on the njuds2000 test set, same column layout as above.
without lp & 0.437 & 0.450 & 0.458 & 0.644 & 0.675 & 0.671 & 0.7376 & 0.7319 & 0.6381 & 0.7246 & -
with lp & 0.605 & 0.609 & 0.632 & 0.731 & 0.698 & 0.741 & 0.742 & 0.7423 & 0.6810 & 0.7179 & 0.7874
[ table : anlysis_lfsd ] f-measure on the lfsd dataset, same column layout as above.
without lp & 0.461 & 0.436 & 0.558 & 0.672 & 0.723 & 0.771 & 0.8071 & 0.706 & 0.704 & 0.7877 & 0.8157
with lp & 0.616 & 0.693 & 0.654 & 0.757 & 0.762 & 0.792 & 0.802 & 0.800 & 0.718 & 0.7830 & 0.8439
* saliency maps vs. features. * here we conduct a series of experiments to analyze the flexibility of the proposed framework and the effectiveness of the laplacian propagation. apart from the previous heuristic saliency map merging algorithms, we further compare our method with four other saliency map integration methods on the three test datasets to show the flexibility of fusing different cues at the feature level. these four integration methods are direct linear fusion (lf), fusion in a crf, the recent multi-layer cellular automata (mca) integration, and a cnn-based fusion (denoted cnn-f).
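for reference, the simplest of these baselines, direct linear fusion, just averages the candidate maps pixel-wise; a minimal sketch is given below (the equal weights are an assumption for illustration, not necessarily the weights used in the compared baseline).

```python
import numpy as np

def linear_fusion(saliency_maps, weights=None):
    """Fuse several saliency maps (each h x w, values in [0, 1])
    by a per-pixel weighted average."""
    maps = np.stack(saliency_maps, axis=0).astype(float)
    if weights is None:
        weights = np.ones(len(saliency_maps)) / len(saliency_maps)  # assumed equal weights
    fused = np.tensordot(weights, maps, axes=1)
    # renormalize to [0, 1] so the fused map is comparable to its inputs
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
```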
to investigate the importance of saliency map quality , we test these saliency map merging methods on two set of inputs .the first set is from seven saliency maps computed by widely used features ( similar to those seven saliency feature vectors computed in section [ title_3_1 ] ) , and the second set is from more representative sophisticated saliency maps ( obtained using three state - of - the - art rgbd saliency detection methods ) .the original crf fusion framework in is utilized for merging three color based saliency maps . in our implementation, we retrain this crf framework for merging the seven adopted fundamental saliency maps and three sophisticated saliency maps respectively . for cnn - f , we utilize the same cnn architecture as shown in fig .[ fig : saliency_cue ] to perform the cnn based saliency map fusion , i.e. , the same convolutional layers and fully connected layers except the input layer .more specifically , we formulate the saliency map merging as a binary logistic regression problem , which takes several saliency map patches as input ( size for fundamental saliency map merging and for sophisticated saliency map merging ) , and output the probabilities of the pixel being salient and non - salient .cnn - f is trained in patch - wise manner .we collect training samples by cropping patches of size from each saliency map using sliding window .we label a patch as salient if the central pixel is salient or 75% pixels in this patch are salient , otherwise it is labeled as non - salient .this cnn - f is trained on the cropped patches of the nlpr training set and njuds2000 training set .the relevant comparison results of our proposed methods with these saliency map merging methods are shown in figure [ fig : sal_map ] , table [ table : anlysis_nlpr ] , table [ table : anlysis ] , and table [ table : anlysis_lfsd ] .`` fundamental fusion '' represents the results of four merging methods performed on seven fundamental saliency maps .`` heuristic fusion '' gives the results of two state - of - the - art heuristic saliency map merging methods , while `` sophisticated fusion '' gives the results of four merging methods performed on three sophisticated saliency maps ( calculated from the existing state - of - the - art rgbd saliency detection methods lmh , acsd , and gp ) .for `` fundamental fusion '' in table [ table : anlysis_lfsd ] , all the existing saliency map merging methods ( including deep learning framework ) can not achieve satisfactory performance .even though feeding with the state - of - the - art sophisticated saliency maps , these saliency merging methods still perform worse than our saliency feature fusion without lp framework ( 0.8071 vs 0.8157 ) , which further validates the flexibility of our feature level fusion .note that 0.8157 are obtained from our initial saliency feature fusion network , which performs only on the pixel level and without considering spatial consistency .our model achieves superior performance even though the input features are very simple ( similar to the features used in `` fundamental fusion '' ) .compared to those methods using similar features ( in `` fundamental fusion '' ) , we can observe that fusing features is much more flexible than fusing saliency map .+ + + + + + * analysis of laplacian propagation .* we then evaluate the effective of the proposed laplacian propagation , and the optimized results of the existing methods using laplacian propagation .the f - measure scores of our rgbd method without laplacian propagation on three 
test dataset are shown in blue in table [ table : anlysis_nlpr ] , table [ table : anlysis ] , and table [ table : anlysis_lfsd ] .these learned hyper - features still outperform the state - of - the - art approaches , while with lp we achieve almost 0.79 , 0.79 , and 0.84 f - measures .figure [ fig : sal_lp ] shows some examples of the optimized results of the existing methods ( lmh , acsd , and gp ) using laplacian propagation .these quantitative and qualitative experimental evaluations further demonstrate that the proposed laplacian propagation is able to refine the saliency maps of existing methods , which can be widely adopted as a post processing step. + + + + + + + * failure cases .* figure [ fig : saliencyrgbvsrgbd ] gives more visual results and some failure cases of our proposed method on rgbd images .compared with the these two pictures , we can find that depth information is more helpful when the salient objects have high depth contrast with background or lie closer to the camera .our method may fail when the salient object shares a very similar color and depth information with the background .in this paper , we propose a novel rgbd saliency detection method .our framework consists of three different modules .the first module generates various low level saliency feature vectors from the input image .the second module learns the interaction mechanism of rgb saliency features and depth - induced features and produces hyper - feature using cnn .feeding with these hand - designed features can guide the learning process of cnn towards saliency - optimized . in the third module, we integrate a laplacian propagation framework with cnn to obtain a spatially consistent saliency map .both quantitative and qualitative experiment results show that the fused rgbd hyper - feature outperforms all the state - of - the - art methods .we demonstrated that an optimized fusion leads to superior performance , and this flexible hyper - feature extraction framework can be further extended by including more saliency cues ( e.g. , flash cue ) .we aim to explore a deeper and more effective fusion network and extend it to other applications in our future work .d. banica and c. 
sminchisescu , `` second - order constrained parametric proposals and sequential search - based structured prediction for semantic segmentation in rgb - d images , '' in _ cvpr _ , 2015 , pp .35173526 .liangqiong qu received the b.s .degree in automation from central south university , china , in 2011 .she is currently a joint ph.d .student of university of chinese academy of sciences and city university of hong kong .her research interests include illumination modeling , image processing , saliency detection and deep learning .shengfeng he obtained his b.sc .degree and m.sc .degree from macau university of science and technology , and the ph.d degree from city university of hong kong .he is currently a research fellow at city university of hong kong .his research interests include computer vision , image processing , computer graphics , and deep learning .jiawei zhang received his beng degree in electronic information engineering from the university of science and technology of china in 2011 and master degree in institute of acoustics , chinese academy of sciences in 2014 .he is currently a computer science phd student in city university of hong kong .jiandong tian received his b.s .degree in automation at heilongjiang university , china , in 2005 .he received his ph.d .degree in pattern recognition and intelligent system at chinese academy of sciences , china , in 2011 .he is currently an asassociate professor in computer vision at state key laboratory of robotic , shenyang institute of automation , chinese academy of sciences .his research interests include pattern recognition and robot vision .yandong tang received b.s . anddegrees in the department of mathematics , shandong university in 1984 and 1987 . in 2002he received the doctor s degree in applied mathematics from the university of bremen , germany .currently he is a professor in shenyang institute of automation , chinese academy of sciences .his research interests include robot vision , pattern recognition and numerical computation .qingxiong yang received the b.e .degree in electronic engineering and information science from the university of science and technology of china , hefei , china , in 2004 , and the ph.d .degree in electrical and computer engineering from the university of illinois at urbana - champaign , champaign , il , usa , in 2010 .he is currently an assistant professor with the department of computer science , city university of hong kong , hong kong .his research interests reside in computer vision and computer graphics .he was a recipient of the best student paper award at the 2010 international workshop on multimedia signal processing and the best demo award at the 2007 ieee computer society conference on computer vision and pattern recognition .
|
numerous efforts have been made to design different low level saliency cues for rgbd saliency detection, such as color or depth contrast features, and background and color compactness priors. however, how these saliency cues interact with each other and how to incorporate these low level saliency cues effectively to generate a master saliency map remains a challenging problem. in this paper, we design a new convolutional neural network (cnn) to fuse different low level saliency cues into hierarchical features for automatically detecting salient objects in rgbd images. in contrast to the existing works that directly feed raw image pixels to the cnn, the proposed method takes advantage of the knowledge in traditional saliency detection by adopting various meaningful and well-designed saliency feature vectors as input. this can guide the training of the cnn towards detecting salient objects more effectively due to the reduced learning ambiguity. we then integrate a laplacian propagation framework with the learned cnn to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. extensive quantitative and qualitative experimental evaluations on three datasets demonstrate that the proposed method consistently outperforms state-of-the-art methods. index terms: rgbd saliency detection, convolutional neural network, laplacian propagation.
|
mathematical discovery has long been informed by experimentation and computation .understanding key examples is typically the first step towards formulating theorems and devising proofs .the computer age enables many more potentially intricate examples to be studied than ever before .sometimes , this leads to a fruitful dialog between theory and experiment .other times , this work leads serendipitously to new ideas and theorems .many examples are described in the books .we believe there is much greater potential for computer - aided experimentation than what has been achieved .this is particularly true for scientific discovery , using advanced computing to study subtle phenomena and amass evidence for the mathematical facts which will become the theorems of tomorrow .currently , much computer experimentation is ( often appropriately ) on a fairly small scale .a notable exception is odlyzko s study ( using cray supercomputers ) of the zeroes of riemann s -function on the critical line , which led to a rich data set that has stimulated much intriguing mathematics .a different large scale use of computers is the great internet mersenne prime search ( gimps ) , which searches for primes of the form for a prime , such as , and .volunteers run software on otherwise idle computers to search for mersenne primes .this project has found the largest known primes since it started in 1996 .daily , it uses over 60 gigahertz - years of computation .gimps is a mathematical analog of big - science physics .we feel there is more scope for such investigations in mathematics .we describe our use of a supercomputer to study a conjecture in the real schubert calculus , which may serve as a model for research in mathematics based on computational experiments . rather than odlyzko s cray supercomputers , or gimps s thousands of volunteers ,we use more pedestrian computer resources that are available to many mathematics departments together with modern ( and free ) software tools such as perl , mysql , and php , as well as freely available mathematical software such as singular , macaulay 2 , and sage .this is a methods paper whose purpose is to explain the framework we developed .we do not present mathematical conclusions from this ongoing computational experiment , but instead explain how you , the reader , can take advantage of readily available yet often underutilized computer resources to employ in your mathematical research .to get an idea of the available resources , in its first six months of data acquisition , this experiment used over 350 gigahertz - years of computing primarily on 191 computers in instructional labs that are maintained by the department of mathematics at texas a&m university . 
when the labs are not in use , the machines become a cluster computing resource that provides over 500 computational cores for a peak performance of 1.971 teraflops with 296 gb of total memory .this experiment uses a supercomputer moonlighting from its day job of calculus instruction .the authors of this note include johnson , who configured the labs as a beowulf cluster , enabling their use for this computation .our software was written and maintained by the remaining authors , who include current and former postdocs and students working with sottile .we are organized into a vertically - integrated team where the senior members work with and mentor the junior members .the overall software design and much of its implementation is due to hillar .a key feature of this experiment is its robustness it can and has recovered from many faults , including emergency system shutdown , database erasure , inexplicable computer malfunction , as well as day - to - day network failures .it is also repeatable , using a pseudorandom number generator with fixed seeds .this repeatability will allow us to rerun a large segment of our calculations on a different supercomputer using different mathematical software than the initial run .this will be an unprecedented test of the efficacy of different implementations of our basic mathematical algorithms of grbner basis computation and real root counting .this experiment is part of a long - term study of a striking conjecture in the real schubert calculus made by boris shapiro and michael shapiro in 1993 .this includes two previous large computational experiments ( and several smaller ones ) , as well as more traditional work , including proofs of the shapiro conjecture for grassmannians .this story was the subject of a current events bulletin lecture at the 2009 ams meeting and a forthcoming article in the ams bulletin .this experiment is possible only because we may model the geometric problems we study on a computer , efficiently solve them , record the results , and automate this process .we describe some background in section [ sec : shapiro ] and the mathematics of the computations in section [ s : math ] . in sections [s : resources][s : quality ] , we explain the resources ( human , hardware , and software ) we utilized , the architecture of the experiment , how we ran it on a cluster , and the measures that we took to maintain the quality of our data .we end with some conclusions and remarks .our goal is to describe the design and execution of a large scale computation , which may serve as a model for other experiments in mathematics .while many aspects of our experiment are universal , the details are specific to the questions we are studying . we give some mathematical background to provide context .some solutions to a system of real polynomial equations are real and the rest occur in complex conjugate pairs . 
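as a small numerical illustration of this dichotomy (for a single univariate polynomial rather than a full system; the polynomial chosen here is arbitrary):

```python
import numpy as np

# roots of x^4 - 2x^3 + x - 1, a polynomial with real coefficients
roots = np.roots([1, -2, 0, 1, -1])
real = [r.real for r in roots if abs(r.imag) < 1e-9]
complex_pairs = [r for r in roots if r.imag > 1e-9]  # one representative per conjugate pair
print(len(real), "real roots,", len(complex_pairs), "complex-conjugate pairs")
# prints: 2 real roots, 1 complex-conjugate pairs
```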
while the structure of the equations determines the total number of solutions , the distribution between the two types depends subtly on the coefficients .surprisingly , sometimes there is additional structure which leads to finer information in terms of upper bounds or lower bounds on the number of real solutions .the shapiro conjecture is the extreme situation of having _ only _ real solutions .we give an example .set , which is a curve .we ask for the finitely many lines that meet four tangent lines to , which we take to be tangent at the points for , and some point .the first three tangents lie on the quadric defined by .we show this in figure [ f : tanquad ] , where is the tangent line at the point .{figures / hyperboloid.eps } } \put(112,3){\red{ } } \put(274,14){\red{ } } \put(3,117){\red{ } } \put(3,93){\blue{ } } \put(248,120){\brown{ } } \end{picture}\vspace{-10pt}\ ] ] these first three tangents lie on one ruling of and the lines in the other ruling are those meeting them .lines meeting all four tangents correspond to the ( two ) _ a priori _ complex points where the fourth tangent meets the quadric . as we see in figure[ f : throat ] , for any , meets the quadric in two real points , giving two real solutions to this instance of the problem of four lines .{figures / shapiro.eps } } \put(-14,51){\red{ } } \put(200,33){\red{ } } \put(167,115){\red{ } } \put(129,67){\brown{ } } \put(38,2){\brown{ } } \put(2,33){{\blue{ } } } \put(-9,95){\forestgreen{ } } \end{picture}\vspace{-10pt}\ ] ] the schubert calculus asks for the linear spaces that have specified positions with respect to other , fixed ( flags of ) linear spaces .for example , what are the 3-planes in meeting 12 given 4-planes non - trivially ?( there are 462 . )the specified positions are a , for example , the schubert problem of lines meeting four lines in 3-space .the fixed linear spaces imposing the conditions give an of the schubert problem , so that the lines , , , and give an instance of the problem of four lines .the number of solutions depends upon the schubert problem , while the solutions depend upon the instance .the shapiro conjecture begins with a rational normal curve , which is any curve projectively equivalent to the moment curve , in 1993 , boris shapiro and michael shapiro conjectured that if the fixed linear spaces osculate a rational normal curve , then all solutions to the schubert problem are real .initially , the statement seemed too strong to be possibly true .this perception changed dramatically after a few computations , leading to a systematic study of the conjecture for grassmannians , both theoretical and experimental in which about 40,000 instances were computed of 11 different schubert problems .several extremely large instances were also verified by others .this early study led to a proof of the shapiro conjecture in a limiting sense for grassmannians and a related result in the quantum cohomology of grassmannians , which drew others to the area .eremenko and gabrielov proved it for grassmannians of codimension 2 subspaces where the shapiro conjecture becomes the statement that a univariate rational function with only real critical points is ( equivalent to ) a quotient of real polynomials .later , mukhin , tarasov , and varchenko used ideas from integrable systems to prove the shapiro conjecture for grassmannians .they later gave a second proof that revealed deep connections between geometry and representation theory .this story was the subject of a current events bulletin lecture at the 
january 2009 ams meeting and a forthcoming article in the ams bulletin .the shapiro conjecture makes sense for any flag manifold ( compact rational homogeneous space ) .early calculations supported it for orthogonal grassmannians but found counterexamples for general -flag manifolds and lagrangian grassmannians .calculations suggested modifications in these last two cases and limiting versions were proved , and the conjecture for the orthogonal grassmannian wasjust proven by purbhoo .the modification for -flag manifolds , the , was refined and tested in a computational experiment involving ruffo and sottile . that ran on computers at the university of massachusetts , the mathematical sciences research institute , and texas a&m university , using 15.76 gigahertz - years of computing to study over 520 million instances of 1126 different schubert problems on 29 flag manifolds .over 165 million instances of the monotone conjecture were verified , and the investigation discovered many new and interesting phenomena . for flags consisting of a codimension 2 plane lying on a hyperplane , the monotone conjecture is a special case of a statement about real rational functions which eremenko , et .al proved .their work leads to a new conjecture for grassmannians .a flag is of a curve if every subspace in the flag is spanned by its intersections with .the asserts that if the flags in a schubert problem on a grassmannian are in that they are secant along disjoint intervals of a rational normal curve , then every solution is real .it is true for grassmannians of codimension 2 subspaces , by the result of eremenko , et .consider this for the problem of four lines .the hyperboloid in figure [ f : secant ] contains three lines that are secant to along disjoint intervals .{figures / secant.eps } } \thicklines \put(105,50){}\put(115,51){\vector(2,-1){25 } } \put(227,-2){\vector(0,1){77.8 } } \end{picture}\vspace{-10pt}\ ] ] any line secant along the indicated arc ( which is disjoint from the other three intervals ) meets the hyperboloid in two points , giving two real solutions to this instance of the secant conjecture .we are testing the secant conjecture for many schubert problems on small grassmannians of -planes in -space .( see table [ t : number_problems ] . ).schubert problems studied on as of 20 may 2009 . [ cols="^,^,^,^,^",options="header " , ] [ t : sample_table ] the column with overlap number 0 represents tests of the secant conjecture . since its only entries are in the row for 16 real solutions , the secant conjecture was verified in instances .the column labeled 1 is empty because flags for this problem can not have overlap number 1 .the most interesting feature is that for overlap number 2 , all solutions were real , while for overlap numbers 3 , 4 , and 5 , at least 4 of the 16 solutions were real , and only with overlap number 6 and greater does the schubert problem have no real solutions .this inner border , which indicates that the reality of the schubrt problem does not completely fail when there is small overlap , is found on many of the other problems that we investigated and is a new phenomenon that we do not understand .creating the software and managing this computation is a large project . 
to accomplish it , we formed a vertically - integrated team of graduate students and postdoctoral fellows under the direction of a faculty member and used modern software tools ( perl , mysql , php ) to automate the computation as well as store and visualize data .this software runs on many different computers , but primarily on a supercomputing cluster whose day job is calculus instruction . the authors of this note include johnson , who created and maintains the supercomputer we use , as well as a research team of current and former graduate students and postdoctoral fellows who have worked with sottile . pooling our knowledge in mathematics and in software development , we shared the work of creating and running this project .this structure enabled the senior members to mentor and train the junior members .we modeled our team structure on the working environment in a laboratory .this led to a division of labor and to other collaborations .for example , garca - puente and ruffo wrote the mathematical heart of the computation in a singular library .hillar , who had experience in the software industry , provided the conceptual framework and contributed most of the perl code .he worked on some of this with martn del campo , who now maintains the php webpages we use to monitor the computation .sottile and teitler maintain the software and the shell scripts for controlling the computation and ensuring the integrity of the data , and teitler rewrote the library of our main mathematical routines in macaulay 2 .this project has led to unrelated research collaborations between hillar and martn del campo and between sottile and teitler .our team includes two more junior members who have not yet contributed code and will soon include an additional postdoc .the web of collaboration and mentoring is designed to help integrate them into future projects .all mathematics departments have significant , yet deeply underutilized computing resources available in office desktop computers .there are sociological problems that can arise , for example , when your colleague has email problems while his computer is running your software .while these can be overcome , there are often simpler alternatives .many institutions have some cluster computing resources , and there are regional and national supercomputers available for research use . computers in instructional labsare another resource . 
with sufficient interest and a modest expenditure ,these can be used for research .the texas a&m university mathematics department maintains computers for undergraduate instruction .johnson , the departmental systems administrator , installed job scheduling software enabling their use as a computing cluster outside of teaching hours .the availability of this resource , as much as our mathematical interests , was the catalyst for this computational experiment .it has been the source of of the computing for this experiment , which also used some desktop computers at texas a&m university and at sam houston state university , as well as personal laptops and clusters at the homes of garca - puente and sottile .the computer programming community has developed a vast library of free , open - source software that mathematicians can use for research purposes .the three software tools that we use the most , other than specialized mathematical software , are perl , mysql , and php .we selected them because of our familiarity with them and their widely available documentation .in addition to excellent manuals , there are many web pages showing documentation , tutorials , answers to frequently asked questions , and pieces of code .the distributed nature of our computation , its size , and the amount of data we store , led us to organize the computation around a database to store the results and status of the computation .for this , we chose mysql , a freely available high - quality database program .the actual database is located on a texas a&m university mathematics department server and may be accessed from anywhere in the world . in particular , we can ( and do ) monitor and manage the computation remotely .perl is a general - purpose programming language with especially strong facilities for text manipulation and communication with other programs , including mysql .we use perl to connect together the mathematical programs that actually perform our calculations ( singular , maple ) with the database .these data are viewed through web pages , which are dynamically generated using php , a programming language designed exactly for this purpose .our interface for monitoring the experiment is at our project s web page .this model computations on individual computers controlled by a central database scales well and is very flexible .it can run on a single computer using a local database ( e.g. a personal laptop ) , on a cluster at one s home or department , or on machines at different institutions .we wanted to conduct a large computational experiment using distributed , heterogeneous computer resources and have the computation be largely automated as well as robust , repeatable , and reliable . to accomplish this, we organized it around a database that records all aspects of the computation . in section [ s : calclabs ] we explain how we run this on a cluster and in section [ s : quality ] we discuss measures to enhance the quality of our data . 
here , we focus on the organization of the computation in our software : the basic mathematical procedures , interaction with the database , dividing this large computation into reasonable - sized pieces , and lastly selecting problems to compute and setting parameters of the computation .our computation is split between three subsystems , a controlling perl script and mathematical computations in singular and in maple .we explain these choices and how it all fits together .we chose perl for its strengths in text manipulation and its interface with mysql , our database software .as explained in section [ s : math ] , we need to generate a system of polynomials and compute an eliminant , many times . in previous computations , over of computer resources were spent on computing eliminants .we need the computation to be efficient and to run on freely available software .the methods of choice for elimination are algorithms based on grbner bases . for this, we tested three grbner basis packages ( , macaulay 2 , and singular ) on a suite of representative problems .when we made our choice of elimination software in the autumn of 2007 , singular was by far the fastest .given an eliminant , we need to determine its number of real roots , quickly and reliably .this requires a symbolic algorithm based on sturm sequences , and we needed software that we could install on our many different computers . while maple is proprietary software , it has the fastest and most reliable routine , realroot , for counting real roots among the software we tested .maple was also installed on the computers we planned to use and we trust realroot completely , having used it on several billions of previous computations .the mathematical routines of elimination and real root counting are symbolic ( i.e. , exact ) algorithms .we know of no satisfactory parallel implementations , so we achieve parallelism by running different computations on different cpu cores .when our software ( a perl script ) is run , it queries the database for a schubert problem to work on and then writes a singular input file to generate the desired polynomial systems and perform the eliminations .as it writes this file , perl selects random subsets of our master list of 111 rational numbers , ( re)orders them to make secant flags , and computes ( and stores ) the overlap number for each polynomial system .after the file is written , perl calls singular to run this file to compute the eliminants and write them to a file .perl then uses that output file to create an input file for maple , which it calls maple to run .maple determines the number of real roots of each eliminant , writing that to a file .finally , perl reads maple s output , pairs the numbers of real roots with the corresponding overlap number , posts these results to the database , and updates the state of the computation .a database is just a collection of tables containing data , together with an interface that allows efficient queries .we designed a database to organize this computation .it contains the schubert problems to be studied , the results ( e.g. table [ t : sample_table ] ) , and much else in between . at all times, the database contains a complete snapshot of the calculation . 
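the per-packet control flow described above can be summarized in a schematic python analogue of the perl driver; every name below (database tables, helper functions, file names) is hypothetical and stands in for the actual generated scripts, which drive singular and maple through input files, and sqlite3 stands in for the mysql database used in production.

```python
import subprocess
import sqlite3  # stands in for the mysql database used in production

def run_packet(db_path):
    db = sqlite3.connect(db_path)
    # 1. ask the database for a packet of a schubert problem to work on
    problem_id, packet_id, seed = db.execute(
        "select problem_id, packet_id, seed from requests_queue limit 1").fetchone()
    # 2. write an input file for the computer algebra system and run the eliminations
    write_elimination_input("packet.sing", problem_id, packet_id, seed)   # hypothetical helper
    subprocess.run(["Singular", "packet.sing"], check=True)               # produces eliminants
    # 3. write an input file for the real-root counter and run it
    write_root_count_input("packet.mpl")                                  # hypothetical helper
    subprocess.run(["maple", "packet.mpl"], check=True)                   # counts real roots
    # 4. pair real-root counts with overlap numbers and post the frequencies
    for overlap, num_real in parse_results("packet.out"):                 # hypothetical helper
        db.execute("update results set frequency = frequency + 1 "
                   "where problem_id = ? and overlap = ? and num_real = ?",
                   (problem_id, overlap, num_real))
    db.commit()
```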
despite the size of this computation ,the database is quite small , about 750 kilobytes .we briefly explain some of the more important tables in our database and their role in this experiment .points contains the master list of 111 numbers used to construct secant flags .it is never altered .schubertproblems contains the list of all schubert problems we intend to study .section [ s : load ] explains how we add problems to the database .requests keeps track of how much computation is to be done on each schubert problem and what has been started .the administrators manually update requests to request more computation for particular problems , and the perl script updates requests when beginning a new computation on a schubert problem .results stores the frequency table of real solutions vs. overlap number and the amount of computing for each schubert problem .the perl script updates results after successfully completing a run .this table contains the information that our php web pages display .runninginstance contains a list of the computations that have started but have yet to be completed .we describe its function in section [ s : robustness ] .an important technical aspect of this computation is how we parcel out our computations to individual computers .there are many constraints .disk space is finite and large files are difficult to handle .some machines are available only for fixed time periods , and we must efficiently schedule their use .networks and servers have fixed capacity , so database queries should be kept to a minimum .additionally , our computations require vastly different resources , with some schubert problems taking less than gigahertz - seconds per instance while others we studied require in excess of gigahertz - seconds per instance .to balance these constraints , we divide the computation of each schubert problem into units that we call .each packet consists of between five and instances ( one to choices of the set ) , and ideally requires about 1 hour of computation .the packets are processed through one or more singular / maple input files , none containing more than 500 polynomial systems .when a computer queries the database for a problem , it is given a single packet to compute .the database stores the size and composition of the packets ( which is set when the problems are loaded into the database ) , and all information it records on the amount of computation is denominated in these packets .packets for computationally - intensive schubert problems require more than one hour of computing .schubert problems are sorted by the expected time required for a packet , and this is used in job scheduling to optimize performance .the largest computations are performed on machines with no limit on their availability and the others are parceled out according to the fit between the expected length of computation and the computer s availability .schubert problems are loaded into the database and the parameters of the computation are set using different software than the main computation .we have code to generate all schubert problems on a given grassmannian , determining the number of solutions to each schubert problem .this uses a grbner bases computation in a standard presentation of the cohomology ring , together with the giambelli formula for schubert classes .a perl script tests a subset of these problems to determine if the computation is feasible .an administrator selects feasible problems to load into the database with a software tool that runs several instances of 
each problem and decides , based upon the length of the computation , how to divide the computation into packets .it writes these data into the database and records its work in a log file .a is a simple way to organize computers to work together in which one machine ( the server ) communicates with the others ( its clients ) , but there is no communication between the clients .this model is optimal for performing many independent computations , for example , when running several computations ( e.g. computing grbner bases ) in parallel .it is a perfect match for our computational needs , and most of our experiment runs on a beowulf cluster .we describe our cluster , its job scheduling , and how we organized our use of this resource . in section [s : hardware ] we mentioned our use of instructional computers at texas a&m university which are collectively called the calclabs , as they are primarily used in calculus classes .the calclabs consist of 191 computers in five instructional labs and 12 in another lab .johnson installed the open source batch job scheduler torque resource manager on these computers , which are the clients , and on a server .users log in to the server to submit jobs to a queue from which jobs are given to computers as they becomes available .jobs are submitted to the queue with a specified time limit , both for administration and because each computer is available only for a limited time period ( typically nights , weekends , and holidays ) .jobs exceeding their time limit are terminated .a computer is given the first job ( if any ) whose time limit does not exceed its availability .as described in section [ s : packets ] , we sort schubert problems by the expected time to compute a packet to optimize this aspect of the scheduler .while we monitor the progress of our computation on the calclabs and sometimes submit jobs to the queue manually , we have largely automated the administration of this computation with the unix utility cron .cron executes scheduled tasks and is ideal for performing this administration .we have set up cron on our account on the server to run a shell script which monitors the queue , submitting jobs when the queue runs down .it does this intelligently , ensuring that the queue contains packets of differing lengths , tailored to the available computing .this runs once per hour to keep the queue well - stocked .other administrative tasks ( rotating logs , deleting old temporary files and archiving the database ) are performed once each day .since the scheduler submits one job to each machine , but our software runs on a single core , the jobs are themselves shell scripts that run one copy of our software for each cpu core on the given machine .an essential requirement in experimental science is that results are reproducible .this is easy to ensure in computational experiments by using deterministic algorithms reliably implemented in software .a second requirement is proper experimental design to ensure that a representative sample has been tested .computational experiments may marry these two requirements by using ( pseudo ) random number generators with fixed seeds for random sampling , and storing the seeds .we explain our choices for experimental design and how we ensure the reproducibility of our computation . 
in principle, this experiment could be rerun , recreating every step in every computation .this repeatability was essential for software development and testing , for checks on the integrity of our data , and it will allow us to rerun a large part of the experiment using different software on a different cluster . with a computation of this complexity , failures of the software and networks are inevitable .we explain how we recover from such failures , both those we anticipate and those that we do not .schubert problems come in a countably infinite family with only a few tens of thousands small enough to model on a computer .we study many of the computable schubert problems , and for each , we test thousands to millions of instances of the secant conjecture .previous computations have shown the value of such indiscriminate testing .the seminal example of the monotone conjecture ( the cover illustration for the issue of experimental mathematics in which the paper appeared ) was tested late in that experiment , and only after the undergraduate member of that team asked why we were omitting it .( sottile mistakenly thought it would be uninteresting . )also , extensive initial tests of the shapiro conjecture for flag manifolds appeared to affirm its validity ( it is in fact false ) .later was it realized that , by poor experimental design , only cases of the monotone conjecture had been tested , thereby overlooking counterexamples to the shapiro conjecture .we kept these lessons in mind when designing the current experiment .we were indiscriminate in selecting problems , studying all schubert problems on grassmannians in 4- , 5- , and 6-dimensional space , as well as on and , where is the grassmannian of -planes in -space .we have also studied many computable problems on and will study many on , , , , and . for these last six grassmannians , we are computing a random selection of problems .while it is hard to be precise , of the 7286 schubert problems on , we estimate that 3000 could be studied with our software .the rest are too large to compute in a reasonable time or are infeasible . for a problem involving schubert conditions , there are ways to order the intervals for secants .our software randomly reorders the conditions before constructing secant flags , to remove bias from the given ordering .more serious is the question of how uniformly we are selecting from among all secant flags .we do not have a satisfactory answer to this .while one may believe that random subsets of our 111 master numbers ( shown in figure [ f : rp1 ] ) are fairly uniform modulo the action of , we instead offer experience gained in the previous experiment studying the monotone conjecture . there, the results of a computation ( e.g. verifying the conjecture and an inner border as in table [ t : sample_table ] ) did not appear to depend upon how we selected subsets of a master set of numbers .the selections included such schemes as all subsets of the numbers , or random subsets of the first 20 prime numbers , or random subsets of all rational numbers where are the integer points closest to , for .( this last scheme is likely nearly uniform . 
)random choices are made with the help of a pseudorandom number generator .its output depends deterministically , but to all appearances unpredictably , on a state variable called a , which is deterministically updated after each call .thus two sequences of calls to a pseudorandom number generator beginning with different seeds give unrelated sequences of integers , but if the seeds are the same , the sequences are identical . we take advantage of this by generating an initial seed for each schubert problem before computing its first packet .this initial seed is determined by the current state of the computer .when packet is begun , the seed is set to this initial seed and calls are made to the pseudorandom number generator to set the seed for that packet . in this way , the computation of a given schubert problem is completely determined given this initial seed , which is stored in our database .this exact reproducibility of results in a computational experiment is much stronger than the notion of reproducibility for statistical results , and is a feature that we exploit .we use it for software development to test upgrades and to ensure that the software runs properly on different machines .for this , we simply copy some problems and their initial seeds to an empty database , run all requested computations , and then compare the new results with the old results .( they have always agreed . )we have a software tool to automate this process and now use it on individual laptops to rerun the computation for some schubert problems to validate the data for these problems from the initial run .we understand that gimps also uses such double - checking for validation .more interesting , and we believe unprecedented in mathematical experimentation , we are starting to use this feature to rerun a large segment of the calculation on a different cluster running a different version of linux and different hardware and also using different software for our basic mathematical routines .we use macaulay 2 for elimination in place of singular and sarag in place of maple for counting real roots of univariate polynomials . besides providing an independent check on the data we generate , this will also give a direct comparison of the efficacy of different implementations of these basic mathematical routines .as with any complicated task , we can not avoid the unexpected ( the unknown unknowns ) , and have designed our software to recover from the many different failures that inevitably occur . for this , we have several interlocking systems to prevent corrupted calculations from being entered in the database .we also rerun corrupted packets , and even rerun all or part of a schubert problem whose data appear suspect . 
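for concreteness, the per-packet seeding scheme described above can be sketched in a few lines of python before turning to the fault-recovery machinery; the seed values and names here are made up for illustration, and the production driver is written in perl.

```python
import random

def packet_rng(initial_seed, packet_number):
    """Return a generator whose stream is fixed by the problem's stored initial
    seed and the packet number, so any packet can be recomputed exactly."""
    rng = random.Random(initial_seed)      # reset to the problem's initial seed
    for _ in range(packet_number):         # advance deterministically, one draw per earlier packet
        rng.random()
    packet_seed = rng.getrandbits(32)      # seed specific to this packet
    return random.Random(packet_seed)

# the same (initial_seed, packet_number) always yields the same random choices
r1 = packet_rng(20090520, 7).sample(range(111), 6)
r2 = packet_rng(20090520, 7).sample(range(111), 6)
assert r1 == r2
```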
first , our software has checks to ensure that tasks ( connecting with the database , calling client programs , and reading / writing data in files ) are successfully executed , and which terminate its running when something untoward is detected .other errors , such a network errors or unmounted file systems , cause noisier failures , which are captured by our log files for possible diagnosis .there are even less graceful failures in a computation , such as power outage , termination of jobs by the job server , or simple human error .all of these abort the computation of a packet and therefore lead to packets whose computation has started , but whose results have not been submitted to the database .the table runninginstance in our database keeps track of packets whose computation has started but not finished , together with the expected completion time .packets that have been terminated in any way are recognized by having an expected completion time that has long passed .when our software queries the database for a problem to work on , it first checks for any such overdue packets .if one is found , it deletes that record from runninginstance and creates a new record corresponding to this new computation .otherwise it finds a fresh packet to compute , creating a record in runninginstance . upon successful completion, these records are deleted .one possible graceful failure is for a computation to end , but discover that its record in runninginstance has been deleted and superseded by another preventing a second submission of the same data to the database . while this method works for the few to hundreds ( out of thousands ) of packets each day that fail before successful completion ,sometimes our data becomes corrupted , or possibly corrupted .we also have a software tool that finds the most recent database backup where that schubert problem is uncorrupted and restores that schubert problem to this previous state .thus we simply recompute all or part of the computations for that schubert problem .we plan to continue this work of mathematical discovery through advanced computing . at the conclusion of this experiment in december 2009 , we will write a full paper describing its mathematical results . as of may 2009 ,the secant conjecture was verified in each of the over 250 million instances we checked . in 2010we plan to start a related experiment , testing a common generalization of the monotone and secant conjectures .this will last about one year .while there is much more to be discovered studying these variants of the shapiro conjecture , we plan a long - term , multifaceted , and systematic study of galois groups of schubert problems , building on the work in .we mention one side benefit from this computation . 
in march 2009 , after sharing timing data from a benchmark computation with mike stillman , a developer of macaulay 2 , he rewrote some code that improved its running by several orders of magnitude .we have described how and why we set up , organized , and are running a very large computational experiment to study a conjecture in pure mathematics , and how it is possible to harness underused yet widely available computing resources for mathematical research .we believe this model a team - based approach to designing and monitoring a large computational experiment can fruitfully be replicated in other settings and we encourage you to try it .sara billey and ravi vakil , _ intersections of schubert varieties and other permutation array schemes _ , algorithms in algebraic geometry , i m a vol .146 , springer , new york , 2008 , pp .2154 .to3em , _ from enumerative geometry to solving systems of polynomials equations _, computations in algebraic geometry with macaulay 2 , algorithms comput . math . , vol . 8 ,springer , berlin , 2002 , pp .101129 .to3em , _ enumerative real algebraic geometry _ , algorithmic and quantitative real algebraic geometry ( piscataway , nj , 2001 ) , dimacs ser .discrete math ., vol . 60 , amer . math. soc . , providence , ri , 2003 , pp .
|
we describe a general framework for large - scale computational experiments in mathematics using computer resources that are available in most mathematics departments . this framework was developed for an experiment that is helping to formulate and test conjectures in the real schubert calculus . largely using machines in instructional computer labs during off - hours and university breaks , it consumed in excess of 350 gigahertz - years of computing in its first six months of operation , solving over 1.1 billion polynomial systems .
|
entanglement has been used as a key resource in many tasks in quantum information processing. as a famous example of these tasks, quantum-state teleportation means that an unknown quantum state is transferred among distant parties without physically sending the particle. another important task is teleportation of a quantum operation, where instead of an unknown state, an unknown quantum operation is transferred without physically sending the device. if the teleported operation also acts on a remote unknown state, this task can also be called "remote implementation of an operation". recently, research on this aspect has been carried out in both theory and experiment. when the operation is completely unknown, this remote implementation has to be completed via so-called bidirectional quantum state teleportation (bqst), in which the receiver teleports his target state to the sender, and then, after applying the operation, the sender teleports it back to the receiver. evidently, only a pair of quantum-state teleportations and one local quantum operation are implemented, and the required entanglement resources are twice those of a single quantum-state teleportation. it is very interesting when the operation is partially unknown. here, "partially unknown" quantum operations means that they belong to some restricted sets that satisfy given restriction conditions. there are protocols via which a partially unknown operation can be remotely implemented using fewer resources than via bqst. in other words, any operation in the restricted set with respect to a protocol can be remotely implemented via this protocol. entanglement is a scarce resource in quantum information processing, and it is more expensive than classical resources such as classical communication. so, these protocols should also use as little entanglement as possible, and this economization is not insignificant. in the case of one-qubit operations, there are two such restricted sets, and operations in either of them can be teleported via a protocol (hpv) using the least entanglement resources. these two restricted sets are one set consisting of diagonal operations and one consisting of antidiagonal operations. in fact, the hpv protocol may then be considered as a group of two subprotocols with these two restricted sets respectively. in the hpv protocol, only one _e_-bit of entanglement resources is required, and this is optimal. these results have been developed for multiqubit cases. operations in any one of the restricted sets in which there is just one nonzero element in each row and each column can be teleported via an extended protocol (wang) using the least entanglement resources. in the case of _n_-qubit operations there are many such restricted sets, one for each permutation of the computational basis states, and the wang protocol again uses the least entanglement resources, which is optimal too. furthermore, the hpv protocol is apparently the special case of wang's protocol with n = 1. the restricted sets in the wang protocol consist of matrices that have just one nonzero element in each column and each row. if the nonzero elements are replaced by full-rank square matrices, all of the same order, can we find protocols via which the operations can be teleported using the least entanglement resources?
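to make this block structure concrete, the following small check tests whether a matrix, viewed in blocks of a given size, has exactly one nonzero block in each block-row and block-column, with every nonzero block of full rank; the function name, tolerance, and example operator are illustrative only and are not taken from the protocols discussed in this paper.

```python
import numpy as np

def is_block_permutation_like(U, block, tol=1e-10):
    """Check the block structure discussed above: exactly one nonzero
    `block` x `block` sub-matrix in every block-row and block-column,
    each nonzero block being of full rank."""
    n = U.shape[0]
    assert U.shape == (n, n) and n % block == 0
    m = n // block
    nonzero = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(m):
            B = U[i*block:(i+1)*block, j*block:(j+1)*block]
            if np.linalg.norm(B) > tol:
                if np.linalg.matrix_rank(B, tol=tol) < block:
                    return False                      # a nonzero block must be full rank
                nonzero[i, j] = True
    # exactly one nonzero block per block-row and per block-column
    return bool(np.all(nonzero.sum(axis=0) == 1) and np.all(nonzero.sum(axis=1) == 1))

# example: a 4 x 4 operator made of 2 x 2 blocks arranged anti-diagonally
U = np.zeros((4, 4))
U[0:2, 2:4] = np.array([[0.0, 1.0], [1.0, 0.0]])
U[2:4, 0:2] = np.eye(2)
print(is_block_permutation_like(U, block=2))   # True
```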
in this paper , we will propose a protocol by hybriding wang protocol and bqst , and furthermore , as their generalization and combination , it can be reduced to wang protocol , hpv protocol and bqst .this protocol will work when the restricted sets are block matrices which has just one none zero block in any column or any row , and every block of which is a full rank matrix .this paper is organized as follows .we will introduce hpv protocol and wang protocol firstly in sec .then , we will specify our new protocol and point out its optimality in sec .we summarize our conclusions and discuss some problems in sec .[ con ] . in apprendix[ prove ] , we will give the proof of the new protocol .in the scene of remote implementation of quantum operations , alice is set as a sender and bob is set as a receiver .alice has the device that implement the local operation , and bob has the unknown qubits to be operated .they also share the necessary entanglement resources and have some accessorial qubits that assist them to accomplish the object . as a result of an available protocol, bob must finally get the qubits whose state is the same as bob s initial qubits state operated directly by the operation under the precondition without noise channel .furthermore , the protocol must involve only local quantum operations and classical communication ( locc ) . in ref . , the authors proposed the remote implementation of a quantum operation in some given restricted set .they studied the case of one - qubit operations , and propose a simple but available protocol ( hpv ) , and demostrated its optimality . in the simplified hpv protocol, the initial state of the joint system of alice and bob is where is a bell states that is shared by alice and bob .the qubit at alice s side named qubit _ a _ , and the other at bob s side named qubit _b_. the qubit _ y _ is the qubit to be operated at bob s side , and it is entirely unkown , that is , it can be in any pure state .the quantum operation to be remote implemented belongs to one of the following two restricted sets it means that hpv protocol works when the operation belongs to either of them .we will use to denote the opertaion to be remote implemented . in every actual processing , can only be exactly one value , and it is kown by alice .before the protocol starts , alice should tell bob the information of the restricted sets using one bit through classical communication .hpv protocol can be expressed as following steps . [[ step-1-bobs - preparation . ] ] step 1 : bob s preparation .+ + + + + + + + + + + + + + + + + + + + + + + + + + bob first performs a controlled - not using qubit _y _ as the control and qubit _ b _ as the target. then , he measures the qubit _ b _ in the computational bases .so , bob s preparation operations can be written as where is a identity matrix and are the pauli matrices . [ [ step-2-classical - communication - from - bob - to - alice . ] ] step 2 : classical communication from bob to alice .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + bob tells alice his measurement result using one classical bit via a classical communication channel .[ [ step-3-alices - sending . ] ] step 3 : alice s sending . + + + + + + + + + + + + + + + + + + + + + + + + after receiving the classical bit , alice performs on qubit _ a _ , and then performs the operation on qubit _a_. 
then , alice performs a hadamard transformation on qubit _ a _ , and then measures it in the computational basis . so , alice s sending operations can be written as ,\ ] ] where is the hadamard transformation . step 4 : classical communication from alice to bob . alice tells bob her measurement result using one classical bit via a classical communication channel . step 5 : bob s recovery . after receiving alice s bit , bob first performs on qubit _ y _ , and then performs on it . here , is when , and is when . so , bob s recovery operations can be written as it is easy to conclude that after all steps are finished , bob s qubit _ y _ ends up in the state . this means that the protocol is faithful and deterministic . all of the operations in the protocol can be jointly written as \left[{\mathcal{s}}_a(a , b;d)\otimes\sigma_0^b\otimes\sigma_0^y\right ] \left[\sigma_0^a\otimes{\mathcal{p}}_b(b)\right].\ ] ] so , the processing of the protocol can be expressed as we plot the quantum circuit of the hpv protocol in fig . [ fig_hpv ] . quantum circuit of the hpv protocol , where is the quantum operation to be remotely implemented , is the hadamard gate , are identity matrices or not gates ( ) with respect to or , respectively , and is an identity matrix when or a phase gate ( ) when . `` '' indicates the transmission of classical communication . ]
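to make the flow of the five steps concrete , the following numpy sketch ( our own illustration rather than the authors code ; the phases and amplitudes are arbitrary , only the diagonal restricted set is simulated , and the antidiagonal case would need the other recovery operation ) runs the hpv protocol on a pure state and checks that bob s qubit ends up in the state obtained by applying the operation directly :

import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])

def op(*factors):
    # tensor product of single-qubit operators, in qubit order
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

def measure(state, qubit, n_qubits, rng):
    """Computational-basis measurement of one qubit of an n-qubit pure state."""
    proj0 = op(*[P0 if q == qubit else I2 for q in range(n_qubits)])
    p0 = np.real(state.conj() @ proj0 @ state)
    if rng.random() < p0:
        return 0, proj0 @ state / np.sqrt(p0)
    proj1 = np.eye(2 ** n_qubits) - proj0
    return 1, proj1 @ state / np.sqrt(1 - p0)

rng = np.random.default_rng(1)
U = np.diag(np.exp(1j * np.array([0.3, 1.7])))                   # hypothetical diagonal operation
psi = np.array([np.cos(0.4), np.sin(0.4) * np.exp(1j * 0.9)])    # Bob's unknown qubit y

# qubit order: a (Alice), b (Bob, Bell half), y (Bob, target); Bell pair (|00>+|11>)/sqrt(2)
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
state = np.kron(bell, psi)

# step 1: Bob applies CNOT with y as control and b as target, then measures b
state = (op(I2, I2, P0) + op(I2, X, P1)) @ state
b, state = measure(state, 1, 3, rng)
# step 3: Alice applies X^b, then the operation U, then H on qubit a, and measures it
if b:
    state = op(X, I2, I2) @ state
state = op(H @ U, I2, I2) @ state
a, state = measure(state, 0, 3, rng)
# step 5: Bob applies Z^a on y (the recovery for the diagonal set)
if a:
    state = op(I2, I2, Z) @ state

# compare Bob's qubit y with the directly operated state
expected = np.kron(np.kron(np.eye(2)[a], np.eye(2)[b]), U @ psi)
print(b, a, abs(np.vdot(expected, state)))   # fidelity, should be 1.0

the printed fidelity is 1.0 whatever the two measurement outcomes are , which is what `` faithful and deterministic '' means operationally .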
wang protocol deals with the multiqubit case . in the case of qubits , the initial state of alice and bob is where is an arbitrary pure state . alice has the qubits , and bob has the qubits and . the operation to be remotely implemented is in one of the following restricted sets where indicates the decimal system , so , , , etc . and , is a permutation of the list , where labels all of the permutations , and also labels all of the restricted sets . the initial state of qubits can be similarly written as similarly , alice should tell bob the information of the restricted set , so that bob can choose the corresponding recovery operation of wang protocol . wang protocol can be expressed as the following steps . step 1 : bob s preparation . bob first performs controlled - not gates , respectively using qubits as the controls and qubits as the targets . then , he measures the qubits in the computational basis . so , bob s preparation operations can be written as .\ ] ] step 2 : classical communication from bob to alice . bob transfers his measurement results to alice using classical bits . step 3 : alice s sending . after receiving the classical bits , alice performs on qubit respectively , and then performs the _ n_-qubit operation on qubits . then , alice performs a hadamard transformation on each qubit respectively , and then measures them in the computational basis . so , alice s sending operations can be written as step 4 : classical communication from alice to bob . alice tells bob her measurement results using n classical bits via classical communication channels . step 5 : bob s recovery . bob first performs on qubits , and then performs respectively on them . here , is an _ n_-qubit transformation that depends only on , that is , only on the kind of restricted set . so , bob s recovery operations can be written as all of the operations in the protocol can be written as after the protocol is finished , the final state becomes thus , the protocol is faithful and deterministic , too . it should be pointed out that these two protocols in this section both remain available even if bob s qubits to be operated on are in a mixed state , because all operations in them are linear . thus , the qubits to be operated on can indeed be general , whether they are in a pure state or in a mixed state . consider the following restricted sets of qubit operations where can be any full - rank matrices . they are similar to the restricted sets in wang protocol , with the numbers simply replaced by the matrices . so we can attempt to deal with the anterior -qubit part of the operations similarly to wang protocol , and deal with the posterior -qubit part via bqst . because all of the operations in these protocols are linear , we can expect this method to be successful . this protocol can then be called the `` hybrid protocol '' , and apparently , bell states are required . the protocol will be specified in the remainder of this section , and the full proof can be found in appendix [ prove ] . of course , in this protocol , alice should first tell bob the information of the restricted set , so that bob can choose the corresponding recovery operation , just as in hpv protocol and in wang protocol . the initial state of alice and bob is where is an arbitrary pure state . alice has the qubits , and bob has the qubits and . the hybrid protocol can be expressed as the following steps . step 1 : bob s preparation . bob s operations in this step are the same as in wang protocol , that is : ,\ ] ]
step 2 : classical communication and teleportations from bob to alice . in this step , bob first tells alice his measurement results , then teleports the qubits to alice s qubits respectively using the bell states . step 3 : alice s sending . in this step , alice s operations are similar to those in wang protocol . she needs only to replace the operation in wang protocol by the operation . her operations can be expressed as step 4 : classical communication and teleportations from alice to bob . alice first tells bob her measurement results . then , she teleports the qubits to bob s qubits respectively using the bell states . step 5 : bob s recovery . bob first does the same as in wang protocol . then , he performs additional swapping operations on the qubits respectively . the swapping operation can be expressed as and apparently , just exchanges the states of qubits . so , his operations in this step can be expressed as after the protocol is completed , the final state of qubits becomes at this point , the initial aim is accomplished , and the protocol is faithful and deterministic . in this protocol , _ e_-bits are required . these entanglement resources are necessary for any protocol that can faithfully teleport every operation in one of the restricted sets . this conclusion can be drawn using methods similar to those in ref . we can also obtain it by considering the following set where , can be any -qubit operation . apparently , , so if a protocol is available for restricted set , it is also available for restricted set . but operations in are only direct products of an _ n_-qubit operation and an -qubit operation . in fact , remote implementations of such operations can be separated into two independent parts , one for the anterior qubits and the other for the posterior qubits . so , from refs . and , any protocol that can faithfully teleport every operation in set has to consume no fewer than _ e_-bits of entanglement resources . thus , our protocol is optimal in this case . and , because our restricted sets are not the foregoing trivial one , our protocol is nontrivial too . furthermore , when our protocol reduces to wang protocol , and when it reduces to bqst . in particular , when and it becomes hpv protocol . in this paper , we consider the remote implementation of operations in restricted sets that have a block form .
operations in restricted sets like this can not be dealt with by any anterior protocol except for bqst .but , too many entanglement resources are required if directly using bqst protocol .we have proposed a protocol that can be used to deal with the case that the restricted sets have a form specified in anterior section .any anterior protocol can be regarded as a special case of this protocol .then we have pointed out that our protocol is optimal , that is , it consumes the least entanglment resources .there are many other restricted sets that our protocol can not be used to deal with .however , because all of the elementary quantum gates can be included in our restricted sets , after using wang s combined protocol , this problem is not serious . perhaps in a process of remote implementation of a quantum algorithm , our protocols are enough .of course , further researches can be made on quantum operations structure to classify the restricted sets , and on new protocols for every class of restricted sets .our method would provide some clues on these researches .furthermore , our method could also be used to the combined and the controlled remote implementations in ref . .remote implementations of quantum operations is a critical step for the implementation of quantum ditributing computation and teleportation - based models of quantum computation .investigations on it can give helps to the researches of the forgoing issues .we acknowledge all the collaborators of our quantum theory group at the institute for theoretical physics of our university .this work was funded by the national natural science foundation of china under grant no .in this appendix , we prove the hybrid protocol proposed in sec .[ new ] , and some detailed technologies are similar to the ref . the initial state of the qubits can always be expressed as where or need not be orthogonal each other .so , in the sense of swapping transformations , the initial state of the total system can be expressed as after bob s preparation , the state becomes ( |00k_i\rangle+|11k_i\rangle)_{a_ib_iy_i } \right\ } \otimes |\eta_{k_1,k_2,\cdots , k_{n}}\rangle_{y_{n+1}\cdots y_{n+m}}.\end{aligned}\ ] ] from , ( |00k_i\rangle+|11k_i\rangle)_{a_ib_iy_i } = \sigma_{b_i}^{a_i } { \mbox{}}_{a_ib_iy_i}.\ ] ] so , \otimes |\eta_{k_1,k_2,\cdots , k_{n}}\rangle_{y_{n+1}\cdots y_{n+m } } \nonumber \\ & = & \left(\bigotimes_{m = n+1}^{n+2m}{\mbox{}}_{a_mb_m}\right ) \otimes \bigotimes_{n=1}^{n } { \mbox{}}_{b_n } \otimes \frac{1}{\sqrt{2^n } } \sum_{k_1,k_2,\cdots , k_{n}=0}^{1 } y_{k_1,k_2,\cdots , k_{n } } \nonumber \\ & & \left[\bigotimes_{i=1}^{n } \sigma_{b_i}^{a_i } { \mbox{}}_{a_i}\right ] \otimes \left[\bigotimes_{j=1}^{n } { \mbox{}}_{y_j}\right ] \otimes |\eta_{k_1,k_2,\cdots , k_{n}}\rangle_{y_{n+1}\cdots y_{n+m}}. \nonumber \\\end{aligned}\ ] ] after the teleportations from bob to alice , the state of qubits are replaced by the qubits .so , the state of qubits becomes \otimes \left[\bigotimes_{j=1}^{n } { \mbox{}}_{y_j}\right ] \otimes |\eta_{k_1,k_2,\cdots , k_{n}}\rangle_{a_{n+1}\cdots a_{n+m}}. 
\nonumber \\\end{aligned}\ ] ] after the step of alice s sending , the state of qubits becomes \otimes \left[\bigotimes_{j=1}^{n } { \mbox{}}_{y_j}\right ] \otimes |\eta_{k_1,k_2,\cdots , k_{n}}\rangle_{a_{n+1}\cdots a_{n+m } } \nonumber \\ & = & \left(\bigotimes_{m=1}^n { \mbox{}}_{a_m}{\mbox{}}\right ) \left(\bigotimes_{m=1}^n h^{a_m}\right ) t^r_{n , m}(x , g)^{a_1a_2\cdots a_{n+m } } \nonumber \\ & & \sum_{m=1}^{2^n } y_m & = & \sum_{m=1}^{2^n } y_{m } |m , d\rangle_{y_1\cdots y_n } \nonumber \\ & & \otimes \sum_{j=1}^{2^n } \left\ { \left[\bigotimes_{i=1}^n ( |a_i\rangle_{a_i}\langle a_i| h^{a_i } ) \right ] \times |p_j(x),d\rangle \langle j , d| \right\ } \times |m , d\rangle_{a_1\cdots a_n } \otimes g_j |\eta_{m}\rangle_{a_{n+1}\cdots a_{n+m } } \nonumber \\ & = & \sum_{m=1}^{2^n } y_{m } |m , d\rangle_{y_1\cdots y_n } \otimes \left[\bigotimes_{i=1}^n ( |a_i\rangle_{a_i}\langle a_i| h^{a_i } ) \right ] \times |p_m(x),d\rangle_{a_1\cdots a_n } \otimes g_m |\eta_{m}\rangle_{a_{n+1}\cdotsa_{n+m}}. \nonumber \\\end{aligned}\ ] ] denote then , \nonumber \\ & = & \sum_{m=1}^{2^n } y_{m } |m , d\rangle_{y_1\cdots y_n } \otimes g_m |\eta_{m}\rangle_{a_{n+1}\cdots a_{n+m } } \otimes \left[\bigotimes_{i=1}^n ( -1)^{a_il_m^i(x ) } |a_i\rangle_{a_i } \right ] \nonumber \\ & = & \left[\bigotimes_{i=1}^n |a_i\rangle_{a_i } \right ] \otimes \sum_{m=1}^{2^n } \left[\prod_{k=1}^n ( -1)^{a_kl_m^k(x)}\right ] y_{m } |m , d\rangle_{y_1\cdots y_n } \otimes g_m |\eta_{m}\rangle_{a_{n+1}\cdots a_{n+m}}. \nonumber \\\end{aligned}\ ] ] apparently , so in the step of bob s recovery , before the swapping oparations are implemented , the state of qubits becomes y_{m } \left(\bigotimes_{i=1}^nr(a_i)^{y_i}\right ) |p_m(x),d\rangle_{y_1\cdots y_n } \otimes g_m |\eta_{m}\rangle_{b_{n+m+1}\cdots b_{n+2 m } } \nonumber \\ & = & \sum_{m=1}^{2^n } \left[\prod_{k=1}^n ( -1)^{a_kl_m^k(x)}\right ] y_{m } \left(\bigotimes_{i=1}^nr(a_i)^{y_i } |l_m^i(x)\rangle_{y_i } \right ) \otimes g_m |\eta_{m}\rangle_{b_{n+m+1}\cdots b_{n+2 m } } \nonumber \\ & = & \sum_{m=1}^{2^n } \left[\prod_{k=1}^n ( -1)^{a_kl_m^k(x)}\right ] y_{m } \left(\bigotimes_{i=1}^n ( -1)^{a_il_m^i(x ) } |l_m^i(x)\rangle_{y_i } \right ) \otimes g_m |\eta_{m}\rangle_{b_{n+m+1}\cdots b_{n+2 m } } \nonumber \\ & = & \sum_{m=1}^{2^n } y_{m } \bigotimes_{i=1}^n |l_m^i(x)\rangle_{y_i } \otimes g_m |\eta_{m}\rangle_{b_{n+m+1}\cdotsb_{n+2 m } } \nonumber \\ & = & \sum_{m=1}^{2^n } y_{m } |p_m(x),d\rangle_{y_1\cdots y_n } \otimes g_m |\eta_{m}\rangle_{b_{n+m+1}\cdotsb_{n+2m}}. \nonumber \\\end{aligned}\ ] ] after the swapping oparations , the final state of qubits becomes 20 m. b. plenio and v. vedral , contemp .phys . * 39 * , 431 ( 1998 ) c.h .bennett , g. brassard , c. crpeau , r. jozsa , a. peres , and w. k. wootters , phys . rev. lett . * 70 * , 1895 ( 1993 ) s. f. huelga , j. a. vaccaro , a. chefles , and m. b. plenio , phys . rev .a * 63 * , 042303 ( 2001 ) s. f. huelga , m. b. plenio , and j. a. vaccaro , phys .a * 65 * , 042316 ( 2002 ) a. m. wang , phys .a * 74 * , 032317 ( 2006 ) y .- f huang , x .- f ren ,zhang , l .- m .duan , and g .- c guo , phys .lett . * 93 * , 240501 ( 2004 ) g .- y xiang , j. li , g .- c .guo , phys .a * 71 * , 044304 ( 2005 ) s. f. huelga , m. b. plenio , g .- y .xiang , j. li , and g .-c guo , j. opt .b : quantum semiclass . opt .* 7 * ( 2005 ) s384 a. m. wang , phys .a * 75 * , 062323 ( 2007 )
|
we propose a protocol of remote implementations of quantum operations by hybridizing bidirectional quantum state teleportation s ( bqst ) and wang s one . the protocol is available for remote implemetations of quantum operations in the restricted sets specified in sec . [ new ] . we also give the proof of the protocol and point out its optimization . as an extension , this hybrid protocol can be reduced to bqst and wang protocols .
|
[ [ mastermind - at - a - glance . ] ] * _ mastermind at a glance . _ * + + + + + + + + + + + + + + + + + + + + + + + + + + + _ mastermind _ is a code - breaking board game released in 1971 , which sold over 50 million sets in 80 countries . the israeli postmaster and telecommunication expert mordecai meirowitz is usually credited with inventing it in 1970 , although an almost identical paper - and - pencil game called _ bulls and cows _ predated mastermind , perhaps by more than a century . the classic variation of the game is played between a _ codemaker _ , who chooses a secret sequence of four colored pegs , and a _ codebreaker _ , who tries to guess it in several attempts . there are six available colors , and the secret code may contain repeated colors . after each attempt , the codebreaker gets a _ rating _ from the codemaker , consisting of the number of correctly placed pegs in the last guess , and the number of pegs that have the correct color but are misplaced . the rating does not tell which pegs are correct , but only how many . these two numbers are communicated by the codemaker as a sequence of smaller black pegs and white pegs , respectively ( see , where the secret code is concealed behind a shield , and each guess is paired with its rating ) . if the codebreaker s last guess was wrong , he guesses again , and the game repeats until the secret code is found , or the codebreaker reaches his limit of ten trials . ideally , the codebreaker plans his new guesses according to the information he collected from the previous guesses . depicts a complete game of mastermind , where colors are encoded as numbers between zero and five , and the codebreaker finally guesses the code at his sixth attempt . [ tab:1 : the six guesses of this game and their ratings ; the individual peg values are not recoverable from this extraction . ] [ [ previous - work . ] ] * _ previous work . _ * + + + + + + + + + + + + + + + + + + recently , focardi and luccio pointed out the unexpected relevance of mastermind in real - life security issues , by showing how certain api - level bank frauds , aimed at disclosing user pins , can be interpreted as an extended mastermind game played between an insider and the bank s computers . on the other hand , goodrich suggested some applications to genetics of the mastermind variation in which scores consist of black pegs only , called _ single - count ( black peg ) mastermind _ .
as a further generalization of the original game , we may consider _-mastermind _ , where the secret sequence consists of pegs , and there are available colors .chvtal proved that the codebreaker can always determine the secret code in -mastermind after at most guesses , each computable in polynomial time , via a simple divide - and - conquer strategy .this upper bound was later lowered by a constant factor in , while goodrich also claimed to be able to lower it for single - count ( black peg ) mastermind , hence using even less information .unfortunately , after a careful inspection , goodrich s method turns out to outperform chvtal s several techniques given in asymptotically ( as grows , and is a function of ) only if , for every .however , despite being able to guess any secret code with an efficient strategy , the codebreaker may insist on really minimizing the number of trials , either in the worst case or on average .knuth proposed a heuristic that exhaustively searches through all possible guesses and ratings , and greedily picks a guess that will minimize the number of eligible solutions , in the worst case .this is practical and worst - case optimal for standard -mastermind , but infeasible and suboptimal for even slightly bigger instances .the size of the solution space is employed as an ideal quality indicator also in other heuristics , most notably those based on genetic algorithms . in order to approach the emerging complexity theoretic issues , stuckman and zhang introduced the mastermind satisfiability problem ( msp ) for -mastermind , namely the problem of deciding if a given sequence of guesses and ratings has indeed a solution , and proved its -completeness .similarly , goodrich showed that also the analogous satisfiability problem for single - count ( black peg ) mastermind is -complete .interestingly , stuckman and zhang observed that the problem of detecting msp instances with a unique solution is turing - reducible to the problem of producing an eligible solution .however , the determination of the exact complexity of the first problem is left open .[ [ our - contribution . ] ] * _ our contribution . _* + + + + + + + + + + + + + + + + + + + + + in this paper we study # msp , the _ counting problem _ associated with msp , i.e. , the problem of computing the number of solutions that are compatible with a given set of guesses and ratings .we do this for standard -mastermind , as well as its single - count variation with only black peg ratings , and the analogous single - count variation with only white peg ratings , both in general and restricted to instances with a fixed number of colors .our main theorem states that , in all the aforementioned variations of mastermind , # msp is either trivially polynomial or -complete under _ parsimonious reductions_. capturing the true complexity of # msp is an improvement on previous results ( refer to ) because : * evaluating the size of the search space is a natural and recurring subproblem in several heuristics , whereas merely deciding if a set of guesses has a solution seems a more fictitious problem , especially because in a real game of mastermind we already know that our previous guesses and ratings _ do _ have a solution . *the reductions we give are parsimonious , hence they yield stronger versions of all the previously known -completeness proofs for msp and its variations . moreover , we obtain the same hardness results even for -mastermind , whereas all the previous reductions used unboundedly many colors ( see ) . 
*our main theorem enables simple proofs of a wealth of complexity - related corollaries , including the hardness of detecting unique solutions , which was left open in ( see ) .[ [ paper - structure . ] ] * _ paper structure . _ * + + + + + + + + + + + + + + + + + + + + in we define # msp and its variations .contains a statement and proof of our main result , , and an example of reduction . inwe apply to several promise problems with different assumptions on the search space , and finally in we suggest some directions for further research .[ [ codes - and - ratings . ] ] * _ codes and ratings . _ * + + + + + + + + + + + + + + + + + + + + + + for -mastermind , let the set be the _ code space _ , whose elements are _ codes _ of numbers ranging from to . following chvtal , we define two _ metrics _ on the code space .if and are two codes , let be the number of subscripts with , and let be the largest , with running through all the permutations of . as observed in , and are indeed distance functions , respectively on and ( i.e. , the code space where codes are equivalent up to reordering of their elements ). given a secret code chosen by the codemaker , we define the _ rating _ of a guess , for all the three variants of mastermind we want to model . * for standard mastermind , let . * for single - count black peg mastermind ,let . * for single - count white peg mastermind ,let .a guess is considered correct in single - count white peg -mastermind whenever its rating is , therefore the secret code has to be guessed only up to reordering of the numbers . as a consequence , the codebreaker can always guess the code after attempts : he can determine the number of pegs of each color via monochromatic guesses , although this is not an optimal strategy when outgrows .on the other hand , order does matter in both other variants of mastermind , where the guess has to coincide with the secret code for the codebreaker to win .[ [ satisfiability - problems . ] ] * _ satisfiability problems . _* + + + + + + + + + + + + + + + + + + + + + + + + + + + + next we define the mastermind satisfiability problem for all three variants of mastermind .msp ( respectively , msp - black , msp - white ) .+ _ input : _ , where is a finite set of queries of the form , where and is a rating .+ _ output : _yes if there exists a code such that ( respectively , , ) for all .no otherwise .msp and msp - black are known to be -complete problems .we shall see in how msp - white is -complete , as well .further , we may want to restrict our attention to instances of mastermind with a fixed number of colors .thus , for every constant , let -msp be the restriction of msp to problem instances with exactly colors ( i.e. , whose input is of the form ) .similarly , we define -msp - black and -msp - white .[ [ counting - problems . ] ] * _ counting problems . _* + + + + + + + + + + + + + + + + + + + + + + all the above problems are clearly in , thus it makes sense to consider their _ counting versions _ , namely # msp , # msp - black , # -msp , and so on , which are all problems .basically , these problems ask for the size of the solution space after a number of guesses and ratings , i.e. 
, the number of codes that are coherent with all the guesses and ratings given as input .recall that reductions among problems that are based on oracles are called _ turing reductions _ and are denoted with , while the more specific reductions that map problem instances preserving the number of solutions are called _ parsimonious reductions _ , and are denoted with .each type of reduction naturally leads to a different notion of -completeness : for instance , # -sat is -complete under turing reductions , while # -sat is -complete under parsimonious reductions .problems that are -complete under parsimonious reductions are _ a fortiori _ -complete , while it is unknown whether all -complete problems are -complete , even under turing reductions .next we give a complete classification of the complexities of all the counting problems introduced in .[ thm : main ] 1 .[ thm : maina ] # msp , # msp - black and # msp - white are -complete under parsimonious reductions .[ thm : mainb ] # -msp and # -msp - black are -complete under parsimonious reductions for every .[ thm : mainc ] # -msp - white is solvable in deterministic polynomial time for every .( notice that # -msp and # -msp - black are trivially solvable in deterministic linear time . ) [ l1 ] for every , # -msp - white is solvable in deterministic polynomial time .in there are only possible codes to check against all the given queries , hence the whole process can be carried out in polynomial time , for any constant .[ l2 ] for every , + # -msp # -msp , # -msp - black # -msp - black . given the instance of # -msp ( respectively , # -msp - black ) , we convert it into , where is a sequence of consecutive s , and ( respectively , ) .the new query implies that the new color does not occur in the secret code , hence the number of solutions is preserved and the reduction is indeed parsimonious .[ l3 ] # -sat # msp - white . given a 3-cnf boolean formula with variables and clauses , we map it into an instance of msp - white . for each clause of , we add three fresh _ auxiliary variables _ , , . for each variable ( including auxiliary variables ) , we define two colors and , representing the two possible truth assignments for .we further add the _ mask color _ , thus getting colors in total .we let ( we may safely assume that ) , and we construct as follows . 1 . [ l3s1 ]add the query .[ l3s2 ] for each variable , add the query .3 . [ l3s3 ] for each clause ( where each literal may be positive or negative ) , add the query .[ l3s4 ] for each clause , further add the query . by , the mask color does not occur in the secret code ; by , each variable occurs in the secret code exactly once , either as a positive or a negative literal .moreover , by , at least one literal from each clause must appear in the secret code .depending on the exact number of literals from that appear in the code ( either one , two or three ) , the queries in and always force the values of the auxiliary variables , and .( notice that , without , there would be two choices for and , in case exactly two literals of appeared in the code . 
) as a consequence , the reduction is indeed parsimonious .[ l4 ] # -sat # -msp - black .we proceed along the lines of the proof of , with similar notation .we add the same auxiliary variables , , for each clause , and we construct the instance of -msp - black , where .this time we encode literals as _ positions _ in the code : for each variable , we allocate two specific positions and , so that ( respectively , ) in code if and only if variable is assigned the value true ( respectively , false ) . notice that , in contrast with , we are not using a mask color here . is constructed as follows . 1 . [ l4s1 ]add the query .[ l4s2 ] for each variable , add the query , where if and only if .[ l4s3 ] for each clause , add the query , where if and only if .( without loss of generality , we may assume that , and are occurrences of three mutually distinct variables . )[ l4s4 ] for each clause , further add the query , where if and only if . by , every solution must contain times and times , in some order .the semantics of , and is the same as that of the corresponding steps in , hence our construction yields the desired parsimonious reduction . indeed , observe that , if altering bits of a binary code increases its rating by , then exactly of those bits are set to the right value . in , altering bits of the code in increases its rating by , hence exactly one of those bits has the right value , which means that in any solution .similarly , in ( respectively , ) , ( respectively , ) and , hence exactly three ( respectively , two ) of the bits set to are correct ( cf . the ratings in ) . [ l5 ] # -sat # -msp .we replicate the construction given in the proof of , but we use the proper ratings : recall that the ratings of msp are pairs of scores ( black pegs and white pegs ) . the first score ( black pegs )has the same meaning as in msp - black , and we maintain these scores unchanged from the previous construction . by doing so ,we already get the desired set of solutions , hence we merely have to show how to fill out the remaining scores ( white pegs ) without losing solutions . referring to the proof of , we change the rating in from to , because every in the guess is either correct at the correct place , or redundant .the rating in is changed from to .indeed , let be any other variable ( distinct from ) , so that . then , exactly one between and is a misplaced , which can be switched with the misplaced from either or .all the other s in are either correct at the correct place , or redundant . similarly , the rating in ( respectively , ) changes from to ( respectively , ) . indeed , exactly two ( respectively , one ) s are in a wrong position in .if either or is wrong , then both and are wrong and of opposite colors , hence they can be switched .once again , all the other s in are either correct at the correct place , or redundant .all the claims easily follow from , , , , , and the -completeness of # -sat under parsimonious reductions . [ [ example . ] ] * _ example . _ * + + + + + + + + + + + + as an illustration of , we show how the boolean formula is translated into a set of queries for -msp . for visual convenience , s and s are represented as white and black circles , respectively . 
[ cols="<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,^",options="header " , ] the solutions to both problems are exactly ten , and are listed below .we remark that , in order to determine the values of the auxiliary variables , and when a solution to the boolean satisfiability problem is given , it is sufficient to check how many literals of are satisfied . is true if and only if exactly one literal is satisfied , is false if and only if all three literals are satisfied , and is true if and only if .we describe some applications of to several complexity problems . [ cor : npc ] -msp ,-msp - black and msp - white are -complete .parsimonious reductions among problems are _ a fortiori _ karp reductions among the corresponding problems .so far , we made no assumptions on the queries in our problem instances , which leads to a more general but somewhat fictitious theory . since in a real game of mastermind the codebreakers queries are guaranteed to have at least a solution ( i.e. , the secret code chosen by the codemaker ) , more often than not the codebreaker is in a position to exploit this information to his advantage .however , we show that such information does not make counting problems substantially easier .# -msp , # -msp - black and # msp - white , with the promise that the number of solutions is at least , are all -complete problems under turing reductions , for every .let # match be the problem of counting the matchings of any size in a given graph , which is known to be -complete under turing reductions .let be the problem # -msp ( respectively , # -msp - black , # msp - white ) restricted to instances with at least solutions , and let us show that # match . given a graph , if it has fewer than edges , we can count all the matchings in linear time .otherwise , there must be at least matchings ( each edge yields at least the matching ) , so we parsimoniously map into an instance of via , we call an oracle for , and output its answer . the following result , for , settles an issue concerning the determination of msp instances with unique solution , which was left unsolved in .we actually prove more : even if a solution is given as input , it is hard to determine if it is unique .therefore , not only solving mastermind puzzles is hard , but _ designing _puzzles around a solution is also hard .[ cor : zhang ] for every , the problem of deciding if an instance of -msp , -msp - black or msp - white has strictly more than solutions is -complete , even if solutions are explicitly given as input .not only do the parsimonious reductions given in preserve the number of solutions , but they actually yield an explicit polynomial - time computable transformation of solutions ( cf . 
the remark at the end of ) .hence , the involved -complete problems are also -complete as function problems , and their decision - counterparts are accordingly -complete .remarkably , even if the codebreaker somehow knows that his previous queries are sufficient to uniquely determine the solution , he still has a hard time finding it .[ cor : unique ] the promise problem of finding the solution to an instance of -msp , -msp - black or msp - white , when the solution itself is known to be unique , is -hard under randomized turing reductions .it is known that sat usat , where usat is the promise version of sat whose input formulas are known to have either zero or one satisfying assignments .let be the composition of this reduction with the parsimonious one from boolean formulas to instances of -msp ( respectively , -msp - black , msp - white ) given by .our turing reduction proceeds as follows : given a boolean formula , compute and submit it to an oracle that finds a correct solution of -msp ( respectively , -msp - black , msp - white ) when it is unique .then output yes if and only if is indeed a solution of , which can be checked in polynomial time .in we showed that # -msp - white is solvable in polynomial time when is a constant , while in we proved that it becomes -complete when . by making the code polynomially longer and filling the extra space with a fresh color , we can easily prove that also # )\right)$]-msp - white is -complete , for every constant .an obvious question arises : what is the lowest order of growth of such that # -msp - white is -complete ?we observed that # msp is a subproblem of several heuristics aimed at optimally guessing the secret code , but is # msp really inherent in the game ?perhaps the hardness of mastermind is not captured by # msp or even msp , and there are cleverer , yet unknown , ways to play .+ _ input : _ , where is an instance of msp , and .+ _ output : _yes if the codebreaker has a strategy to guess the secret code in at most attempts , using information from .no otherwise . to make the game more fun to play for the codemaker ,whose role is otherwise too passive , we could let him change the secret code at every turn , coherently with the ratings of the previous guesses of the codebreaker . as a result, nothing changes for the codebreaker , except that he may perceive to be quite unlucky with his guesses , but the codemaker s game becomes rather interesting : by , even deciding if he has a non - trivial move is -complete , but he can potentially force the codebreaker to always work in the worst - case scenario , and make him pay for his mistakes .we call this variation _ adaptive mastermind_.
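as a small reference point for the counting problems studied above , the following brute - force python sketch ( ours , practical only for tiny instances ; the query set at the bottom is a made - up toy example ) implements the single - count black peg rating , the single - count white peg rating and the standard rating , and counts the codes consistent with a set of queries , i.e. computes # msp by exhaustive enumeration :

from itertools import product
from collections import Counter

def black(x, y):
    """Single-count black peg rating: positions where the two codes agree."""
    return sum(a == b for a, b in zip(x, y))

def white(x, y):
    """Single-count white peg rating: best agreement over reorderings,
    i.e. the total colour overlap between the two codes."""
    cx, cy = Counter(x), Counter(y)
    return sum(min(cx[c], cy[c]) for c in cx)

def standard(x, y):
    """Standard Mastermind rating as a (black pegs, white pegs) pair."""
    b = black(x, y)
    return b, white(x, y) - b

def count_solutions(n, c, queries, rating):
    """Brute-force #MSP: codes in {0,...,c-1}^n consistent with every query."""
    return sum(all(rating(code, guess) == r for guess, r in queries)
               for code in product(range(c), repeat=n))

# toy 2-colour instance with n = 4, single-count black peg variant
queries = [((0, 0, 1, 1), 2), ((0, 1, 0, 1), 2)]
print(count_solutions(4, 2, queries, rating=black))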
|
mastermind is a popular board game released in 1971 , where a codemaker chooses a secret pattern of colored pegs , and a codebreaker has to guess it in several trials . after each attempt , the codebreaker gets a response from the codemaker containing some information on the number of correctly guessed pegs . the search space is thus reduced at each turn , and the game continues until the codebreaker is able to find the correct code , or runs out of trials . in this paper we study several variations of # msp , the problem of computing the size of the search space resulting from a given ( possibly fictitious ) sequence of guesses and responses . our main contribution is a proof of the -completeness of # msp under parsimonious reductions , which settles an open problem posed by stuckman and zhang in 2005 , concerning the complexity of deciding if the secret code is uniquely determined by the previous guesses and responses . similarly , # msp stays -complete under turing reductions even with the promise that the search space has at least elements , for any constant . ( in a regular game of mastermind , . ) all our hardness results hold even in the most restrictive setting , in which there are only two available peg colors , and also if the codemaker s responses contain less information , for instance like in the so - called single - count ( black peg ) mastermind variation .
|
online video repositories like youtube , dailymotion etc have been experiencing an explosion of user - generated videos .such videos are often shot / recorded from the television by users , and uploaded onto these sites .they have very little metadata like dialogue scripts , or a textual summary / representation of the content .when an user searches these repositories by keywords , ( s)he is suggested hundreds of videos , out of which ( s)he may choose a small number for viewing . this has given rise to the topic of _ video summarization _ , which aims to provide the user a short but comprehensive _ summary _ of the video .however , the current state - of - the - art mostly provides a few keyframes as summary , which may not have much semantic significance .the high - level semantic information of videos that is most important to users is carried by _entities_- such as persons or other objects . with the recent progress in object detection in single images and videos , it is now possible to have a high - level representation of videos in terms of such entities .one effective way of summarization is to have a list of entities that appear frequently in a video .further , an user may want to watch only a part of a video , for example wherever a particular person ( or set of persons ) appears , which motivate the tasks of entity discovery and entity - driven summarization of videos .the problem of _ automated discovery of persons from videos along with all their occurrences _ has attracted a lot of interest in video analytics .existing attempts try to leverage meta - data such as scripts and hence do not apply to videos available on the wild , such as tv - series episodes uploaded by viewers on youtube ( which have no such meta - data ) . in this paper, we pose this problem as _ tracklet clustering _, as done in .our goal is to design algorithms for tracklet clustering which can work on long videos .tracklets are formed by detections of an entity ( say a person ) from a short contiguous sequence of 10 - 20 video frames .they have complex spatio - temporal properties .we should be able to handle any type of entity , not just person . given a video in the wild it is unlikely that the number of entities will be known , so the method should automatically adapt to unknown number of entities . to this endwe advocate a _bayesian non - parametric _ clustering approach to tracklet clustering and study its effectiveness in automated discovery of entities with all their occurrences in long videos .the main challenges are in modeling the spatio - temporal properties . to the best of our knowledgethis problem has not been studied either in machine learning or in computer vision community . to explain the spatio - temporal properties we introduce some definitions .a _ track _ is formed by detecting entities ( like people s faces ) in each video frame , and associating detections across a contiguous sequence of frames ( typically a few hundreds in a tv series ) based on _ appearance _ and _ spatio - temporal _ locality .each track corresponds to a particular entity , like a person in a tv series .forming long tracks is often difficult , especially if there are multiple detections per frame .this can be solved hierarchically , by associating the detections in a short window of frames ( typically 10 - 20 ) to form _ tracklets _ and then linking the tracklets from successive windows to form tracks .the _ short - range association of tracklets _ to form tracks is known as _tracking_. 
but in a tv series video , the same person may appear in different ( non - contiguous ) parts of the video , and so we need to associate tracklets on a _ long - range _ basis also ( see figure [ fig : asso ] ) .moreover the task is complicated by lots of _ false detections _ which act as spoilers .finally , the task becomes more difficult on streaming videos , where only one pass is possible over the sequence .a major cue for this task comes from a very fundamental property of videos : _temporal coherence_(tc ) .this property manifests itself at detection - level as well as tracklet - level ; at feature - level as well as at semantic - level . at detection - levelthis property implies that the visual features of the detections ( eg . appearance of an entity ) are almost unchanged across a tracklet ( see fig .2 ) . at tracklet - levelit implies that _ spatio - temporally close ( but non - overlapping ) tracklets are likely to belong to the same entity _ ( fig .[ fig : tc ] ) .additionally , _ overlapping tracklets ( that span the same frames ) , can not belong to the same entity_. a tracklet can be easily represented as all the associated detections are very similar ( due to detection - level tc ) .such representation is not easy for a long track where the appearances of the detections may gradually change .* contribution * broadly , this paper has two major contributions : it presents the first bayesian nonparametric models for tc in videos , and also the first entity - driven approach to video modelling . to these ends ,we explore tracklet clustering , an active area of research in computer vision , and advocate a bayesian non - parametric(bnp ) approach for it .we apply it to an important open problem : discovering entities ( like persons ) and all their occurrences from long videos , in absence of any meta - data , e.g. scripts .we use a simple and generic representation leading to representing a video by a matrix , whose columns represent individual tracklets ( unlike other works which represent an individual detection by a matrix column , and then try to encode the tracklet membership information ) .we propose temporally coherent - chinese restaurant process(tc - crp ) , a bnp prior for enforcing coherence on the tracklets .our method yields a superior clustering of tracklets over several baselines especially on long videos . as an advantage it does not need the number of clusters in advance .it is also able to automatically filter out false detections , and perform the same task on _ streaming videos _ , which are impossible for existing methods of tracklet clustering .we extend tc - crp to the temporally coherent chinese restaurant franchise ( tc - crf ) , that jointly models short video segments and further improves the results .we show that the proposed methods can be applied to entity - driven video summarization , by selecting a few representative segments of the video in terms of the discovered entities .in this section , we elaborate on our task of tracklet clustering for entity discovery in videos . in this work , given a video , we fix beforehand the _ type of entity _( eg . person / face , cars , planes , trees ) we are interested in , and choose an appropriate detector like , which is run on every frame of the input video . the detections in successive frames are then linked based on spatial locality , to obtain tracklets . 
at most detections from frames are linked like this .the tracklets of length less than are discarded , hence all tracklets consist of detections .we restrict the length of tracklets so that the appearance of the detections remain almost unchanged ( due to detection - level tc ) , which facilitates tracklet representation . at work with the individual detections .we represent a detection by a vector of dimension .this can be done by downscaling a rectangular detection to square and then reshaping it to a -dimensional vector of pixel intensity values ( or some other features if deemed appropriate ) .each tracklet is a collection of detections .let the tracklet be represented by .so finally we have vectors ( : number of tracklets ) . the tracklets can be sorted topologically based on their starting and ending frame indices .each tracklet has a _ predecessor tracklet _ and a _ successor tracklet _ .also each tracklet has a conflicting set of tracklets which span frame(s ) that overlap with the frames spanned by .each detection ( and tracklet ) is associated with an entity , which are unknown in number , but presumably much less than the number of detections ( and tracklets ) .these entities also are represented by vectors , say .each tracklet is associated with an entity indexed by , i.e. .let each video be represented as a sequence of -dimensional vectors along with the set .we aim to learn the vectors and the assignment variables .in addition , we have _ constraints _ arising out of _ temporal coherence _ and other properties of videos .each tracklet is likely to be associated with the entity that its predecessor or successor is associated with , except at shot / scene changepoints .moreover , a tracklet can not share an entity with its conflicting tracklets , as the same entity can not occur twice in the same frame .this notion is considered in relevant literature .mathematically , the constraints are : _ learning a -vector is equivalent to discovering an entity , and its associated tracklets are discovered by learning the set ._ these constraints give the task a flavour of non - parametric constrained clustering with must - link and dont - link constraints . finally , the video frames can be grouped into short segments , based on the starting frame numbers of the tracklets .consider two successive tracklets and , with starting frames and .if the gap between frames and is larger than some threshold , then we consider a new temporal segment of the video starting from , and add to a list of _ changepoints _ ( cp ) .the beginning of a new temporal segment does not necessarily mean a scene change , the large gap between frames and may be caused by failure of detection or tracklet creation .the segment index of each tracklet is denoted by .* person discovery in videos * is a task which has recently received attention in computer vision .cast listing is aimed to choose a representative subset of the face detections or face tracks in a movie / tv series episode .another task is to label _ all the detections _ in a video , but this requires movie scripts or labelled training videos having the same characters .scene segmentation and person discovery are done simultaneously using a generative model in , but once again with the help of scripts .an unsupervised version of this task is considered in , which performs * face clustering * in presence of spatio - temporal constraints as already discussed . for this purposethey use a markov random field , and encode the constraints as clique potentials . 
another recent approach to face clusteringis which incorporates some spatio - temporal constraints into subspace clustering .* tracklet association * _ tracking _ is a core topic in computer vision , in which a target object is located in each frame based on appearance similarity and spatio- temporal locality .a more advanced task is _ multi - target tracking _ , in which several targets are present per frame .a tracking paradigm that is particularly helpful in multi - target tracking is _ tracking by detection _ , where object - specific detectors like are run per frame ( or on a subset of frames ) , and the detection responses are linked to form tracks . from this came the concept of _ tracklet _ which attempts to do the linking hierarchically .this requires pairwise similarity measures between tracklets .multi - target tracking via tracklets is usually cast as bipartite matching , which is solved using hungarian algorithm .tracklet association and face clustering are done simultaneously in using hmrf .the main difference of face / tracklet clustering and person discovery is that , the number of clusters to be formed is not known in the latter .independent of videos , * constrained clustering * is itself a field of research .constraints are usually _ must - link and dont - link _ , which specify pairs which should be assigned the same cluster , or must not be assigned the same cluster .a detailed survey is found in .the constraints can be hard or soft / probabilistic .constrained spectral clustering has also been studied recently , which allow constrained clustering of datapoints based on arbitrary similarity measures .all the above methods suffer from a major defect- the number of clusters needs to be known beforehand . a way to avoid this is provided by * dirichlet process * , which is able to identify the number of clusters from the data .it is a mixture model with infinite number of mixture components , and each datapoint is assigned to one component .a limitation of dp is that it is exchangeable , and can not capture sequential structure in the data . 
for this purpose ,a markovian variation was proposed : hierarchical dirichlet process- hidden markov model ( hdp - hmm ) .a variant of this is the _ sticky _ hdp - hmm ( shdp - hmm ) , which was proposed for temporal coherence in speech data for the task of speaker diarization , based on the observation that successive datapoints are likely to be from the same speaker and so should be assigned to the same component .another bayesian nonparametric approach for sequential data is the distance - dependent chinese restaurant process ( ddcrp ) , which defines distances between every pair of datapoints , and each point is linked to another with probability proportional to such distances .a bnp model for subset selection is indian buffet process ( ibp ) , a generative process for a sequence of binary vectors .this has been used for selecting a sparse subset of mixture components ( topics ) in focussed topic modelling as the compound dirichlet mixture model .finally , * video summarization * has been studied for a few years in the computer vision community .the aim is to provide a short but comprehensive summary of videos .this summary is usually in the form of a few _ keyframes _ , and sometimes as a short segment of the video around these keyframes .a recent example is which models a video as a matrix , each frame as a column , and each keyframe as a _ basis vector _ , in terms of which the other columns are expressed .a more recent work considers a kernel matrix to encode similarities between pairs of frames , uses it for _ temporal segmentation _ of the video , assigns an importance label to each of these segments using an svm ( trained from segmented and labelled videos ) , and creates the summary with the important segments. however , such summaries are in terms of low - level visual features , rather than high - level semantic features which humans use . an attempt to bridge this gapwas made in , which defined movie scenes and summaries in terms of characters .this work used face detections along with _ movie scripts _ for semantic segmentation into shots and scenes , which were used for summarization .we now explain our bayesian nonparametric model tc - crp to handle the spatio - temporal constraints ( eq [ eq : cons ] ) for tracklet clustering , and describe a generative process for videos based on tracklets . in section [ sec : def ] , we discussed the vectors each of which represent an entity . in this paperwe consider a bayesian approach with gaussian mixture components to account for the variations in visual features of the detections , say face detections of a person .as already mentioned , number of components is not known beforehand , and must be discovered from the data .that is why we consider nonparametric bayesian modelling .also , as we shall see , this route allows us to elegantly model the temporal coherence constraints . in this approach , we shall represent entities as mixture components and tracklets as draws from such mixture components . dirichlet process has become an important clustering tool in recent years .its greatest strength is that unlike k - means , it is able to discover the correct number of clusters . 
dirichlet process is a distribution over distributions over a measurable space .a discrete distribution is said to be distributed as over space if for every finite partition of as , the quantity is distributed as , where is a scalar called _ concentration parameter _ , and is a distribution over called base distribution .a distribution is a discrete distribution , with infinite support set , which are draws from , called _we consider to be a multivariate gaussian with parameters and .each atom corresponds to an entity ( eg . a person ) .the generative process for the set is then as follows : \ ] ] here is an atom . is a tracklet representation corresponding to the entity , and its slight variation from ( due to effects like lighting and pose variation ) is modelled using . using the constructive definition of dirichlet process , called the stick - breaking process , the above process can also be written equivalently as \end{aligned}\ ] ] here is a distribution over integers , and is an integer that indexes the component corresponding to the tracklet .our aim is to discover the values , which will give us the entities , and also to find the values , which define a clustering of the tracklets . for this purposewe use collapsed gibbs sampling , where we integrate out the in equation [ eq : dp1 ] or in equation [ eq : dp2 ] .the gibbs sampling equations and are given in .for , here , is the data likelihood term .we focus on the part to model tc . in the generative process ( equation [ eq : dp2 ] )all the are drawn iid conditioned on .such models are called _completely exchangeable_. this is , however , often not a good idea for sequential data such as videos . in markovian models like sticky hdp - hmm, is drawn conditioned on and . in case of dp , the independence among -sis lost on integrating out .after integration the generative process of eq [ eq : dp2 ] can be redefined as the predictive distribution for for dirichlet process is known as chinese restaurant process ( crp ) .it is defined as where is the number of times the value is taken in the set .we now modify crp to handle the spatio - temporal cues ( eq [ eq : cons ] ) mentioned in the previous section . in the generative process , we define with respect to , similar to the block exchangeable mixture model as defined in . here , with each we associate a _ binary change variable _ . if then , i.e the tracklet identity is maintained .but if , a new value of is sampled . note that every tracklet has a temporal predecessor . however , if this predecessor is spatio - temporally close , then it is more likely to have the same label .so , the probability distribution of change variable should depend on this closeness . in tc - crp ,we use two values ( and ) for the bernoulli parameter for the change variables .we put a threshold on the spatio - temporal distance between and , and choose a bernoulli parameter for based on whether this threshold is exceeded or not .note that maintaining tracklet identity by setting is equivalent to _tracking_. 
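as a rough illustration of how the change variables interact with the crp ( a simplified sketch of ours , not the authors implementation ; it ignores the overlap constraint of eq [ eq : cons ] and the false - detection component introduced below , and the concentration and bernoulli parameters are made - up values ) , the following python fragment draws a label sequence from such a prior , where each tracklet keeps its predecessor s label with a probability that depends on spatio - temporal closeness , and otherwise draws a label from a crp restricted to the changepoints :

import numpy as np

def sample_tc_crp_labels(is_close, alpha=1.0, p_stay_close=0.9, p_stay_far=0.1, rng=None):
    # is_close[i]: whether tracklet i is spatio-temporally close to its predecessor.
    # p_stay_close / p_stay_far are illustrative stand-ins for the two Bernoulli
    # parameters of the change variables; alpha is the CRP concentration.
    if rng is None:
        rng = np.random.default_rng()
    labels, counts, next_new = [], {}, 0
    for i, close in enumerate(is_close):
        stay = p_stay_close if close else p_stay_far
        if i > 0 and rng.random() < stay:
            labels.append(labels[-1])          # change variable = 0: keep tracklet identity
            continue
        # change variable = 1: CRP draw over counts collected at changepoints only
        options = list(counts) + [next_new]
        weights = np.array([counts[k] for k in counts] + [alpha], dtype=float)
        choice = options[rng.choice(len(options), p=weights / weights.sum())]
        labels.append(choice)
        counts[choice] = counts.get(choice, 0) + 1
        if choice == next_new:
            next_new += 1
    return labels

# a toy sequence: two runs of spatio-temporally contiguous tracklets
print(sample_tc_crp_labels([False] + [True] * 5 + [False] + [True] * 3))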
several datapoints ( tracklets ) arise due to false detections .we need a way to model these .since these are very different from the base mean , we consider a separate component with mean and a very large covariance , which can account for such variations .the predictive probability function(ppf ) for tc - crp is defined as follows : where is the set of values of for the set of tracklets that overlap with , and is the number of points ( ) where and .the first rule ensures that two overlapping tracklets can not have same value of .the second rule accounts for false tracklets .the third and fourth rules define a crp restricted to the changepoints where .the final tracklet generative process is as follows : draw where is the ppf for tc - crp , defined in eq [ eq : ppf1 ] .inference in tc - crp can be performed easily through gibbs sampling .we need to infer , and . as and coupled , we sample them in a block for each $ ] as done in .if and , then we must have and . if and , then , and is sampled from . in case and , then with probability proportional to . if then if , and otherwise .if then is governed by tc - crp . for sampling , we make use of the conjugate prior formula of gaussians , to obtain the gaussian posterior with mean where , and .finally , we update the hyperparameters and after every iteration , based on the learned values of , using maximum likelihood estimate ., can also be updated , but in our implementation we set them to and respectively , based on empirical evaluation on one held - out video .the threshold was also similarly fixed .in the previous section , we considered the entire video as a single block , as the tccrp ppf for any tracklet involves -values from all the previously seen tracklets throughout the video .however , this need not be very accurate , as in a particular part of the video some mixture components ( entities ) may be more common than anywhere else , and for any , may depend more heavily on the -values in temporally close tracklets than the ones far away .this is because , a tv - series video consists of _ temporal segments _ like scenes and shots , each characterized by a subset of persons ( encoded by binary vector ) .the tracklets attached to a segment can not be associated with persons not listed by . to capture this notion we propose a new model : temporally coherent chinese restaurant franchise ( tc - crf ) to model a video temporally segmented by ( see section [ sec : def ] ) .chinese restaurant process is the ppf associated with dirichlet process .hierarchical dirichlet process ( hdp ) aimed at modelling _ grouped data sharing same mixture components_. it assumes a group - specific distribution for every group .the generative process is : ; z_i \sim \pi_{s(i ) } , y_i \sim \mathcal{n}(\phi_{z_i},\sigma_1 ) \forall i \in [ 1,n]\end{aligned}\ ] ] where datapoint belongs to the group .the ppf corresponding to this process is obtained by marginalizing the distributions and , and is called the _ chinese restaurant franchise _ process , elaborated in . in our case , we can modify this ppf once again to incorporate tc , analogously to tc - crp , to have temporally coherent chinese restaurant franchise ( tc - crf ) process . in our case, a group corresponds to a temporal segment , and as already mentioned , we want a binary vector , which indicates the components that are active in segment .but hdp assumes that all the components are shared by all the groups , i.e. 
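for concreteness , the component update used inside such a gibbs sweep can be sketched as the standard conjugate gaussian update ; the snippet below assumes spherical covariances for both the base distribution and the observation noise , which is our simplification and need not match the paper s exact parameterisation , and the numbers are purely illustrative :

import numpy as np

def component_mean_update(Y_k, mu0, var0, var1):
    # Y_k: (n_k, d) tracklet vectors currently assigned to component k.
    # var0: prior (base) variance per dimension, var1: observation noise variance.
    n_k = Y_k.shape[0]
    post_prec = 1.0 / var0 + n_k / var1                    # posterior precision per dimension
    post_mean = (mu0 / var0 + Y_k.sum(axis=0) / var1) / post_prec
    return post_mean, 1.0 / post_prec

rng = np.random.default_rng(0)
Y_k = rng.normal(loc=2.0, scale=0.5, size=(40, 4))          # toy assigned tracklet vectors
mean, var = component_mean_update(Y_k, mu0=np.zeros(4), var0=10.0, var1=0.25)
print(mean, var)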
we can instead try _sparse modelling_ by incorporating the binary vectors $b_s$ into the model, as done for focussed topic models. for this purpose we put an ibp prior on the $b_s$ variables: a previously used component $k$ is activated in a new segment with probability proportional to $m_k$, the number of times component $k$ has been sampled in all segments before, and a few entirely new components may also be activated. the _tc-crf_ ppf (eq [eq:ppf3]) is then defined per segment, where $s(i)$ denotes the index of the temporal segment to which datapoint $i$ belongs. based on tc-crf, the generative process of a video, in terms of temporal segments and tracklets, is as follows: for each tracklet $i$, draw $c_i$ and then $z_i$ from the tc-crf ppf for its segment $s(i)$, and draw $y_i \sim \mathcal{N}(\phi_{z_i}, \Sigma_1)$.

inference in tc-crf can also be performed through gibbs sampling. we need to infer the variables $z$, $c$, $b$ and the components $\phi$. in segment $s$, for a datapoint $i$ where $c_i = 1$, a component $k$ may be sampled with probability proportional to $n_{sk}$, the number of times $k$ has been sampled within the same segment. if $k$ has never been sampled within the segment but has been sampled in other segments, the probability is proportional to a term involving $m_k$, the number of segments where $k$ has been sampled (corresponding to $b_{sk}$ according to the ibp), scaled by the crp parameter $\alpha$ for sampling a component that is new to the segment. finally, a completely new component may be sampled with probability proportional to the product of $\alpha$ and the ibp parameter for entirely new components.
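to illustrate how these pieces combine, here is a small python sketch of the unnormalised component weights used at a changepoint in segment $s$. it is one plausible reading of the rules just described, with hypothetical names (seg_counts, global_seg_counts, gamma); the exact functional form in eq [eq:ppf3] may differ.

```python
import numpy as np

def tc_crf_weights(seg_counts, global_seg_counts, alpha, gamma):
    """Unnormalised weights for choosing a component at a changepoint in one segment
    (illustrative reading of the TC-CRF PPF; not the paper's exact formula).

    seg_counts        : dict label -> times the label was sampled in the current segment
    global_seg_counts : dict label -> number of segments in which the label was ever sampled
    alpha             : CRP parameter for a component that is new to this segment
    gamma             : parameter controlling entirely new components
    """
    labels = sorted(set(seg_counts) | set(global_seg_counts))
    total_m = sum(global_seg_counts.values()) + gamma
    weights = {}
    for k in labels:
        if seg_counts.get(k, 0) > 0:
            weights[k] = float(seg_counts[k])                              # reused within the segment
        else:
            weights[k] = alpha * global_seg_counts.get(k, 0) / total_m     # borrowed from other segments
    weights["new"] = alpha * gamma / total_m                               # brand-new component
    return weights
```

normalising these weights and multiplying in the data likelihood of the tracklet gives the per-component probabilities used by the gibbs sampler.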
tc-crp draws inspiration from several recently proposed bayesian nonparametric models, but is different from each of them. it has three main characteristics: 1) changepoint variables, 2) temporal coherence and spatio-temporal cues, and 3) a separate component for non-face tracklets. the concept of a changepoint variable was used in the block exchangeable mixture model (bemm), where it was shown to significantly speed up inference; but in bemm the bernoulli parameter of the changepoint variable depends on the previous component assignment, while in tc-crp it depends on the spatio-temporal closeness of a tracklet to its predecessor. regarding spatio-temporal cues, the concept of providing additional weightage to self-transition was introduced in the sticky hdp-hmm, but that model does not consider changepoint variables. moreover, it uses a transition distribution for each mixture component, which increases the model complexity; like bemm we avoid this step, and hence our ppf (eq [eq:ppf1]) does not involve per-component transition distributions. ddcrp defines distances between every pair of datapoints, and associates a new datapoint with one of the previous ones based on this distance; here we consider the distance between a point and its predecessor only. on the other hand, ddcrp is unrelated to the original dp-based crp, as its ppf does not consider $n_k$: the number of previous datapoints assigned to component $k$. hence our method is significantly different from ddcrp. finally, the first two rules of the tc-crp ppf are novel. tc-crf is inspired by hdp; however, once again the three differences mentioned above hold good.

in addition, the ppf of tc-crf itself is different from the chinese restaurant franchise described in prior work. the original crf is defined in terms of two concepts: tables and dishes, where tables are local to individual restaurants (data groups) while dishes (mixture components) are global, shared across restaurants (groups). also, individual datapoints are assigned mixture components indirectly, through an intermediate assignment to tables. the concept of a table, which arises from the marginalization of the group-specific mixture distributions, results in complex book-keeping, and the ppf for datapoints is difficult to define. here we avoid this problem by skipping tables and directly assigning mixture components to datapoints in eq [eq:ppf3]. inspiration for tc-crf is also drawn from the ibp-compound dirichlet process, but the inference process of that model is complex, since the convolution of the dp-distributed mixture distribution and the sparse binary vector is difficult to marginalize by integration. we avoid this step by directly defining the ppf (eq [eq:ppf3]) instead of taking the dp route. this approach of directly defining the ppf was taken for ddcrp also.

one particular entity discovery task that has recently received a lot of attention is person discovery from movies / tv series. we carried out extensive experiments for person discovery on tv series videos of various lengths. we collected three episodes of the big bang theory (season 1); each episode is 20-22 minutes long and has 7-8 characters (occurring in at least 100 frames). we also collected 6 episodes of the famous indian tv series "the mahabharata" from youtube; each episode of this series is 40-45 minutes long and has 15-25 prominent characters. so here, each character is an entity. these videos are much longer than those studied in similar prior works, and have more characters. also, these videos are challenging because of the somewhat low quality and motion blur. transcripts or labeled training sets are unavailable for all these videos.
as is usual in the literature, we represent the persons by their faces. we obtained face detections by running the opencv face detector on each frame separately. as described in section [sec:def], the face detections were all converted to grayscale, scaled down to a fixed resolution, and reshaped to form fixed-length vectors. we considered tracklets of a minimum length and discarded shorter ones. the dataset details are given in table 1 (details of datasets).

in this paper, we considered an entity-driven approach to video modelling. we represented videos as sequences of tracklets, with each tracklet associated with an entity. we defined entity discovery as the task of discovering the repeatedly appearing entities in the videos, along with all their appearances, and cast this as tracklet clustering. we considered a bayesian nonparametric approach to tracklet clustering which can automatically discover the number of clusters to be formed. we leveraged the temporal coherence property of videos to improve the clustering with our first model, tc-crp. the second model, tc-crf, was a natural extension of tc-crp that jointly models short temporal segments within a video and further improves entity discovery. these methods were empirically shown to have several additional abilities, like performing online entity discovery efficiently and detecting false tracklets. finally, we used these results for semantic video summarization in terms of the discovered entities.
a video can be represented as a sequence of tracklets, each spanning 10-20 frames and associated with one entity (e.g. a person). the task of _entity discovery_ in videos can be naturally posed as tracklet clustering. we approach this task by leveraging _temporal coherence_ (tc): the fundamental property of videos that each tracklet is likely to be associated with the same entity as its temporal neighbors. our major contributions are the first bayesian nonparametric models of tc at the tracklet level. we extend the chinese restaurant process (crp) to propose tc-crp, and further to the temporally coherent chinese restaurant franchise (tc-crf) to jointly model short temporal segments. on the task of discovering persons in tv serial videos without meta-data like scripts, these methods show considerable improvement in cluster purity and person coverage compared to state-of-the-art approaches to tracklet clustering. we represent entities with mixture components, and tracklets with vectors of very generic features, which can work for any type of entity (not necessarily persons). the proposed methods can perform online tracklet clustering on streaming videos with little performance deterioration, unlike existing approaches, and can automatically reject tracklets resulting from false detections. finally, we discuss entity-driven video summarization, where some temporal segments of the video are selected automatically based on the discovered entities.