the fact that the capacitance is a geometrical factor is an important property in courses on electricity and magnetism. derivations of this property are usually based on the principle of superposition and the green function formalismuehara , lorenzo .nevertheless , such derivations are not convenient for calculations .alternative techniques to calculate the capacitance coefficients based on the green function formalism and other methods have been developed . in this paperwe give a simple proof of the geometrical nature of the capacitance coefficients based on laplace s equation .our approach permits to demonstrate many properties of the capacitance matrix .the method is illustrated by reproducing some well known results , and applications in complex situations are suggested .we consider a system of internal conductors and an external conductor that encloses them .the potential on each internal conductor is denoted by , .the surface of the external conductor is denoted by , and its potential is denoted by ( see fig . [fig : ncond ] ) .one reason to introduce the external conductor is that it provides a closed boundary to ensure the uniqueness of the solutions .in addition , many capacitors contain an enclosing conductor as for the case of spherical concentric shells .as we shall see , the case in which there is no external conductor can be obtained in the appropriate limit .internal conductors with conductor enclosing them .the normals with point outward with respect to the conductors and inward with respect to the volume ( defined by the region in white ) . the surfaces with are slightly bigger than the ones corresponding to the conductors .in contrast , the surface is slightly smaller than the surface of the external conductor.,width=245 ] the surface charge density on an electrostatic conductor is given by is an unit vector normal to the surface pointing outward with respect to the conductor ( see fig .[ fig : ncond ] ) ; and denote the electrostatic field and potential respectively .the charge on each conductor is given by surface encloses the conductor and is arbitrarily near and locally parallel to the real surface of the conductor ( see fig .fig : ncond). we define the total surface as volume defined by the surface is the one delimited by the external surface and the internal surfaces .the potential in such a volume must satisfy laplace s equation with the boundary conditions of the linearity of laplace s equation , the solution for can be parameterized as the are functions that satisfy laplace s equation in the volume with the boundary conditions solutions for ensure that is the solution of laplace s equation with the boundary conditions in eq .( [ cond front fi ] ) .the uniqueness theorem also ensures that the solution for each is unique ( as is the solution for ) .the boundary conditions ( [ cond fi ] ) indicate that the functions depend only on the geometry .if we apply the gradient operator in eq .( [ fi = fi fi ] ) and substitute the result into eq .( [ qigen ] ) , we obtain [ q_i_sistema_cargas ] which shows that the factors are exclusively geometric. the symmetry of the associated matrix can be obtained by purely geometrical arguments .we start from the definition of in eq .( q_i_sistema_cargas.b ) and find where we have used the fact that on the surface and zero on the other surfaces . 
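The display equations of this derivation were lost in extraction. For readability, the standard relations being invoked here (with $\epsilon_0$ the vacuum permittivity, $V_j$ and $f_j$ the conductor potentials and the geometry-only harmonic functions defined above, $\hat{\mathbf{n}}$ the conductor-outward normal, and $j$ running over all conductors including the external one; this notation is assumed, not taken from the original) read

\[
\phi(\mathbf{r}) = \sum_j V_j\, f_j(\mathbf{r}), \qquad
\nabla^2 f_j = 0 \ \text{in } V, \qquad f_j\big|_{S_k} = \delta_{jk},
\]
\[
Q_i = -\epsilon_0 \oint_{S_i} \nabla\phi\cdot\hat{\mathbf{n}}\, dS = \sum_j C_{ij} V_j,
\qquad
C_{ij} = -\epsilon_0 \oint_{S_i} \nabla f_j\cdot\hat{\mathbf{n}}\, dS
       = -\epsilon_0 \oint_{S} f_i\, \nabla f_j\cdot\hat{\mathbf{n}}\, dS .
\]

The last equality uses $f_i = 1$ on $S_i$ and $f_i = 0$ on the other surfaces, which is exactly the step just mentioned; converting it with Gauss's theorem, as is done next, gives the manifestly symmetric volume form $C_{ij} = \epsilon_0 \int_V \nabla f_i\cdot\nabla f_j\, dV$.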
from gausss theorem we obtain \,dv.\end{aligned}\ ] ] because in , it follows that equation implies that is symmetric, that is , for certain configuration of conductors , consider two sets of charges and potentials and . from eqs .( [ q_i_sistema_cargas ] ) and ( [ prop2 ] ) we have that which implies that equation ( [ recip ] ) is known as the reciprocity theorem. when one or more of the internal conductors has an empty cavity , is well known that there is no charge induced on the surface of the cavity berkeley , jack ( let us call it ) . consequently , although is part of the surface of the conductor , such a surface can be excluded in the integration in eq .( [ qigen ] ) .in addition , we can check by uniqueness that in the volume of the cavity so that in such a volume , and hence it can be excluded from the volume integral ( [ cij 2 ] ) . in conclusion neither nor contribute in this case .the situation is different if there is another conductor in the cavity . in this case, the surface of the cavity contributes in eq .( [ qigen ] ) .similarly the volume between the cavity and the embedded conductor contributes in the volume integral ( [ cij 2 ] ) .the arguments can be extended for successive embedding of conductors in cavities as shown by fig .[ fig : ncond2 ] or for conductors with several cavities. corresponds to the region in white .the regions corresponding to empty cavities ( and their associated surfaces and volumes ) can be excluded without affecting the calculations . in this picture cavitya is empty and its surface and volume need not be considered for calculations.,width=245 ]we define a function see from eq .( [ cond fi ] ) that throughout the surface , we see by uniqueness that in the volume from which we find that in addition , by summing over in eq .( [ q_i_sistema_cargas.b ] ) and taking into account eq .( [ propf ] ) , we find that symmetry of the elements leads also to equations and imply that the sum of the elements over any row or column of the matrix is zero . appendix [ ap : pruebas ] gives some proofs of consistency for these important properties .taking into account the symmetrical nature of the matrix with dimensions and the constraints in eq .( [ prop1 ] ) , we see that for a system of conductors surrounded by another conductor , the number of independent capacitance coefficients is -(n+1)=\frac{n(n+1)}{2 } , \label{grad lib}\ ] ] other important properties are that [ prop3 ] equation follows straightforwardly from eq .( [ cij 2 ] ) . to demonstrate eq ., we recall that the solutions of laplace s equation can not have local minima nor local maxima in the volume in which the equation is valid. consequently , the functions must lie in the interval because on any surface for , we see that acquires its minimum value on such surfaces .therefore the function should point outward with respect to the conductor for . hence we substitute eq .( [ gradf ] ) into eq .( [ q_i_sistema_cargas ] ) and obtain for .an additional derivation of the fact that can be obtained by taking into account that acquires its maximum value on the surface .equation ( [ prop1 ] ) can be rewritten as from eq .( [ prop3 ] ) we have that for and . 
hence [ prop4all ] the following properties follow from eqs .( [ prop2 ] ) , , , and [ prop5all ] .a particularly interesting case arises when the external conductor is at zero potential .in such a case , although the elements of the form do not necessarily vanish , they do not appear in the contributions to the charge on the internal conductors as can be seen from eq .( q_i_sistema_cargas ) by setting . for this reason ,the capacitance matrix used to describe free conductors ( that is , not surrounded by another conductor ) has dimensions . illustrate our method by deriving the basic properties of a system of two conductors . these examples will show the usefulness of eq .( q_i_sistema_cargas ) and some of the properties derived from our approach .we analyze a single internal conductor with an external conductor that is , .the internal conductor is labeled as conductor 1 . from eqs .( prop2 ) and ( [ prop1 ] ) we have therefore , there is only one independent coefficient , say ( in agreement with eq . with ) .the charges on the internal and external conductors can be calculated from eq .( [ q_i_sistema_cargas ] ) equation is consistent with eq .( [ ext int ] ) and shows that the charge induced on the surface of the cavity of the conductor 2 is opposite to the charge on the conductor 1 . in table[ tab : resumen ] we display the results of three well known configurations of two conductors .the second column shows the functions , which can be found by laplace s equation ( [ cond fi ] ) and used to calculate with eq .( [ q_i_sistema_cargas ] ) . . and factors for three systems of two conductors with and .we neglect edge effects for the cylinders and planes . [ cols="<,^,^",options="header " , ]we use our approach to study a system with embedded of conductors . in addition , the case of two internal conductors is examined , and we show the limit in which the configuration of two conductors without external conductor is obtained .these examples show how the properties we have derived can be used to calculate the capacitance coefficients . _example 1_. consider two concentric spherical shells with radii and and a solid spherical conductor ( concentric with the others ) with radius such that .the potentials are denoted by , , and respectively .the general solution of laplace s equation for can be written as from eqs .( [ cond fi ] ) and ( [ f param ] ) we obtain and although can be obtained the same way , it is easier to extract it from eq .( [ propf ] ) .the result is the nine capacitance coefficients can be evaluated explicitly from eq .( q_i_sistema_cargas ) , but it is easier to use eqs .( [ prop2 ] ) and ( prop1 ) and to take into account that ( for ) .we have [ prop_3cap_contenidos ] from eq .( [ q_i_sistema_cargas ] ) the charge on each conductor is [ sol_3cond_contenidos ] , we only have to calculate and . the result gives if , we find that and . it can be shown that eqs .( [ prop_3cap_contenidos ] ) and ( sol_3cond_contenidos ) are valid even if the conductors are neither spherical nor concentric , because those equations come from eqs . , , and which are general properties independent of specific geometries . _example 2_. consider two internal conductors and a grounded external conductor .as customary , we begin with . by transfering charge from one internal conductor to the other we keep . 
from eq .( [ q_i_sistema_cargas.a ] ) and defining find we have used eq .( [ prop1 ] ) .similarly , and using again eq .( [ prop1 ] ) we find the system is neutral and hence eq .( [ fi1v ] ) into eq .( [ q1c ] ) we obtain because only three of the coefficients in the definition of are independent . from eqs .we see that this effective capacitance is non negative .the procedure is not valid if , in that case we see by using eqs .( [ prop1 ] ) and that , and from eq .( [ q1c ] ) we find which is also non negative . the limit in which there is no external conductor is obtained by taking all the dimensions of the cavity to infinity while keeping the external conductor grounded as discussed in ref . .we have used an approach based on laplace s equation to demonstrate that the capacitance matrix depends only on purely geometrical factors .the explicit use of laplace s equation permits us to demonstrate many properties of the capacitance coefficients .the geometrical relations and properties shown here permits us to simplify many calculations of the capacitance coefficients .we emphasize that laplace s equations necessary for finding the capacitance coefficients are purely geometrical as can be seen from eqs . and .laplace s equation is usually easier than green function formalism for either analytical or numerical calculations .appendix [ ap : pruebas ] shows some proofs of consistency to enhance the physical insight and the reliability of our method .a proof of consistency for the identity ( [ prop1 ] ) , is achieved by using eq . ( [ q_i_sistema_cargas ] ) to calculate the total charge on the internal conductors . \label{qint}\]]we use eq .( [ prop1 ] ) to find that eq .( [ qint int ] ) requires many fewer elements of the matrix than eq .( [ qint ] ) .this difference becomes more significant as increases .if we again use eq .( [ q_i_sistema_cargas ] ) , we can find the charge on the cavity of the external conductor therefore property that can also be obtained from gauss s law . proof of consistency for eq .( [ prop1 ] ) is found by employing eqs .( [ q_i_sistema_cargas ] ) and ( [ qint int ] ) to calculate ( taking into account that eq. comes directly from eq . ) we utilize eq .( [ fi = fi fi ] ) to write as this relation is clearly correct because points inward with respect to the volume . a proof of consistency for eq .( [ cij 2 ] ) that shows the symmetry of can be obtained by calculating the electrostatic internal energy , which in terms of the electric field is , \end{aligned}\ ] ] where we have used eq .( [ fi = fi fi ] ) . from eq .( [ cij 2 ] ) we find consistent with standard results. enhance the understanding of this approach and its advantages , we give some general suggestions for the reader . 1 .implement a numerical method to solve the laplace s equation ( cond fi ) for the functions associated with a nontrivial geometry ( for example , two non - concentric ellipsoids ) .use eqs .( [ propf ] ) and to either simplify your calculations or to check the consistency of your results . then use eq .( [ q_i_sistema_cargas ] ) to obtain the factors numerically .( [ prop2 ] ) and eqs . either to simplify your calculations or to check the consistency of your results .we have emphasized that to calculate the total charge on the internal conductors eq .( [ qint int ] ) requires many fewer elements than eq .( [ qint ] ) .how many fewer elements are required for an arbitrary value of ?3 . 
for a successive embedding of concentric spherical shells ,calculate the capacitance coefficients for an arbitrary number of spheres .4 . show that for the successive embedding of three conductors with arbitrary shapes , eqs . andstill hold .generalize your results for an arbitrary number of conductors .99 w. taussig scott , _ the physics of electricity and magnetism _ ( john wiley & sons , new york , 1966 ) , 2nd ed . ; gaylord p. harnwell , _ principles of electricity and electromagnetism _ ( mcgraw - hill , new york , 1949 ) ; leigh page and norman i. adams jr . , _ principles of electricity _( d. van nostrand , new jersey , 1958 ) , 3rd ed . ; a. n. matveev , _ electricity and magnetism _( mir , moscow , 1988 ) . is not necessarily the total charge on the external conductor , but the charge accumulated on the surface of the cavity that encloses the other conductors .the value of the charge is calculated with the surface integral ( [ qigen ] ) , which for the case of the internal conductors encompasses the whole surface , but for the external conductor is only the surface of the cavity that encloses the other conductors .equation ( [ cij 2 ] ) is an integral over the volume for the factors .we might be tempted to use gauss theorem to obtain an integral of the volume directly from eq .( [ q_i_sistema_cargas ] ) .however , is not defined in the region inside the conductors .the gradient of in eq .( [ q_i_sistema_cargas ] ) is evaluated in an external neighborhood of the conductor surface . by uniqueness ,the solution for this problem is equivalent to the solution for a system consisting of the same conductors contained in the cavity of a surrounding conductor , such that all the dimensions of the cavity tend to infinity , and the potential of the external conductor is set to zero . for a derivation of some of these results based on the energy of the electrostatic fieldsee l. d. landau , e. m. lifshitz , and l. p. pitaevskii , _ electrodynamics of continuous media _( elsevier butterworth - heinemann , 1984 ) , 2nd ed . | the fact that the capacitance coefficients for a set of conductors are geometrical factors is derived in most electricity and magnetism textbooks . we present an alternative derivation based on laplace s equation that is accessible for an intermediate course on electricity and magnetism . the properties of laplace s equation permits to prove many properties of the capacitance matrix . some examples are given to illustrate the usefulness of such properties . |
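Suggested problem 1 above asks the reader to implement a numerical Laplace solver for the functions $f_i$ of a nontrivial geometry. A minimal finite-difference sketch in Python follows; it is not taken from the paper — the geometry (two rectangular conductors inside a grounded square box), the grid resolution, and the Jacobi relaxation scheme are arbitrary illustrative choices, and in two dimensions the result is a capacitance per unit length. It computes the $f_i$, assembles $C_{ij} = \epsilon_0\int_V \nabla f_i\cdot\nabla f_j\,dV$, and checks the sign properties derived in the text.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

N, h = 121, 1.0 / 120          # grid points and spacing on the unit square
box = np.zeros((N, N), bool)   # grounded enclosing conductor = outer boundary
box[0, :] = box[-1, :] = box[:, 0] = box[:, -1] = True

# Two internal conductors (rectangular cross sections, arbitrary demo geometry).
cond = [np.zeros((N, N), bool), np.zeros((N, N), bool)]
cond[0][35:55, 30:50] = True
cond[1][70:90, 60:95] = True

def solve_f(i, sweeps=20000, tol=1e-8):
    """Geometry-only potential f_i: Laplace's equation with f_i = 1 on conductor i,
    f_i = 0 on the other conductor and on the grounded box (Jacobi relaxation)."""
    f = np.zeros((N, N))
    f[cond[i]] = 1.0
    fixed = box | cond[0] | cond[1]
    for _ in range(sweeps):
        f_new = f.copy()
        f_new[1:-1, 1:-1] = 0.25 * (f[2:, 1:-1] + f[:-2, 1:-1]
                                    + f[1:-1, 2:] + f[1:-1, :-2])
        f_new[fixed] = f[fixed]          # re-impose boundary values
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

f = [solve_f(0), solve_f(1)]
grads = [np.gradient(fi, h) for fi in f]
outside = ~(cond[0] | cond[1])           # field region (conductor interiors excluded)

# C_ij = eps0 * \int_V grad f_i . grad f_j dV  (per unit length in 2-D)
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        dot = grads[i][0] * grads[j][0] + grads[i][1] * grads[j][1]
        C[i, j] = EPS0 * np.sum(dot[outside]) * h * h

print("capacitance matrix (F/m):\n", C)
print("diagonal positive, off-diagonal negative:", C[0, 0] > 0, C[1, 1] > 0, C[0, 1] < 0)
# The volume form makes C symmetric by construction; comparing it against the
# surface-flux form -eps0 * \oint_{S_i} (df_j/dn) dS is a further consistency test.
```

Only three of these numbers are independent, in agreement with the $n(n+1)/2$ count for $n=2$ internal conductors, and the coefficients involving the enclosing conductor follow from the zero row/column sums.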
for heavy charged particle radiotherapy with protons and ions , broad - beam delivery methods ( coutrakon 1991 , kanai 1999 ) are mature technologies with persistent advantages of simplicity and robustness over emerging technologies of pencil - beam scanning methods ( lambert 2005 ) . for a broad - beam system ,a variety of volumetrically enlarged standard beams are prepared , among which an optimum one is applied to a given target .target - specific customization is usually made with x - jaw , y - jaw , and multileaf collimators ( xjc , yjc , and mlc ) and custom - made accessories such as a patient collimator ( ptc ) and a range - compensating filter ( rcf ) .while the downstream collimators form sharp field edges , the upstream collimators , which are mainly for radiation - protection purposes , form gentle field edges. their combination will be useful for field patching techniques to form an irregular field with gently joining beams for improved robustness ( li 2007 ) . in treatment planning ,a variety of pencil - beam ( pb ) algorithms are used for dose calculation ( hong 1996 , kanematsu 2006 ) despite intrinsic difficulty with the pencil beams that develop to overreach lateral heterogeneity ( goitein 1978 , petti 1992 , kohno 2004 ) . for electron radiotherapy, the phase - space theory was rigorously applied to resolve the problem by periodical redefinition of ensemble of minimized pencil beams in the pb - redefinition algorithm ( shiu and hogstrom 1991 ) .the principle of pb redefinition was applied to heavy charged particles to address the effects of multiple collimators in the monochromatic pb approximation ( kanematsu 2008b ) .however , its rigorous application to a heterogeneous system requires polychromatic energy spectra , which would be computationally demanding for heavy charged particles with sharp bragg peaks. it will be thus difficult to cope with range compensation or patient heterogeneity in that approach .recently , kanematsu ( 2009 ) proposed an alternative approach , the pb - splitting algorithm , where monochromatic pencil beams dynamically split into smaller ones near a lateral density interface .automatically , fine pencil beams are densely arranged only where they are necessary while otherwise large pencil beams are sparsely arranged for efficient dose calculation . in conjunction with the grid - dose - spreading convolution ( kanematsu 2008a ) ,the pb - splitting algorithm demonstrated feasibility of accurate patient dose calculation while minimizing the impact of recursive beam multiplication ( kanematsu 2011 ) . in this study, we further extend the pb - splitting approach to beam - customization devices to deal with their physical structures accurately and efficiently and to complete a consistent algorithmic framework for dose calculation in treatment planning . in the following sections , we define the model elements that were mostly diverted from previous studies ,construct a novel and original beam - customization model , and examine its validity for a test - beam experiment .a beam source is defined as the best approximate point from which radiating particles will have the same fluence reduction with distance .the formulation differs among beam - spreading methods and often between the transverse and axes , _i.e. 
_ , at height for and for .the particles incoming to a point in the field , which is normally the isocenter , are projected back onto the and source planes to define rms source sizes and in the gaussian approximation .although a range - modulated beam should be ideally subdivided into energy components of different source heights and sizes , it is approximately represented by a single component of average behavior in this study .a beams - eye - view ( bev ) image is defined as an matrix of square - sized pixels starting at on the isocenter plane . for bev pixel , pixel position and the line connecting to the and sources are defined as following the thick - collimator model ( kanematsu 2006 ) , two identical apertures on the top and bottom faces are associated with every collimator , which are modeled as two - dimensional bitmaps .matrix describes aperture with elements of transmission 1 ( transmit ) or 0 ( block ) for ] .the pixel- position is given by where and are the first pixel position and the square pixel size of the bitmap image .for arbitrary point on the aperture plane , intersecting pixel is determined with the nearest integer function as and the distance to the nearest aperture edge , is quickly referenced from the distance map filled by the distance - transform algorithm ( borgefors 1986 ) .a rcf made of a tissue - like material of effective density is similarly described by an matrix of range shifts , first pixel position , and pixel size . in this study, we deal with a single rcf of a flat downstream face at height . the stopping and scattering effects of the rcfare approximated by a local interaction at the midpoint of the beam path in the structure ( gottschalk 1993 ) , _ i.e. _ , at height for rcf pixel .following the original pb - splitting algorithm ( kanematsu 2009 ) , the present pb model is based on the fermi - eyges theory ( eyges 1948 ) for stopping and scattering ( kanematsu 2009 , gottschalk 2010 ) excluding hard interactions that are implicitly included in the depth dose curve .a gaussian pencil beam is characterized by position , direction , number of particles , residual range , and phase - space variances of the projected angle and transverse displacement , which develop in a tissue - like medium by step as \delta s , \label{eq:7}\end{aligned}\ ] ] where is the stopping - power ratio of the medium to water ( kanematsu 2003 ) and and are the particle mass and charge in units of those of a proton . to limit excessive beam multiplication , pencil beams subjectto splitting should have sufficient particles , _i.e. _ , , where is the number for the original beam and is a cutoff .when a pencil beam of rms size spreads beyond the lateral density interface at distance from the beam center , it splits into daughter beams downsized by factor as where is a parameter that limits the fraction of overreaching particles . with respect to the mother beam ,daughter ( \\ 1 & for \quad , } \label{eq:18}\end{aligned}\ ] ] where is the transmission factor of beam , is the beam intersection pixel of aperture , and factor to rms projected angle secures three standard deviations for edge distance . for partial transmission , we calculate the geometrical acceptance of particles incoming to the pb origin . 
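The aperture bitmaps and the edge-distance map introduced above can be precomputed with a standard distance transform; before continuing with the acceptance integrals below, here is a brief illustrative sketch. It uses SciPy's exact Euclidean distance transform as a stand-in for the chamfer transform of Borgefors (1986) cited in the text, and the aperture geometry, pixel size, and origin are arbitrary assumptions of the example.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Aperture bitmap: 1 = transmit, 0 = block (an 8 cm square opening on a
# 0.5 mm pixel grid, loosely mimicking a patient-collimator face).
pixel = 0.05                      # cm per pixel
nx = ny = 400
aperture = np.zeros((ny, nx), dtype=np.uint8)
aperture[60:220, 60:220] = 1      # open region

# Distance (in cm) from each pixel to the nearest aperture edge:
# inside the opening, distance to the nearest blocked pixel;
# outside, distance to the nearest open pixel; signed so that > 0 means "open side".
d_in = distance_transform_edt(aperture) * pixel
d_out = distance_transform_edt(1 - aperture) * pixel
edge_distance = np.where(aperture == 1, d_in, -d_out)

def nearest_pixel(x, y, x0=-10.0, y0=-10.0):
    """Map a point on the aperture plane (cm) to its bitmap pixel with the
    nearest-integer rule, assuming the first pixel is centred at (x0, y0)."""
    i = int(round((y - y0) / pixel))
    j = int(round((x - x0) / pixel))
    return i, j

i, j = nearest_pixel(-6.9, -6.9)
print("edge distance at (-6.9, -6.9) cm:", edge_distance[i, j], "cm")
```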
as shown in , with small orthogonal angles and about the pb axis , constituent - particle direction is defined as which translates into geometrical line , , or only particles passing through all the apertures can get to the pb origin to redefine the number of particles , the direction , and the mean square angle as where is the aperture pixel in which line intersects . in practice , these integrals are made numerically at sampling intervals for regions .the rcf shortens the residual range of the pencil beam by the thickness of the intersecting pixel as and increases the mean square angle by in before the beam is transported downstream .every pencil beam is individually transported by through downstream apertures . at an aperture , which is practically either the top or bottom face of an optional ptc ,pencil beams near the edge will be partially transmitted .incidentally , edge distance in naturally corresponds to density - interface distance in for pb splitting . at every downstream aperture , multiplicity appropriately determined while limiting overreaching particles to below 2% by setting . in the case of splitting ,daughter beams are defined according to and then individually transported downstream starting from the current aperture with possible recursive splitting in the same manner .the pencil beams that are finally out of the aperture will be blocked by setting , which addresses the partial - blocking effect of the collimator . cross - section view , ( b ) the beam s eye view on the isocenter plane , where the filled areas represent xjc , yjc , mlc , and ptc from upstream to downstream and the hatched area represents rcf.,width=10 ]as this study shares the objective of beam - customization modeling with the former study in the pb - redefinition approach ( kanematsu 2008b ) , we use the same experimental data , where a broad carbon - ion beam of residual range cm in water was customized with an xjc at height 117137 cm , a yjc at 96116 cm , and a partially effective mlc at 6983 cm , a 3-cm pmma half - plate rcf at 3538 cm , and an 8-cm - square ptc at 2227 cm as shown in .four lines of the in - air dose profiles on the isocenter plane were measured along the axis at cm and cm and along the axis at cm and cm .the 20%80% penumbra sizes ( ) were 0.58 cm for the xjc edge and 0.48 cm for the yjc edge , which translate into rms source sizes cm at cm and cm at cm .the tissue - air ratio for the 3-cm pmma ( ) was measured to be 0.951 . in the calculation , dose grids in a single layer were arranged on the isocenter plane at 1-mm intervals .the open field of uniform fluence ( ) was subdivided into the bev image pixels of size mm on the isocenter plane , to each of which a pencil beam was defined at the effective scattering point of the rcf .for every pencil beam , upstream collimation by the xjc , the yjc , and the mlc , range shift and scattering by the rcf , and beam transport including collimation and splitting by the ptc down to the isocenter plane were applied .the in - air dose distribution on the isocenter plane was calculated with where is the tissue - air ratio or the dose per fluence for beam . 
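The splitting rule itself is given by display equations that were stripped from this text, so the sketch below does not reproduce the paper's exact prescription. It only illustrates the generic idea described above: near a collimator edge, a broad Gaussian pencil beam is replaced by several narrower daughter beams whose weighted sum preserves the mother fluence profile, after which each daughter can be transmitted or blocked individually. Placing the daughters on a uniform grid with Gaussian weights (a discretized convolution) is an assumption of this illustration, not the paper's scheme.

```python
import numpy as np

def split_gaussian(sigma, downsize=2.0, cutoff=4.0):
    """Replace a unit-weight Gaussian pencil beam of rms size `sigma` (centred at 0)
    by narrower daughters of rms size sigma_d = sigma/downsize on a uniform grid of
    pitch sigma_d.  Daughter weights sample a Gaussian of variance
    sigma^2 - sigma_d^2, so the weighted sum reproduces the mother profile."""
    sigma_d = sigma / downsize
    var_c = sigma**2 - sigma_d**2
    step = sigma_d
    k_max = int(np.ceil(cutoff * np.sqrt(var_c) / step))
    centres = step * np.arange(-k_max, k_max + 1)
    weights = np.exp(-0.5 * centres**2 / var_c)
    weights /= weights.sum()
    return centres, weights, sigma_d

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

sigma = 0.4                     # cm, arbitrary mother-beam rms size for the demo
centres, weights, sigma_d = split_gaussian(sigma)

x = np.linspace(-2.0, 2.0, 801)
mother = gauss(x, 0.0, sigma)
mixture = sum(w * gauss(x, c, sigma_d) for c, w in zip(centres, weights))
print("daughters:", len(centres),
      " max profile error:", np.max(np.abs(mixture - mother)))   # close to zero

# A collimator edge at x = 0 (x < 0 open) then acts on each daughter separately.
# Crudest version: block any daughter whose centre lies on the blocked side;
# the text instead evaluates partial transmission near the edge.
open_weight = sum(w for c, w in zip(centres, weights) if c < 0.0)
print("transmitted weight with daughter-level blocking:", open_weight)
```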
to verify the effectiveness of pb splitting for the ptc edge, we calculated dose distributions at heights 0 cm ( isocenter plane ) and 20 cm ( immediate downstream ) by relocating the dose grids and compared them with corresponding non - splitting calculations , for which we disabled splitting by setting .in the calculation , 40000 beams were originally defined at the rcf , 23912 of them passed through the upstream collimators , and 20444 of them passed through the ptc to end up with 36704 dose - contributing beams by splitting .the cpu time of a 2.4 ghz intel core 2 duo processor amounted to 1.30 s and 1.25 s for the calculations with and without pb splitting .shows the calculated dose distribution . the dip andbump along the axis are attributed to scattering by the pmma half plate .sharpness of the field edge was strongly correlated with the distance to the effective collimator .axis at cm ( a ) , cm ( b ) , along the axis at cm ( c ) , and cm ( d ) , where the solid lines are the calculations and the open circles are the measurements.,width=8 ] in the experiment , the uncertainty of the scanned detector positions was 0.1 mm and that of the collimator positions was mm according to the specifications .the latter may only shift the edge position and will not influence the penumbra size .the single - point dose uncertainty was evaluated to be 0.3% in repeated measurements , which is negligible for penumbra analysis .shows the calculated and measured doses profiles , where the measured doses are in fact the dose ratios of the customized field to the open field to compensate for the fluence non - uniformity .unexpectedly , the customized - field doses were higher than the open - field doses by a few percent . that may be attributed to the contribution of particles hard - scattered by the collimators , which was not considered in the present model . 
from these profiles ,the 20%80% penumbra sizes were obtained by reading 20% and 80% dose positions by linear interpolation of two sampling points , which brings dominant uncertainty amounting to a fraction of the sampling interval of 1 mm .the measured penumbra sizes were then corrected to quadratically exclude with effective dosimeter size mm for a 2-mm pinpoint chamber .summarizes the resultant penumbra sizes .these measurements and calculations agreed to a submillimeter level , which is consistent with the estimated uncertainty .l l l l l profiling & interested- & effective & + position & edge side & device(s ) & measurement & calculation + cm & left & xjc+rcf & 6.4 & 6.7 + cm & right & xjc & 5.8 & 5.8 + cm & left & mlc+rcf & 4.6 & 4.4 + cm & right & mlc & 3.7 & 3.2 + cm & lower & ptc+rcf & 2.3 & 2.6 + cm & upper & yjc+rcf & 5.6 & 5.7 + cm & lower & ptc & 1.4 & 1.3 + cm & upper & yjc & 4.8 & 4.5 + [ tab:1 ] the effectiveness of beam splitting for the ptc - edge sharpening is shown in , where panels ( c ) and ( d ) show enlarged views of panels ( c ) and ( d ) in with additional lines for non - splitting calculations .the pb splitting reasonably sharpened the field edges at the immediate downstream and made better agreement with the measurements on the isocenter plane .ironically , the contamination of collimator - scattered particles happened to compensate substantially for the lack of edge sharpening in the tail regions .it is one of the algorithmic novelties of this study to originate the pencil beams at the effective scattering points of the rcf regardless of upstream collimation .then , the lateral heterogeneity of the rcf is naturally irrelevant to the minimized pencil beams .upstream collimation is reasonably modeled as filtering of particles in the angular distribution to correct the phase - space parameters of the defined pencil beams .while the pb size and density are generally arbitrary in pb algorithms , small size and high density are required to represent sharp edges of downstream collimation . in the present model , the sharp ptc edge was naturally realized by splitting of the pencil beams . in the former study ( kanematsu 2008b ) , because the pencil beams could not be redefined as monochromatic after range compensation , they were only artificially downsized for edge sharpening .the downsizing strength was empirically determined to reproduce the 20%80% penumbra size on the isocenter plane while overlooking the other aspects .in fact , while the resultant penumbra sizes were equivalently good for both models , the dose profiles in the upstream was unphysically bouncy in the former study due to insufficient density , which could be clinically problematic . in the original pb - splitting algorithm ( kanematsu 2009 ), the overreaching condition was defined as the one - standard - deviation distance ( ) to a 10% density change .that was because its objective heterogeneity was moderate density variation among body tissues .this study deals with solid and precisely defined collimator edges , for which the distance to an aperture edge may be more appropriate . 
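The penumbra analysis just described (reading the 20% and 80% dose positions by linear interpolation of two sampling points, then quadratically excluding the effective dosimeter size) is simple to reproduce. The short sketch below is an illustration with synthetic profile data, not the paper's analysis code, and the reading of "quadratic exclusion" as sqrt(p^2 - s^2) is an assumption.

```python
import numpy as np
from math import erf

def crossing(x, d, level):
    """Position where the profile crosses `level`, by linear interpolation
    between the two neighbouring sampling points (as described in the text)."""
    rising = d[-1] > d[0]
    i = int(np.argmax(d >= level)) if rising else int(np.argmax(d <= level))
    x0, x1, d0, d1 = x[i - 1], x[i], d[i - 1], d[i]
    return x0 + (level - d0) * (x1 - x0) / (d1 - d0)

def penumbra_20_80(x, d):
    """20%-80% penumbra width of one field edge (d normalised to the plateau dose)."""
    return abs(crossing(x, d, 0.8) - crossing(x, d, 0.2))

def exclude_detector(p_measured, s_eff):
    """Quadratic exclusion of the effective dosimeter size, taken here as
    sqrt(p^2 - s^2); this reading of the correction is an assumption."""
    return float(np.sqrt(max(p_measured**2 - s_eff**2, 0.0)))

# Synthetic falling edge sampled at 1 mm (error-function shaped, sigma = 0.25 cm).
x = np.arange(-2.0, 2.001, 0.1)
d = np.array([0.5 * (1.0 - erf(xi / (np.sqrt(2.0) * 0.25))) for xi in x])

p = penumbra_20_80(x, d)
print("20%-80% penumbra:", round(p, 3), "cm")
print("after excluding a 0.2 cm effective detector size:",
      round(exclude_detector(p, 0.2), 3), "cm")
```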
in the present example , the ptc was effective only for approximately 1/4 of the field edge .the pb splitting was limited to the pencil beams around the effective edge and actually increased the number of beams by 80% and the cpu time by 4% .this discrepancy is mainly attributed to computational overhead for generation , upstream collimation , and range compensation of the pencil beams .although we only dealt with the planer grids in this case , the pb splitting would not add severe computational load even for volumetric grids when used with the grid - dose - spreading convolution ( kanematsu 2011 ) . in heavy charged particle radiotherapy ,target doses are predominantly formed by bragg peaks of primary particles .hard - scattered particles are generally out of the scope of practical pb algorithms due to difficulty in their modeling .fortunately , the collimator - scattered particles tend to lose large energy in the collimator and thus naturally attenuate with depth ( van luijk 2001 ) .nevertheless , kimstrand ( 2008 ) included the collimator - scatter contribution in a convolution algorithm using monte - carlo - generated kernels .that approach may be valid and will further improve the accuracy if combined with the present model .we have developed a calculation model for customization of a broad beam of heavy charged particles based on the pb - splitting algorithm . in this model , a broad beam is decomposed into pencil beams of various size that is necessarily and sufficiently small to deal with structures of the beam - customization devices accurately and efficiently . also , placement of the pb origins at the effective scattering points in the rcf effectively reduced the relevant heterogeneity and greatly simplified the algorithm using only monochromatic pencil beams .the performance of the model was tested against existing experimental data , which demonstrated that the penumbra size for various collimator edges in a single field was accurate to a submillimeter level .this beam - customization part can be naturally combined with the patient - dose - calculation part that is similarly based on the pb - splitting algorithm ( kanematsu 2011 ) to complete an accurate and efficient dose calculation algorithm for treatment planning of heavy - charged - particle radiotherapy .coutrakon g , bauman m , lesyna d , miller d , nusbaum j , slater j , johanning , j and miranda , j and deluca jr p m and siebers j 1991 a prototype beam delivery system for the proton medical accelerator at loma linda _ med .phys . _ * 18 * 10939 | a broad - beam - delivery system for heavy - charged - particle radiotherapy often employs multiple collimators and a range - compensating filter , which potentially offer complex beam customization . in treatment planning , it is however difficult for a conventional pencil - beam algorithm to deal with these structures due to beam - size growth during transport . this study aims to resolve the problem with a novel computational model . the pencil beams are initially defined at the range compensating filter with angular - acceptance correction for the upstream collimators followed by the range compensation effects . they are individually transported with possible splitting near the downstream collimator edges to deal with its fine structure . the dose distribution for a carbon - ion beam was calculated and compared with existing experimental data . the penumbra sizes of various collimator edges agreed between them to a submillimeter level . 
This beam-customization model will complete an accurate and efficient dose-calculation algorithm for treatment planning with heavy charged particles.
a very fruitful information on the dynamics can be gained from the study of periodic orbits .first , because these particular orbits are generically almost everywhere in phase space , and second because they can be computed easily , i.e. with some short integration time .these periodic orbits together with their stability organize locally the dynamics .it is then natural to consider them as a cornerstone of control strategies .for instance , in order to create invariant tori of hamiltonian systems , cary and hanson proposed a method based on the computation of an indicator of the linear stability of a set of periodic orbits , namely greene s residue .it provides an algorithm to find the appropriate values of some pre - defined parameters in order to reconstruct invariant tori by vanishing some selected residues .first developed for two - dimensional symplectic maps , it has been extended to four dimensional symplectic maps , and has been applied to stellarators ( where periodic orbits are closed magnetic field lines ) and particle accelerators . in this article, we review and extend this residue method .the aim is to tune appropriately the parameters of the system such that appropriate bifurcations occur .it is well - known in the literature that local bifurcations occur when the tangent map associated with the poincar map obtained by a transversal intersection of the flow , has an eigenvalue which is a root of the unity . in particular, periodic orbits can lose their stability in case of multiple eigenvalues on the unit circle , i.e. when these eigenvalues are equal to 1 or for two - dimensional symplectic maps .therefore it is natural to consider greene s residues as a way to locate those bifurcations . in this context ,vanishing residues indicate the specific values of the parameters where significant change occurs in the system and hence will be the basis for the reduction of chaos ( by creation of invariant tori ) as in refs . but also for the destruction of regular structures . in sec .[ sec2 ] , we review some basic notions on periodic orbits of hamiltonian systems and their stability , and we explain the details of the residue method .we give the condition on the residues of a pair of birkhoff periodic orbits to create an invariant torus in their vicinity , and a similar condition which leads to a destruction of nearby invariant tori . in sec .[ sec3 ] , we apply this method to the destruction and creation of librational and rotational invariant tori of a particular hamiltonian system , a forced pendulum with two interacting primary resonances , used as a paradigm for the transition to hamiltonian chaos .we consider an autonomous hamiltonian flow with two degrees of freedom which depends on a set of parameters denoted : where and , and being the two - dimensional identity matrix . in order to determine the periodic orbits of this flow and their linear stability properties , we also consider the tangent flow written as where and is the hessian matrix ( composed by second derivatives of with respect to its canonical variables ) . for a given periodic orbit with period ,the spectrum of the monodromy matrix gives its linear stability property .as the flow is volume preserving , the determinant of such a matrix is equal to 1 .moreover , if is an eigenvalue , so are , and . 
as the orbit is periodic , is an eigenvalue with an eigenvector in the direction of the flow .its associated eigenspace is at least of dimension 2 since there is another eigenvector with eigenvalue 1 coming from the conserved quantity .therefore , according to the remark above , the orbit is elliptic if the spectrum of is ( and stable , except at some particular values ) , or hyperbolic if the spectrum is with ( unstable ) .the intermediate case is when the spectrum is restricted to or and the orbit is called parabolic . whether or not the parabolic periodic orbit is stable depends on higher order terms . in a more concise form, the above cases can be summarized using greene s definition of a residue which led to a criterion on the existence of invariant tori : we notice that the 4 ( instead of 2 for 2d maps ) in the numerator comes from the two additional eigenvalues 1 coming from autonomous hamiltonian flows . if ,1[$ ] , the periodic orbit is elliptic ; if or it is hyperbolic ; and if and , it is parabolic and higher order expansions give the stability of such periodic orbits .since the periodic orbit and its stability depend on the set of parameters , the features of the dynamics will change with variations of the parameters .generically , periodic orbits and their linear stability are robust to small changes of parameters , except at specific values where bifurcations occur .the proposed residue method to control chaos detects these rare events to yield the appropriate values of the parameters leading to the prescribed behavior on the dynamics .the residue method which leads to a reduction or an enhancement of the chaotic properties of the system is based on the change of stability of periodic orbits upon a change of the parameters of the system . for ,let us consider two associated birkhoff periodic orbits ( i.e. periodic orbits having the same action but different angles in the integrable case and having the same rotation number on a selected poincar section ) , one elliptic and one hyperbolic .let us call and their residues .we have ( and smaller than one ) and .we slightly modify the parameters until the elliptic periodic orbits becomes parabolic .some particular situations arise at some critical value of the parameters : : .+ : while .+ : while .+ the first case is associated with the creation of an invariant torus .the two latter cases might be associated with the destruction of invariant tori ( the ones around the elliptic periodic orbit ) .the third one is associated with a period doubling bifurcation . in this latter case , the change of stability of the new elliptic periodic orbit has to be considered .other interesting cases occur depending on the set of selected periodic orbits .the situation resembles the integrable situation where all the residues of periodic orbits of constant action are zero .it is expected that an invariant torus is reconstructed in this case .it can be associated with a transcritical bifurcation ( an exchange of stability ) , a fold , or another type of bifurcation .in the situation , a change of stability occurs : the elliptic periodic orbit turns hyperbolic while the hyperbolic one stays hyperbolic .it is generically characterized by a stationary bifurcation . 
in this case , the destruction of invariant curves is expected in general whether there are librational ones ( representing the linear stability of an elliptic periodic orbit ) or the neighboring rotational ones .an extra caution has to be formulated since this method only provides an indicator of the _ linear _ stability of periodic orbits .the nonlinear stability ( or instability ) has to be checked a posteriori by a poincar section for instance .this method only states that a bifurcation has occurred in the system , whether it is a stationary , transcritical , period doubling or other types of bifurcations .a more rigorous and safer control method would require to consider the global bifurcations , like the ones obtained by the intersections of the stable and unstable manifolds of two hyperbolic periodic orbits in the spirit of ref .however such a control method would be computer - time consuming ( determination of the stable and unstable manifolds ) and hence not practical if some short time delay feedback is involved in the control process .we consider the following forced pendulum system with 1.5 degrees of freedom a poincar section of hamiltonian ( [ eqn : fp ] ) is depicted on fig .[ fig1 ] for and on fig .[ fig2 ] for . in order to modify the dynamics of hamiltonian ( [ eqn : fp ] ), we add an additional ( control ) parameter : we consider a family of hamiltonians of the form where is not too large in order to consider a small modification of the original system , and minimizing the energy cost needed to modify the dynamics .other choices of families of control terms are possible ( not restricted to .in particular , more suitable choices of control terms would include more fourier modes .we have selected a one - parameter family which originates from another control strategy which has been proved to be effective .the goal here is to determine the particular values of the parameter such that suitable modifications of the dynamics ( which will be specified later ) occur .the algorithm is as follows : first , we determine two periodic orbits of hamiltonian ( [ eqn : fp ] ) , an elliptic and a hyperbolic one with the same rotation number on the poincar section , using a multi - shooting newton - raphson method for flows . then we modify continuously the control parameter and follow these two periodic orbits .we compute their residues as function of . 
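The display form of the forced-pendulum Hamiltonian and of the control family is missing from the extracted text, so the sketch below should be read as an illustration of the algorithm just described rather than a reproduction of the paper's computation. It assumes the commonly used form H(p, x, t) = p^2/2 + eps*[cos x + cos(x - t)] of a forced pendulum with two primary resonances (an assumption), integrates the flow together with its 2x2 tangent flow over q periods of the forcing, refines a periodic point of the stroboscopic map by Newton-Raphson, and evaluates Greene's residue R = (2 - tr M)/4 of the resulting map orbit; the text's version with 4 in the numerator is the equivalent formula for the 4x4 monodromy matrix of the autonomous two-degree-of-freedom flow.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed Hamiltonian (the display equations were stripped from the text):
#   H(p, x, t) = p**2/2 + eps*(cos x + cos(x - t)).

def rhs(t, y, eps):
    """Hamiltonian flow plus its 2x2 tangent (variational) flow."""
    x, p = y[0], y[1]
    M = y[2:].reshape(2, 2)
    dxdt = p
    dpdt = eps * (np.sin(x) + np.sin(x - t))
    a = eps * (np.cos(x) + np.cos(x - t))      # d(dpdt)/dx
    J = np.array([[0.0, 1.0], [a, 0.0]])       # Jacobian of the (x, p) vector field
    return np.concatenate(([dxdt, dpdt], (J @ M).ravel()))

def map_and_monodromy(z, q, eps):
    """q-times iterated stroboscopic (period 2*pi) map and its tangent map."""
    y0 = np.concatenate((z, np.eye(2).ravel()))
    sol = solve_ivp(rhs, (0.0, 2.0 * np.pi * q), y0, args=(eps,),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:2, -1], sol.y[2:, -1].reshape(2, 2)

def newton_periodic_point(z, q, wind, eps, steps=30, tol=1e-11):
    """Newton-Raphson refinement of a periodic point: x advances by 2*pi*wind
    and p returns to itself after q forcing periods."""
    shift = np.array([2.0 * np.pi * wind, 0.0])
    for _ in range(steps):
        zq, M = map_and_monodromy(z, q, eps)
        F = zq - shift - z
        dz = np.linalg.solve(M - np.eye(2), -F)
        z = z + dz
        if np.linalg.norm(dz) < tol:
            break
    return z

def residue(z, q, eps):
    """Greene's residue R = (2 - tr M)/4 of the q-periodic map orbit through z."""
    _, M = map_and_monodromy(z, q, eps)
    return (2.0 - np.trace(M)) / 4.0

# Example: follow the elliptic fixed point of the p ~ 0 primary resonance
# (near x = pi) while the perturbation strength is varied.
z_guess = np.array([np.pi, 0.0])
for eps in (0.020, 0.034, 0.050):
    z_star = newton_periodic_point(z_guess, q=1, wind=0, eps=eps)
    print(f"eps = {eps:.3f}  residue = {residue(z_star, 1, eps): .4f}")
    z_guess = z_star               # continuation in the parameter
```

In practice the same continuation is run in the control parameter at fixed perturbation, for both members of an elliptic/hyperbolic Birkhoff pair, and the parameter values where a residue crosses 0 or 1 are the bifurcations exploited by the method.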
) with .the arrows indicate the elliptic periodic orbits for the three cases considered here.,title="fig:",scaledwidth=30.0% ] ) with .the arrows indicate the elliptic periodic orbits for the three cases considered here.,title="fig:",scaledwidth=30.0% ] ) for .,title="fig:",scaledwidth=30.0% ] ) for .,title="fig:",scaledwidth=30.0% ] a first analysis is done on librational invariant tori ( around the primary resonance located around ) .we point out in fig .[ fig1 ] three particular elliptic periodic orbits ( and their associated hyperbolic ones ) , labeled 1,2 and 3 , with , respectively , , and intersections with the poincar section ( ) .these orbits will be used for two purposes : first we follow the idea of cary and hanson on the construction of invariant tori .then we extend the residue method to the destruction of these tori .a similar analysis is done on rotational invariant tori ( the example of the goldenmean invariant torus is treated ) .this case allows us to compare the residue method with another approach on the control of hamiltonian systems .briefly we determine the control parameter such that there is a creation of an invariant torus if the original system does not have one , and the destruction of an invariant torus if the system does have one .for the case , fig .[ fig3 ] represents the values of the residues of the elliptic and hyperbolic periodic orbits as functions of the control parameter [ see eq .( [ eqn : fp2 ] ) ] . at ,both residues vanish which means that they become parabolic periodic orbits as in the integrable case . by increasing , we notice that both orbits exchange their stability which is the manifestation of a transcritical bifurcation while each of the periodic orbits undergo individually a tangent bifurcation .this type of bifurcation has been observed in refs . . at ,an invariant torus is reconstructed . in order to check the robustness of the method, one could argue that since this invariant torus is composed of periodic orbits , it is not expected to be robust .however , by continuity in phase space , an infinite set of invariant tori is present in the neighborhood of the created invariant torus .most of them have a frequency which satisfy a diophantine condition and hence which will persist under suitable hypothesis on the type of perturbations .the locations of the different periodic points on the poincar section as varies are indicated by arrows on fig .the change of stability of these periodic points is associated with the creation of an invariant torus ( also represented in fig .[ fig4 ] by the plot of the separatrices ) .we notice that apart from the exchange of stability , the phase space in the neighborhood of these periodic orbits is still regular ( the chaotic region around the hyperbolic periodic orbits is not well developed ) , and hence the regular nature of phase space has not been changed locally ( or one needs to consider higher values of the parameters ) . of the elliptic and hyperbolic ( bold line ) periodic orbits of case 1 ( ) as functions of the parameter for hamiltonian ( [ eqn : fp2 ] ) with .,scaledwidth=30.0% ] ) and hamiltonian ( [ eqn : fp2 ] ) with .the trajectories in gray are for and the ones in black are for . 
at , an invariant torus of the systemis represented ( bold line ) .the arrows indicate the change of locations of the periodic points as increases.,scaledwidth=30.0% ] of the elliptic and hyperbolic periodic orbits of case 2 ( ) as functions of the parameter for hamiltonian ( [ eqn : fp2 ] ) with .,scaledwidth=30.0% ] the same analysis can be carried out on a set of periodic orbits which are located in a more chaotic region , like for instance the two cases and of fig .[ fig1 ] outside the regular resonant island .the values of the residues as functions of the parameter are respectively represented on figs .[ fig5 ] and [ fig6 ] for and . for , we notice that the residues do not vanish in the range of we considered although there are small and extremum at the same value of the parameter . however , even if these residues do not vanish ( and therefore no exchange of stability by the creation of an invariant torus ) , there is a significant regularization of the dynamics at this specific value of the parameter ( not shown here ) . for ,the residues vanish for ( see fig .[ fig6 ] ) and there is a transcritical bifurcation associated with the creation of a set of invariant tori like in figs . [ fig3 ] and [ fig4 ] .the associated phase space shows a significant increase of the size of the resonant island in this section , we address the destruction of a resonant island by breaking up librational invariant tori .we notice that on fig .[ fig5 ] , a bifurcation occurs at for the case 2 ( ) when the residue of the elliptic periodic orbit becomes equal to 1 .the neighborhood of this periodic orbit becomes a chaotic layer and the qualitative change in the dynamics is seen since the chaotic layer becomes thicker at this value of the parameter .however since this periodic orbit was initially ( at ) already in the outer chaotic region ( see fig .[ fig1 ] ) , the regularization is not drastic . in order to obtain a more significant change in the dynamics and a large chaotic zone , one needs to select a periodic orbit inside a regular region , like for instance the one with .a bifurcation occurs at where the residue of the elliptic periodic orbit crosses 1 ( see fig . [ fig6 ] ) .a poincar section for the latter case is depicted on fig .[ fig7 ] , and shows that a significantly large neighborhood has been destabilized by the control term .we notice that the ratio between the size of the control term and the one of the perturbation is equal to .we also notice that this last value of is larger than the one required for .as expected , one needs a larger amplitude to destabilize a region closer to a regular one .a more effective destabilization procedure can be obtained with the periodic orbit which is inside the regular region . however , as mentioned , the value necessary for this destabilization ( or for ) is too large ; hence we discard it because of our restriction on energy cost . of the elliptic and hyperbolic periodic orbits of case 3 ( ) as functions of the parameter for hamiltonian ( [ eqn : fp2 ] ) with .,scaledwidth=30.0% ] ) for and .,title="fig:",scaledwidth=30.0% ] ) for and .,title="fig:",scaledwidth=30.0% ] in this section , we apply the same approach on rotational invariant tori .it allows us to compare the results with the ones obtained by a control method proposed in refs .first , the idea is to look at the creation of a specific invariant torus .for instance , we select a torus which has been widely discussed in the literature ( see ref . 
and references therein ) , the goldenmean one , which has a frequency for hamiltonian ( [ eqn : fp2 ] ) .we choose , and we first notice that when , this hamiltonian does not have such an invariant torus ( since its critical value is ) .the purpose here is to find the value of the control parameter needed by the residue method to reconstruct this invariant torus ( such that hamiltonian ( [ eqn : fp2 ] ) has this invariant torus ) .the idea of doing this follows greene s residue criterion . by performing an appropriate change of stability on higher and higher order periodic orbits, the amplitude of the control term should be smaller and smaller . for ,the residues of the elliptic and hyperbolic periodic orbit vanish at , and for , at . for ,both residues vanish at and also at . at these values of the parameter ,the phase space is locally filled by invariant tori where it is also expected that the goldenmean invariant torus is present .we notice that the elliptic periodic orbit with ( the next one in greene s residue approach for the analysis of the golden mean torus ) is destabilized at .therefore , there is no elliptic periodic orbit with at and the analysis using the coupled elliptic / hyperbolic periodic orbits can not be carried out .however , by following the two ( initially hyperbolic ) periodic orbits with , we see that both residues vanish at .we compared these values of stabilization with the one given by a method of local control based on an appropriate modification of the potential to reconstruct a specific invariant torus .such method provides explicitly the shape ( and amplitude ) of possible control terms whereas the one used in this article has been guessed from these references . by appropriate truncation ( keeping the main fourier mode ) , this method provides where , as an approximate control term .therefore the amplitude is which is of the same order as the values obtained by zeroing the residues .however , we point out that smaller values are obtained by looking at higher periodic orbits .therefore , an efficient control strategy is to combine the advantages of both methods : first , the specific shape of the terms that have to be added to regularize the system is obtained using the method of ref .then the amplitudes of these terms are lowered using high order periodic orbits . by considering the control term used in this article, we expect that zeroing the residues of high period will not be feasible with just this term ( as it is the case for instance in fig .[ fig5 ] ) . a more suitable form of control terms would be constructed from an exact control term which is however , it should be noticed that a control term given by ref . 
is not always experimentally accessible .the idea is to use a projection of this control term onto a basis of accessible functions .this projected control term would give an idea of the type of control terms to be used for the residue method .we would like to stress that in the absence of elliptic islands an initial guess for the newton - raphson method is not straightforward from the inspection of the poincar section .in particular , it is not easy to select the appropriate hyperbolic periodic orbits which will lead to a significant change in the dynamics .however , once it has been located , the method can follow them by continuity in the same way as the elliptic ones since the newton - raphson method does not depend on the linear stability of these orbits .this makes the method more difficult ( although possible ) to handle for just hyperbolic periodic orbits . in this section , we consider hamiltonian ( [ eqn : fp2 ] ) with .we notice that for , hamiltonian ( [ eqn : fp2 ] ) does have the rotational goldenmean torus .the purpose is to find some small values of the parameter where this invariant torus is destroyed .we notice that this case is easier to find than in the previous section since it is well - known that any additional perturbation will end up by destroying an invariant torus generically . hereit means that there will be large intervals of parameters for which the torus is broken ( contrary to the case of the creation of invariant tori ) .however we will add an additional assumption that the parameters for which this invariant torus is destroyed has to be small compared with the perturbation .we also notice that the destruction of the golden mean invariant torus is first obtained for negative values of the control parameter ( see fig .[ fig8 ] ) .first we illustrate the method by considering specific elliptic and hyperbolic periodic orbits ( with winding ratio ) near the goldenmean torus which will show the changes of dynamics occurring as the parameter is varied .we notice that the behaviors described below are generic for all the neighboring periodic orbits .the residues of these periodic orbits as functions of the parameter are shown in fig .we notice that the elliptic periodic orbit changes its stability , i.e. becomes hyperbolic , at ( where its residue becomes equal to 1 ) .a close inspection of the poincar section shows on fig .[ fig9 ] that it undergoes a period doubling bifurcation into an elliptic periodic orbit with 26 intersections on the poincar section ( and winding ratio ) which has a residue zero at the bifurcation . by following the residue of this elliptic periodic orbit ( depicted by a dashed line in fig .[ fig8 ] ) we see that it vanishes for . at this value of the parameter and for higher value in amplitude , all the periodic orbits considered here ( the two with and the one with ) are hyperbolic .therefore it is expected that there is a chaotic zone in this area and it is a value at which the torus is expected to be broken ( confirmed by a close inspection of the poincar section ) . of the elliptic and hyperbolic periodic orbits with and also of the one with ( dashed line ) born out of a period doubling bifurcation for hamiltonian ( [ eqn : fp2 ] ) with .,scaledwidth=30.0% ] ( 10,5 ) ( 0,0 ) ( indicated with crosses ) for hamiltonian ( [ eqn : fp2 ] ) for and . 
the period orbit with period 26 ( indicated by circles ) results from a period doubling bifurcation of the one with period 13 ( represented by crosses on the poincar section).,title="fig:",scaledwidth=30.0% ] ( 1.5,0.7 ) ( indicated with crosses ) for hamiltonian ( [ eqn : fp2 ] ) for and .the period orbit with period 26 ( indicated by circles ) results from a period doubling bifurcation of the one with period 13 ( represented by crosses on the poincar section).,title="fig:",scaledwidth=17.0% ] it is important to notice that a vanishing residue does not automatically imply that there is a creation of an invariant torus , contrary to the previous cases which were obtained by using jointly the elliptic and hyperbolic periodic orbits ( and vanishing residues in both cases ) . herethe hyperbolic periodic orbit associated with these elliptic periodic orbits ( which is the periodic orbit from which the new elliptic orbit was born out by a period doubling bifurcation ) stays hyperbolic as the residue of the elliptic one vanishes .this feature is generic : the same analysis has been carried out for higher order elliptic periodic orbits close to the goldenmean invariant torus , i.e. the ones with winding ratio , , , : first , the values of the control parameter for which the residues ( which are around for and increase as decreases ) cross 1 are computed and reported in table i ( denoted ) . at these values of the parameters , a period doubling bifurcation occurs for each of them .then we follow the residues of the elliptic periodic orbits with double period , , and .the parameter values at which these residues vanish are also reported in table i. for instance , using the periodic orbit with winding ratio , we obtain as the value at which the residue of the bifurcated elliptic periodic orbit with winding ratio .if we consider higher order periodic orbits , it happens that the goldenmean invariant torus is destroyed by this additional perturbation but not the ones in the neighborhood .if one is looking at large scale transport properties , these other invariant tori have to be taken into account ..values of the parameter at which the residue of the elliptic periodic orbit with period crosses 1 ( denoted ) and at which the residue of the elliptic periodic orbit with period obtained by period doubling bifurcation at vanishes ( denoted ) . [ cols="^,^,^,^,^,^",options="header " , ]in this article , we reviewed and extended a method of control of hamiltonian systems based on linear stability analysis of periodic orbits .we have shown that by varying the parameters such that the residues of selected periodic orbits cross 0 or 1 , some important bifurcations happen in the system. these bifurcations can lead to the creation or the destruction of invariant tori , depending on the situation at hand .therefore we have proposed a possible extension of the residue method to the case of increasing chaos locally .moreover , we have compared two methods of chaos reduction , and by taking advantage of both methods , we have devised a more effective control strategy .it is worth noticing that the extension of cary - hanson s method to four dimensional symplectic maps has been done in refs . for the increase of dynamic aperture in accelerator lattices .the extension to the destruction of invariant surface would be to consider the change of linear stability of selected periodic orbits .however , it would require to consider new types of bifurcations which occurs in the system , like for instance , krein collisions . 
| a method to reduce or enhance chaos in hamiltonian flows with two degrees of freedom is discussed . this method is based on finding a suitable perturbation of the system such that the stability of a set of periodic orbits changes ( local bifurcations ) . depending on the values of the residues , reflecting their linear stability properties , a set of invariant tori is destroyed or created in the neighborhood of the chosen periodic orbits . an application on a paradigmatic system , a forced pendulum , illustrates the method . * changing the dynamical properties of a system is central to the design and performance of advanced devices based on many interacting particles . for instance , in particle accelerators , the aim is to find the appropriate magnetic elements to obtain an optimal aperture in order to increase the luminosity of the beam , thus requiring the decrease of the size of chaotic regions . in plasma physics , the situation is slightly more complex : inside a fusion device ( like a tokamak or a stellarator ) , one needs magnetic surfaces in order to increase confinement . these surfaces are invariant tori of some fictitious time dynamics . a control strategy would be to recreate such magnetic surfaces by an appropriate modification of the apparatus ( magnetic perturbation caused by a set of external coils ) . on the opposite , in order to collect energy and to protect the wall components , an external modification of the magnetic equilibrium has to be performed such that there is a highly chaotic layer at the border ( like an ergodic divertor ) . therefore these devices require a specific monitoring of the volume of bounded magnetic field lines . another example is afforded by chaotic advection in hydrodynamics : in the long run to achieve high mixing in microfluidics and microchannel devices in particular , the presence of regular region prevents such mixing , and hence a possible way to enhance mixing is to perturb externally the system according to some theoretical prescriptions , in order to destroy invariant surfaces . * |
since their seminal introduction by , reaction - diffusion systems ( rds s ) have constituted a standard framework for the mathematical modelling of pattern formation in chemistry and biology .recent advances in mathematical modelling and developmental biology identify the important role of _ domain evolution _ as central in the formation of patterns , both empirically and computationally . in this respect ,many numerical studies , such as and , of rds s on evolving domains are available . yet, fundamental mathematical questions such as existence and regularity of solutions of rds s on evolving domains remains an important open question .we focus on growth functions commonly encountered in the field of developmental biology for which our analysis is valid and show the applicability of our analysis to some of the important reaction kinetics encountered in the theory of biological pattern formation . in [ s6 ] we present numerical results for a rds posed on a periodically evolving domain .we present a moving finite element scheme and a fixed domain finite element scheme to approximate the solution of a rds posed on the evolving and the lagrangian frame respectively . in [ s7 ]we summarise our findings and indicate future research directions .let be a ( ) vector of concentrations of chemical species , with , the time - dependent spatial variable and , \t>0, ] and } ] which grows to ^ 2 ] into a partition of n uniform subintervals , and denote by the time step . for the spatial discretisationwe introduce a regular triangulation of with an open simplex .we define the following shorthand for a function of time , .we define the finite element space on the initial domain as , where denotes the space of polynomials no higher than degree 1 . for the numerical simulation of equation ( [ eqn : schnak_moving ] ) we require finite element spaces defined on the evolving domain .we construct the finite element spaces according to the following relation between the basis functions of and . thus the family of finite element spaces on the evolving domain may be defined as , where we have used the fact that the domain evolution is linear with respect to space .we approximate the initial conditions in both schemes by where is the standard lagrange interpolant . 
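The construction of the evolving finite element basis from the fixed reference basis can be made concrete in one dimension. The sketch below is a 1-D analogue only (hat functions on an interval rather than P1 triangles), assuming a spatially linear isotropic growth map x = rho(t) X with an illustrative growth rate; it checks that the pulled-back basis keeps the nodal property on the evolved mesh.

```python
import numpy as np

nodes0 = np.linspace(0.0, 1.0, 9)                  # reference partition of [0, 1]

def rho(t, r=0.2):                                 # spatially linear, isotropic growth (illustrative r)
    return 1.0 + r * t

def hat(X, i, ref=nodes0):
    """P1 hat function on the reference partition: the piecewise-linear nodal basis."""
    e_i = np.zeros_like(ref)
    e_i[i] = 1.0
    return np.interp(X, ref, e_i)

t = 1.5
nodes_t = rho(t) * nodes0                          # nodes of the evolved mesh
phi = lambda x, i: hat(np.asarray(x) / rho(t), i)  # evolving basis obtained by pull-back

vals = np.array([[phi(xj, i) for xj in nodes_t] for i in range(len(nodes0))])
print(np.allclose(vals, np.eye(len(nodes0))))      # nodal property preserved on the moving mesh: True
```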
the finite element scheme to approximate the solution to equation ( [ eqn : schnak_moving ] ) aims to find such that for all .similarly the finite element scheme to approximate the solution to equation ( [ eqn : schnak_fixed ] ) aims to find such that for all .we solved the models in c utilising the fem library by .we used the conjugate gradient solver to compute our discrete solutions .we took an initial triangulation with 8321 nodes , a uniform mesh diameter of and a fixed timestep of .paraview was used to display our results .startsection subsection20 mm - 0pt * * results figures [ fig : discrete_schnak_moving ] and [ fig : discrete_schnak_fixed ] show snapshots of the activator profile corresponding to the _ activator - depleted _ system ( [ eqn : schnak ] ) .the inhibitor profiles have been omitted as they are out of phase to the activator profiles .we have verified numerically that there is very little difference between the discrete solution corresponding to system ( [ eqn : discrete_schnak_moving ] ) mapped to the fixed domain and the discrete solution corresponding to system ( [ eqn : discrete_schnak_fixed ] ) defined on a fixed domain , as is expected from the results in [ lagtran ] .the figures illustrate the mode doubling phenomena that occurs as the domain grows as well as the spot annihilation and spot merging phenomena that occurs as the domain contracts .we note that the mode transition sequence , i.e. , the number of spots , is different when the domain grows to when it contracts .the difference in the mechanism of mode transitions on growing and contracting domains is an area in which very little work has been done and these initial numerical results indicate the need for further exploration of this area .many problems in biology and biomedicine involve growth . in developmental biologyrecent advances in experimental data collection allow experimentalists to capture the emergence of pattern structure formation during growth development of the organism or species .such experiments include the formation of spot patterns on the surface of the eel , patterns emerging on the surface of the japanese flounder and butterfly wing patterns forming during the growth development of the imaginal wing disc . in all these examples , patterns form during growth development . since the seminal paper by which considered linear models that could give rise to spatiotemporal solutions on fixed domains due the process of diffusion - driven instability , a lot of theoretical results on global existence of such solutions have been derived and proved for highly nonlinear mathematical models .only recently , mathematical models on growing domains have been derived from first principles in order to incorporate the effects of domain evolution into the models . in all these studies , very little analysis has been done up to now to extend the theoretical global existence results to models defined on evolving domains . 
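The mode-transition behaviour described above can already be probed with a much smaller sketch than the two-dimensional finite element schemes. The block below is a 1-D finite-difference illustration (not the scheme used in the text) of the activator-depleted kinetics on a uniformly growing interval, written in the Lagrangian frame where growth appears as a rescaled diffusion coefficient and a dilution term; all parameter values, the linear growth rate and the zero-flux boundary conditions are illustrative assumptions of mine.

```python
import numpy as np
from scipy.integrate import solve_ivp

# u_t = (Du/rho^2) u_XX - (rho'/rho) u + gamma (a - u + u^2 v),  v_t analogous,  X in [0, 1]
a, b, gamma, Du, Dv = 0.1, 0.9, 200.0, 1.0, 10.0   # illustrative Schnakenberg parameters
r = 0.1                                            # linear growth, rho(t) = 1 + r t
N = 101
X = np.linspace(0.0, 1.0, N)
dX = X[1] - X[0]

def lap(w):
    """Second difference with zero-flux (Neumann) ends."""
    out = np.empty_like(w)
    out[1:-1] = w[2:] - 2.0 * w[1:-1] + w[:-2]
    out[0] = 2.0 * (w[1] - w[0])
    out[-1] = 2.0 * (w[-2] - w[-1])
    return out / dX**2

def rhs(t, y):
    u, v = y[:N], y[N:]
    rho, dil = 1.0 + r * t, r / (1.0 + r * t)
    du = Du / rho**2 * lap(u) - dil * u + gamma * (a - u + u**2 * v)
    dv = Dv / rho**2 * lap(v) - dil * v + gamma * (b - u**2 * v)
    return np.concatenate([du, dv])

u0 = (a + b) + 0.01 * np.cos(4.0 * np.pi * X)      # perturbed homogeneous steady state
v0 = b / (a + b)**2 * np.ones(N)
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([u0, v0]), method="BDF",
                t_eval=[0.0, 5.0, 10.0], rtol=1e-6, atol=1e-8)

for t, y in zip(sol.t, sol.y.T):                   # crude diagnostic: count activator peaks
    u = y[:N]
    peaks = np.sum((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]) & (u[1:-1] > u.mean()))
    print(f"t = {t:4.1f}   domain length = {1.0 + r * t:.2f}   activator peaks = {peaks}")
```

Replacing rho(t) = 1 + r t by a logistic, exponential or periodic profile only changes the two lines where rho and its logarithmic derivative are evaluated, which is how growing and contracting phases can be compared in this reduced setting.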
under suitable assumptions ,we have extended existence results from problems posed on fixed domains to problems posed on an evolving domain .we have illustrated the applicability of the existence results of to problems on evolving domains .we have shown that global existence of solutions to many commonly encountered rds s on fixed domains implies global existence of solutions to the same rds s on a class of evolving domains .the results are significant in the theory of pattern formation especially in fields such as developmental biology where problems posed on evolving domains are commonly encountered .our results hold with no assumptions on the sign of the growth rate , which may prove useful in other fields where monotonic domain growth is not valid from a modelling perspective .the applicability of our results is demonstrated by considering different forms of domain evolution ( linear , logistic and exponential ) . in order to validate our theoretical findings , we presented results on a periodically evolving domain .our results illustrate the well - known period - doubling phenomenon during domain growth but more interesting and surprising is the development of spot annihilation and spot merging phenomena during contraction .this raises new questions about bifurcation analysis on growing and contracting domains .one of our primary goals is the numerical analysis of finite element approximations of rds s on evolving domains. the classical existence results obtained will be an important tool in future work .numerical experiments have been carried out and they illustrate the need for further numerical analysis especially in the case of contracting domains .extension of our work onto domains with more complex evolution is another area for future research .the research of c.venkataraman is partially supported by an epsrc doctoral training grant and a university of sussex graduate teaching assistantship .30 [ 1]#1 [ 1]#1 urlstyle [ 1]doi # 1 | we present global existence results for solutions of reaction - diffusion systems on evolving domains . global existence results for a class of reaction - diffusion systems on fixed domains are extended to the same systems posed on spatially linear isotropically evolving domains . the results hold without any assumptions on the sign of the growth rate . the analysis is valid for many systems that commonly arise in the theory of pattern formation . we present numerical results illustrating our theoretical findings . |
sampling is an efficient methodology that can be used for functional approximations .it is based on interpolation through a set of discrete points selected from a function within an interval ] and to otherwise . as we can see from this figure , at smaller the approximated function oscillates near the constant at the top of the curve stronger ( blue curve ) . however , as increases the oscillation rapidly decreases ( red curve ) and practically vanishes at .this signifies that if the change in curve between any two adjacent sampling points is relatively small , then any function can be approximated by sampling with the gaussian function . in this workwe introduce an application of the complex error function to the fourier analysis .in particular , we show that the use of equation in the fourier integration leads to a weighted sum of the complex error functions . due to remarkable property of the complex error functionthis approach provides efficient computational methodology in the fourier transform as a damping harmonic series .the complex error function , also known as the faddeeva function or the kramp function , is defined as where is the complex argument .this function is a solution of the following differential equation the complex error function finds broad applications in many fields of applied mathematics , physics and astronomy . in applied mathematicsit is closely related to the error function of complex argument \leftrightarrow { \rm{erf}}\left ( z \right ) = 1 - { e^ { - { z^2}}}w\left ( { iz } \right),\ ] ] the normal distribution function , \end{aligned}\ ] ] the fresnel integral /2 \end{aligned}\ ] ] and the dawsons integral in physics and astronomy the complex error function is related to the voigt function that describes the spectral behavior of the photon emitting or absorbing objects ( photo - luminescent materials , planetary atmosphere , celestial bodies and so on ) .specifically , the voigt function represents the real part of the complex error function , \quad\quad y \ge 0.\ ] ] other functions that can be expressed in terms of the complex error function are the plasma dispersion function , the gordeyevs integral , the rocket flight function and the probability integral .the complex error function can be represented alternatively as ( see equation ( 3 ) in and , see also appendix a in for derivation ) using the change of the variable as in the integral above leads to further , we will use this equation in derivation of the weighted sum .a rapid c / c++ implementation ( roofit package from cenrs library ) for computation of the complex error function with average accuracy has been reported in the recent work .there are several definitions for the fourier transform . in this workwe will use the following definitions and thus , the relationships between functions and are performed by two operators and corresponding to the forward and inverse fourier transforms , respectively . in signal processing the arguments and in these reciprocally fourier transformable functions and are interpreted , accordingly , as time vs. frequency .we can find an approximation to the fourier transform of the function by substituting approximation into equation .this leads to taking into account that the equation can be rewritten as since change of the variable leads to the equation can be rearranged as comparing this equation with equation yields , defining the constants we can rewrite approximation in a more compact form as a weighted sum }. 
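The displayed equations above did not survive extraction, so the sketch below only illustrates the structure of the stated result: the Fourier integral of a Gaussian-sampled function becomes a weighted sum of complex-error-function terms. A one-sided convention F(nu) = int_0^inf f(t) exp(2 pi i nu t) dt is assumed here purely for illustration, the test function f(t) = exp(-t) (exact transform 1/(1 - 2 pi i nu)) and the sampling parameters c and h are my own choices, and scipy's Faddeeva-based erfc is used through the identity erfc(z) = exp(-z^2) w(i z).

```python
import numpy as np
from scipy.special import erfc      # complex-capable, computed via the Faddeeva function w(z)

def gauss_term(a, c, nu):
    """Closed form of int_0^inf exp(-((t-a)/c)^2) exp(2 pi i nu t) dt for one sampling 'wavelet';
    equivalently a prefactor times exp(-zeta^2) w(i zeta) with the Faddeeva function w."""
    w_ang = 2.0 * np.pi * nu
    zeta = -(a / c + 1j * w_ang * c / 2.0)
    return 0.5 * c * np.sqrt(np.pi) * np.exp(1j * w_ang * a - (w_ang * c)**2 / 4.0) * erfc(zeta)

f = lambda t: np.exp(-t)
c, h = 0.02, 0.015                                 # sampling width and step (illustrative)
t_n = np.arange(-5, int(12.0 / h)) * h             # a few negative-index centres keep the fit good near t = 0
weights = f(t_n) * h / (c * np.sqrt(np.pi))        # weights of the Gaussian-sampled representation of f

def F_weighted_sum(nu):
    return np.sum(weights * gauss_term(t_n, c, nu))

for nu in (0.1, 0.5, 1.0):
    approx = complex(F_weighted_sum(nu))
    exact = 1.0 / (1.0 - 2j * np.pi * nu)
    print(f"nu = {nu}:  sum = {approx:.5f}   exact = {exact:.5f}   |error| = {abs(approx - exact):.1e}")
```

The residual error of the sum shrinks as c and h are reduced, at the price of more terms in the weighted sum.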
\end{aligned}\ ] ] the equation can be expressed through any functions that have been considered in the section above .for example , using the identity after trivial rearrangements we get the fourier transform in terms of error functions of complex argument \right .\\ & \left .\qquad \ , + { \alpha _ { - n}}{e^ { - { { \left ( { \pi c\nu - inh / c } \right)}^2}}}\left [ { 1 + { \rm{erf}}\left ( { nh / c + i\pi c\nu } \right ) } \right ] \right\ } . \end{aligned}\ ] ] it can also be shown that the approach based on a weighted sum of the complex error functions can be generalized to the laplace transform .the derivation of approximation for the inverse fourier transform is straightforward now .comparing integrals , and using approximation we immediately obtain } , \end{aligned}\ ] ] where the coefficients are calculated as .since the gaussian function rapidly decreases with increasing , only few terms with negative index actually contribute to shape the curve along the positive -axis where the fourier integration takes place according to equation . as a result, it is sufficient to take into consideration only , say , first three terms }^2}}}/\left ( { c\sqrt \pi } \right) ] and }^2}}}/\left ( { c\sqrt \pi } \right) ] . beyond this region the functions and are equal to zero .therefore , the effective length of these wavelets is . since according to approximation we applied sampling points , the step between two adjacent sampling pointscan be determined from the formula .thus , by choosing and we can find the corresponding steps to be and , respectively . as the function is odd, its fourier transform is purely imaginary according to equation .figure 5 depicts and ] are obtained numerically by using equations and at , .the fourier transforms for the even and odd parts of the function can be found analytically . in particular , substituting the equations , into approximations , and considering the fact that these wavelets are not zero - valued only at $ ] we get and therefore , it is convenient to define the differences by using these functions and - \left\ { - 2h{e^ { - { { \left ( { \pi c\nu } \right)}^2}}}\sum\limits_{n = 1}^n { f^-\left ( { nh } \right)\sin \left ( { 2\pi \nu nh } \right ) } \right\}\\ & = \frac{{\pi \nu \cos \left ( { \pi \nu } \right ) - \sin \left ( { \pi \nu } \right)}}{{{\pi ^2}{\nu ^2 } } } - \left\ { - 2h{e^ { - { { \left ( { \pi c\nu } \right)}^2}}}\sum\limits_{n = 1}^n { f^-\left ( { nh } \right)\sin \left ( { 2\pi \nu nh } \right ) } \right\}. \end{aligned}\ ] ] figure 6 illustrates the differences ( blue curve ) and ( red curve ) computed at , . as we can see from fig . 6 , the differences and are within the range . further increase of the integer significantly improves the accuracy .this can be seen from fig .7 showing that the differences ( blue curve ) and ( red curve ) computed at , remain within the narrow range .we present a new approach for numerical computation of the fourier integrals based on a sampling with the gaussian function of kind . it is shown that the fourier transform can be expressed as a weighted sum of the complex error functions . 
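For a function sampled over the whole real line the same construction collapses to the damping harmonic series referred to above, i.e. a plain trigonometric sum over the samples multiplied by the Gaussian damping factor exp(-(pi c nu)^2). The sketch below checks this for an odd test function of my own choosing, f(t) = t exp(-t^2), whose exact transform is i pi^{3/2} nu exp(-pi^2 nu^2) under the convention F(nu) = int f(t) exp(2 pi i nu t) dt; the sampling parameters are again only illustrative, and the residual error shrinks as c and h are reduced.

```python
import numpy as np

f = lambda t: t * np.exp(-t**2)                    # odd test function
c, h, N = 0.02, 0.015, 400                         # sampling width, step, number of terms (N h = 6)
n = np.arange(1, N + 1)

def F_series(nu):
    """Damping harmonic series: 2 i h exp(-(pi c nu)^2) sum_n f(nh) sin(2 pi nu n h) for odd f."""
    return 2j * h * np.exp(-(np.pi * c * nu)**2) * np.sum(f(n * h) * np.sin(2.0 * np.pi * nu * n * h))

for nu in (0.25, 0.5, 1.0):
    series = complex(F_series(nu))
    exact = complex(1j * np.pi**1.5 * nu * np.exp(-(np.pi * nu)**2))
    print(f"nu = {nu}:  series = {series:.6f}   exact = {exact:.6f}   |error| = {abs(series - exact):.1e}")
```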
applying a remarkable property of the complex error function ,the weighted sum of the complex error functions can be significantly simplified as a damping harmonic series .unlike the conventional discrete fourier transform , this methodology results in a non - periodic wavelet approximation .therefore , the proposed approach may be practically convenient and advantageous in algorithmic implementation .this work is supported by national research council canada , thoth technology inc . and york university .the authors wish to thank to prof .ian mcdade and dr .brian solheim for discussions and constructive suggestions .abrarov and b.m .quine , sampling by incomplete cosine expansion of the sinc function : application to the voigt / complex error function , appl .comput . , 258 ( 2015 ) 425 - 435 .http://dx.doi.org/10.1016/j.amc.2015.01.072 b.m . quine and j.r .drummond , genspect : a line - by - line code with selectable interpolation error tolerance j. quant .transfer 74 ( 2002 ) 147 - 165 .http://dx.doi.org/10.1016/s0022-4073(01)00193-5 b.m . quine and s.m .abrarov , application of the spectrally integrated voigt function to line - by - line radiative transfer modelling .transfer , 127 ( 2013 ) 37 - 48 .http://dx.doi.org/10.1016/j.jqsrt.2013.04.020 a. berk , voigt equivalent widths and spectral - bin single - line transmittances : exact expansions and the modtran5 implementation , j. quant .transfer , 118 ( 2013 ) 102 - 120 .h. borchert , d. v. talapin , n. gaponik , c. mcginley , s. adam , a. lobo , t. mller and h. weller , relations between the photoluminescence efficiency of cdte nanocrystals and their surface properties revealed by synchrotron xps , j. phys .b , 107 ( 36 ) ( 2003 ) 9662 - 9668 .http://dx.doi.org/10.1021/jp0352884 s.j .mckenna , a method of computing the complex probability function and other related functions over the whole complex plane , astrophys .space sci ., 107 ( 1 ) ( 1984 ) 71 - 83 .http://dx.doi.org/10.1007/bf00649615 s.m .abrarov and b.m .quine , master - slave algorithm for highly accurate and rapid computation of the voigt / complex error function , j. math .research , 6 ( 2 ) ( 2014 ) 104 - 119 . | in this paper we show that a methodology based on a sampling with the gaussian function of kind , where and are some constants , leads to the fourier transform that can be represented as a weighted sum of the complex error functions . due to remarkable property of the complex error function , the fourier transform based on the weighted sum can be significantly simplified and expressed in terms of a damping harmonic series . in contrast to the conventional discrete fourier transform , this methodology results in a non - periodic wavelet approximation . consequently , the proposed approach may be useful and convenient in algorithmic implementation . + * keywords : * complex error function , faddeeva function , fourier transform , sampling , gaussian function , numerical integration + |
peer review is the fundamental process used by the scientific community to ensure the quality of academic publications ( cf .e.g. , ) .several generations of scientists have contributed high - quality reviews , while only authorship has been credited for academic career .it is not easy to rationalize why researchers provide impartial reviews and constructive advice voluntarily , as they need to sacrifice time that could be used for their own research activities . in consequence , it is a puzzle how the system of peer review can be sustainable at all .this puzzle can be described as a double social dilemma game where scientists can choose levels of efforts for both manuscripts and reviews .given the presence of costs in terms of time and effort , no contribution ( sloppy review ) is the best reply strategy for reviews .when all scientists play according to their best reply strategy , the resulting outcome is no scientific control on the quality of submitted papers .the dominant strategy equilibrium of low quality reviews mean that submissions need not be of high quality . due to the immense costs of producing high quality work , authors are best off by submitting poor manuscripts . in short , in the lack of explicit sanctions and incentives , low - quality submissions and low - quality reviews from the dominant strategy equilibrium in the social dilemma of scientific production .although social dilemmas of this kind are difficult to resolve in general , certain theoretical solutions have already been proposed that can be applied to the context of peer review .the most evident improvement might come from shifting the payoffs in favor of cooperation .for instance , the reward for overall cooperation can be introduced by attaching higher importance on scientific quality through the introduction or increased emphasis on journal metrics , such as impact factor .another straightforward solution is the application of selective incentives : allocating additional benefits for authors of high - quality papers ( e.g. , promotion and grants , which is current practice ) and for reviewers writing high - quality reports ( which is rare in current practice ) .direct and indirect reciprocity are potential solutions to social dilemmas , if the chance of repetition is high and the opportunities for retaliation are in foresight . in the context of peer review ,indirect reciprocity can be facilitated , for instance , by rotating the roles of authors , reviewers , and editors .the social embeddedness of scientific production further enhances the chance of solving the social dilemma efficiently .the small world aspect of working in specific fields , the intertwined network of co - authorship , participation in international project consortia , and hangouts at conferences all reduce the competitiveness and improve the pro - social character of reviewing .furthermore , once informal communication , gossip , reputation , image scoring , and stratification enters into the structure of the social dilemma , cooperation might emerge and be maintained . 
recording , keeping , and relying on reputational scores has become an efficient guideline for cooperation in many areas of life .earlier achievement and reputational information certainly plays an important role in current practice both for editorial and reviewer decisions .social incentives that are associated with direct and indirect reciprocity and the structural embeddedness of peer review might be key mechanisms that rationalize cooperative behavior of reviewers .the importance of social incentives is highlighted also by surveys asking for the motivations of reviewers .in fact , monetary incentives might be in conflict with or drive out social incentives when applied to reviewers .these solutions for social dilemmas might help us to understand how peer review can work and be sustained .once the fundamental mechanisms are studied rigorously , they can also lead to policy recommendations on improvements of the current system and the design of new solutions ( cf . ) .the rest of this paper is divided in three sections . in section [ themodel ], we introduce our agent - based model of scientific work and peer review .we report simulations results in section [ results ] and we conclude in section [ conclusion ] .we model the production of scientific work and its peer review by a simple agent - based model . in this paper, we build on the view that the aim of peer review is to ensure scientific production and evaluation .the model emphasizes the costly character of high - quality manuscript submissions and reviewer contributions .we take it as granted that high - quality submissions lead to better science , but reviews impact scientific quality only indirectly .our model makes radical simplifications on the practical aspects of the peer review system intentionally .this way , we aim at providing a straightforward assessment of the institutional conditions and editorial policies under which authors are motivated to produce high - quality submissions and reviewers are motivated to provide high quality reviews . hence , we are primarily interested in the emergence of cooperation that results in scientific quality .the model contains scientists as agents who write single - author papers . at each discrete period , each scientist performs the task of an author - by producing one article - and the task of a reviewer .authors and reviewers comprise of an identical set of agents . 
for the sake of simplicity, we consider a single journal with a single editor , who is not an author or a reviewer in the journal .authors can produce low or high quality contributions that they submit to the editor .the editor selects ( set to two in our simulations ) reviewers for each paper choosen uniformly at random with an upper limit of reviews for each reviewer ( set to four for the sake of our simulations ) .we assume that reviewers always accept requests , but they do not necessarily invest high effort in performing reviews .it is of their strategic decision to produce a low quality review at low cost ( normalized to zero ) or a high quality review at high cost .the former is the best reply strategy of the reviewer if no further incentives are provided .the review is translated into a binary recommendation ( accept or reject ) which is passed on to the editor .we assume that the recommendation is random with fifty - fifty percent chance of accept or reject in case of low reviewer effort and reflects the quality of the submission perfectly in case of high reviewer effort .the editor s decision is based on incoming recommendations and it has a binary outcome : accept or reject .we consider a single round of review .acceptance benefits the author , but benefits the editor ( the journal and scientific development in general ) only if the submission was of high quality .the incentive structure and the strategy space of the game are defined as follows .the editor wants to maximize high - quality scientific output in the journal and would like to minimize the number of low - quality articles appearing in the journal .this means that similarly to the model of , papers that are accepted and of bad quality are the most harmful for the journal .these values are used as output measures to evaluate the performance of the entire system in our simulations .the situation is of asymmetric information , in which the editor is unable to assess the true quality of submissions or the true effort of reviewers .the true quality of accepted papers is revealed probabilistically after publication ( that we will fix to in the simulations reported ) and there is no way for the editor to assess the true quality of rejected submissions .the crucial parts of editorial policy are therefore the selection of reviewers who can be trusted for their recommendations and the extent of reliance on reviewer recommendations .several strategies could possibly be used in order to arrive at proper conclusion .we vary these editorial strategies in between simulations , because these could be easily translated into policy recommendations . keeping a reputational account of scientistsis part of all such strategies .editorial reputations of scientists are improved largely by high quality publications ( named ) , degraded even more by low quality publications ( ) , worsened by rejected submissions ( ) , improved by good reviews ( ) and worsened by bad reviews ( ) .good reviews are those where the paper is revealed of the same quality of what the reviewer said .bad reviews are those where there is a difference , i.e. where the paper is revealed as bad while the reviewer said that it was good or where the paper was revealed as good , but the reviewer recommended rejection ( in case of conflicting reviews ) . 
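The mechanics just described, two randomly chosen referees per submission, at most four assignments per referee, random recommendations under low effort and truthful ones under high effort, can be written down directly. The following is a minimal sketch of that bookkeeping; the function names and the representation of papers by their single author's index are conveniences of the sketch, not conventions taken from the text.

```python
import random

def assign_reviewers(n_scientists, authors, per_paper=2, cap=4, rng=random):
    """Draw reviewers uniformly at random, `per_paper` per submission, at most `cap` papers per
    reviewer, never assigning an author to their own paper; papers are handled in random order."""
    load = {i: 0 for i in range(n_scientists)}
    assignment = {}
    for author in rng.sample(authors, len(authors)):
        pool = [i for i in range(n_scientists) if i != author and load[i] < cap]
        chosen = rng.sample(pool, per_paper)       # with 2 slots per paper and cap 4 the pool is not exhausted here
        for r in chosen:
            load[r] += 1
        assignment[author] = chosen
    return assignment

def recommendation(high_effort, paper_is_good, rng=random):
    """High reviewing effort reveals the true quality; low effort is a fifty-fifty coin flip."""
    return paper_is_good if high_effort else (rng.random() < 0.5)

reviewers = assign_reviewers(n_scientists=20, authors=list(range(20)))
print(reviewers[0], recommendation(high_effort=False, paper_is_good=True))
```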
in summary ,the editorial reputation of scientist is given as : where ( the sum of bad and good rejected papers ) and .please note the asymmetric character of reviewer bias : as the high quality of rejected papers is never revealed , rejection is a safer strategy for reviewers ( a rejection recommendation can only turn out to be a bad review in case other reviewers recommended publication , the paper has been published , and its true high quality has been revealed ) .parameter represents the relative detrimental effect for the journal reputation of accepting a low quality paper .incoming reviews are assumed to lead to editorial conclusions according to an editorial policy which is assumed to be fixed and not updated within a simulation .we manipulate editorial strategies , between the simulations in order to compare the effectiveness of these policies .editorial strategies differ with regard to reviewer selection , the handling of conflicting reviews , and relying on author reputations in desk rejection and acceptance .the last element will only be added to the extended model .we consider four editorial strategies with regard to reviewer selection and handling of conflicting reviews .for all of them , if all referees agree , then the editor follows the unanimous advice . in case of _ disagreement _ , the editor can use one of the following strategies : * * ap * : reject ; * * 1p * : accept ; * * er * : follow the advice of one of the referees chosen at random probability proportional to the relative editorial reputations of the referees ; * * mr * : follow the advice of the most reputed referee .authors have perfect information about the quality of their own submissions . similarly to the model of , authors decide to submit a paper of low quality ( at low cost ) or at high quality ( at high cost ) .they are best off with the publication of their low - quality papers .thus , in terms of obtained payoffs : . for the simulations we assume that all agents receive a unit of endowment ( research time ) in each period , which they lose by investing high effort in writing a high - quality paper ( ) and high - quality reviews ( , where ) .we assume no gain from not being published ( ) .being published yields a positive return .the numerical payoffs in our simulations are : when in the role of reviewers , scientists first accept editorial requests to review submissions .we assume that this does not entail any significant costs : the time spent on pushing the `` accept '' button in an editorial managerial system is negligible and the social costs of being committed to reviewing are counterbalanced by gaining access to papers before their publication . once papers are assigned to reviewers , they decide to invest low effort ( no loss from the endowment of valuable time ) or high effort in performing the task .a high reviewing effort implies a deduction from the endowment that is proportional to the ratio between number of good reviews and the number of assigned reviews .low effort is the best reply strategy of reviewers .altogether , writing high quality papers as well as writing high quality reviews entails sacrificing valuable research time for scientists . for the sake of simplicity, we assume that endowments are not transferable between the two kind of activities . in this way , unlike , we disregard the potential time conflict between writing high - quality papers and reviews . in their model , scientists need to allocate time between submissions and reviewing . 
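A compact sketch of the editorial side follows: the reputation bookkeeping of the previous paragraphs and the four conflict-resolution policies ap, 1p, er and mr. The numerical weights of the reputation formula did not survive extraction, so the coefficients below are placeholders, and the clipping of negative reputations in the er rule is a modelling choice of mine rather than something stated in the text.

```python
import random

def editorial_reputation(s, w_good=2.0, w_bad=3.0, w_rej=0.5, w_grev=1.0, w_brev=1.0):
    """s holds per-scientist counters; the weights are placeholders for the garbled coefficients."""
    return (w_good * s["good_pubs"] - w_bad * s["bad_pubs"] - w_rej * s["rejected"]
            + w_grev * s["good_reviews"] - w_brev * s["bad_reviews"])

def editor_decision(recs, reps, policy, rng):
    """recs: accept/reject booleans from the referees; reps: their editorial reputations."""
    if all(recs) or not any(recs):                 # unanimous advice is always followed
        return recs[0]
    if policy == "AP":                             # reject on disagreement
        return False
    if policy == "1P":                             # accept on disagreement
        return True
    if policy == "MR":                             # follow the most reputed referee
        return recs[max(range(len(recs)), key=lambda i: reps[i])]
    if policy == "ER":                             # referee drawn with probability proportional to reputation
        weights = [max(rep, 1e-9) for rep in reps]   # negative reputations clipped (my choice)
        return rng.choices(recs, weights=weights, k=1)[0]
    raise ValueError(policy)

rng = random.Random(0)
print(editor_decision([True, False], reps=[3.5, 1.0], policy="MR", rng=rng))   # follows referee 0
```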
in our model ,both activities are costly if they are performed well . in line with the current duality of practices , two casesare compared : single blind and double blind reviews . in case of single blind reviews , reviewers can condition their recommendations on the reputation of the author . in case of double blind reviews ,only the editor is able to make decisions based on the reputation of authors . in case of a single blind review system , the reviewer strategy can be conditional on the public reputation of the author .unlike the editorial reputation of authors , the public reputation of an author depends only on _ published _ papers and does not severely punishes bad publications : as reviewers are also authors , author and reviewer strategies are bundled and are characterized by the following elements : * a decision for the production of manuscripts can be either : * * : produce manuscripts of high quality ( cooperation ) * * : produce manuscripts of low quality ( defection ) * a decision for the production of reviews that can be either : * * : produce reviews at high effort ( cooperation ) * * : produce reviews at low effort ( defection ) * * : exercise high effort with a likelihood that is proportional to the public reputation of the author of the paper ( cooperation conditional on reputation ) .individual strategies are assigned to scientists at the start randomly and in equal numbers for each combination .this results in four bundled strategies for the double blind case and in six bundled strategies for the single blind case .authors decide according to their strategy types to submit a paper of low quality or at high quality ( at high cost ) , which is sent out for review according to the rules determined by the editorial policy .reviewers act according to their strategies and provide recommendations of accept or reject to the editor .papers are selected for publication as a result of the recommendations of the reviewers and the editorial policy .the evolution of author / reviewer strategies is modeled with a replicator dynamics rule adjusted to a finite population .scientists adopt strategies that ensured higher average payoffs in the previous time period in the population , while disregard strategies that resulted in lower payoffs .specifically , at a given time each individual has a probability of being selected for updating his strategy .if this happens , then his new strategy is selected randomly with a probability proportional to the difference of the given strategy to the average expected payoff , weighted for the current strategy frequencies .formally , the expected new population size for strategy is given by : where is the average payoff of strategy , while is the average payoff in the whole population and is the speed of evolution .we constrain the finite replicator dynamics process such that all strategies have an integer number of representations in the population and no .note that the strategies of agents evolve and not the agents are replaced .this means that agents accumulate editorial and public reputation throughout the simulation . in order to keep the model simple and to concentrate on some key mechanisms , other detailsare fixed to some natural values and some possible elements can be activated upon choice , which we list below .[ [ limit - to - the - number - of - publications . 
] ] limit to the number of publications .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the number of publications is limited to a fixed proportion of submissions .if the referee process produces too many accepted papers given the current editorial policy of the editor , then all accepted contributions are ranked according to editorial reputation of the authors and only the first papers are published . throughout the paper , for simulations reported , we consider a medium level of competition with . [ [ journal - impact - factor . ] ] journal impact factor .+ + + + + + + + + + + + + + + + + + + + + + in recognition of the fact that publishing in a reputed journal produces a payoff that depends also on the quality of past published papers , agents who publish a paper may receive an increase to their payoff equal to : at each given time step , the public good benefit of journal impact factor ( jif ) is given to all authors who get their papers published , irrespective of quality .the higher the proportion of high - quality papers , the higher the jif is .note that the introduction of jif increases payoffs for publications produced at high effort , but increases free rider rewards for those who are able to publish low - quality work to the same extent .we assume that the journal impact factor is a public good for those who published with a linear production function , where the increment describes the public value of a single scientific contribution .the lack of a journal impact factor and a linear public good with describes a situation that is worse than the prisoner s dilemma : even full cooperation does not compensate for the entailed costs ( ) .considering the journal impact factor and , translates the game into a true linear public good game , which is still extremely difficult to solve . for the sake of the simulations of this paper we assume .[ [ desk - rejections - and - speeding - up - publication . ] ] desk - rejections and speeding up publication . + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + with increased time - pressure and burden of reviewers , it is common practice that not all submissions are sent out for review .some submissions are desk - rejected and others have an easy , speedy route for publication .editorial decisions behind are very much based on the reputation of authors .we implement desk - rejection and desk - acceptance as an additive feature compared to the baseline model .desk - rejection and acceptance introduces a bias in favour of individuals with higher reputation and in damage of individuals with lower editorial reputation compared to the baseline model .essentially , we introduce the possibility for the editor to desk - reject or accept submissions proportional to the relative editorial reputation of the author , regardless of recommendations by referees .please note that as producing low or high - quality reviews is also part of editorial reputations , in this way , reviewers receive some compensation for their low or high reviewing efforts .the rule added to editorial decisions is as follows : * compute the minimal editorial reputation , the maximal editorial reputation and the median editorial reputation of agents . *if , then with probability the paper is desk rejected , and with probability , it is sent to the referees . 
* if , then with probability : the paper is desk accepted , and with probability , it is sent out for review .we first demonstrate that in the baseline model in which scientists face costs for producing high - quality manuscripts and costs for producing high - quality reviews , there is no chance of any cooperation .this is not surprising , because low effort in writing papers as well as low effort in writing reviews is the dominant strategy in the baseline game .figures [ baseline_doubleblind ] , [ baseline_singleblind ] , and [ stats_baseline ] report that this is the case both for double blind and for single blind peer review systems considering a neutral editorial policy ( ap ) . in all cases ,the strategy implying the production of low quality papers and reviews ( dd ) overtakes the entire population . to examine the failure of the scientific peer review process more closely , consider that high - quality review does not return any benefits in any case , therefore every reviewer is better off by choosing d. in a population with only dd and cd strategies , dd yields higher average payoffs since , if people review randomly , there is a 50% chance of getting a low - quality paper published , which is exactly the same for high - quality submissions . as cd strategies do not benefit anything from peer review , but they entail higher costs for the author , they die out . without any feedback loop that would help to ensure the production of scientific quality , science ends up as an empty exercise . ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=24.0% ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",scaledwidth=25.0% ] [ [ baseline - with - reputation - weighted - consideration - of - reviews . 
] ] baseline with reputation - weighted consideration of reviews .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ baseline_doubleblind ] and figure [ baseline_singleblind ] contain also cases in which the editorial policy takes into account the editorial reputations of reviewers and attach higher weights to the opinion of higher reputed referees .these are editorial policies mr and er . the strategy dd still gains overwhelming dominance under these policies .this is because such policies do not ensure the occurrence of correlated equilibria : the larger weight to opinions does not mean at all that these opinions would favor high - quality contributions more than others .low effort in reviewing is a dominant strategy for every author .the opinion of those who started with cooperation receive higher weights , but they still underscore defectors with regard to payoffs .the initial population that is equally divided among different types of strategies goes through a quick and progressive elimination of cooperative strategies . in this process , having a relatively bad reputation does not matter for payoffs as the chances of publishing a paper become equivalent for good and bad papers over time . [ [ baseline - with - the - entry - of - reputational - concerns . ] ] baseline with the entry of reputational concerns .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + somewhat surprisingly , the editorial policy that ensures a low , but stable level of cooperation is 1p .this is an editorial policy , which accepts all papers , if there is at least one positive recommendation from the reviewers .seemingly , this policy is neutral to reputation .but in fact , this is not the case . as there is a constraint on how many papers can be published, the friendly 1p editorial policy generates the largest surplus of acceptable manuscripts . in case of a surplus, the editor ranks the papers based on the editorial reputation of the authors .hence , a direct feedback on reputation exists , which is sufficient to guarantee in some simulation runs a stochastic mixed strategy equilibrium with the survival of cooperation .as we demonstrated , accounting on reputation of reviewers for making judgment on manuscripts is insufficient to trigger the production of high - quality reviews and high - quality papers .we should see if a more direct consideration of editorial reputation leads to higher efforts and as a result to better science .note that the model extension in this direction is related intentionally to popular discussions whether editors should be unbiased or they could rely on reputation signals of authors from the past .should they catalyze the matthew effect in science , in which the successful get even more success ?should they contribute to the maintenance of the old - boyism bias ? if they do , does it hurt or help scientific development ?let us first introduce an editorial bias in favour of authors with high and against authors with low one .we assume that authors with an editorial reputation higher than the median has a chance of desk acceptance that increases linearly with their reputation ( equation [ deskrejection ] ) .similarly , we assume that authors with an editorial reputation lower than the median has a chance of desk rejection that is in negative linear association with their reputation ( equation [ deskacceptance ] ). 
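The desk-handling rule just described can be sketched as follows. The exact probability expressions are garbled in the text, so the linear interpolation between the median and the extremes of the reputation distribution is my reading of "increases linearly with their reputation"; the names and the small regularizer are conveniences of the sketch.

```python
import random

def desk_decision(rep, rep_min, rep_med, rep_max, rng):
    """Return 'accept', 'reject' or 'review' for a submission by an author with editorial
    reputation `rep`, given the minimum, median and maximum reputation in the population."""
    if rep > rep_med:
        p_accept = (rep - rep_med) / (rep_max - rep_med + 1e-12)   # linear in relative reputation
        return "accept" if rng.random() < p_accept else "review"
    if rep < rep_med:
        p_reject = (rep_med - rep) / (rep_med - rep_min + 1e-12)
        return "reject" if rng.random() < p_reject else "review"
    return "review"                                                # authors at the median always go to review

rng = random.Random(1)
print([desk_decision(r, rep_min=-4.0, rep_med=1.0, rep_max=6.0, rng=rng) for r in (-3.5, 1.0, 5.5)])
```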
this modification does not rule out peer review , but concentrates its decisive character to the middle range , where no clear reputational judgment can be expected from the editor .results with this extension show no major breakthrough for cooperation : dd dominates the outcomes ( figure [ reputationeditorialalone ] ) . either with double blind or single blind peer review , all agents become of dd type .this indicates that a direct editorial bias in desk acceptance and rejection in itself is insufficient to trigger a large extent of cooperation .this kind of editorial bias , however , is able to support the survival of conditional cooperation of reviewers ( figure [ stats_reputation ] ) . when a strict editorial acceptance policy is applied ( ap ), the lack of publishable material leads to the need of selecting submissions based on reputation . the high effort in producing scientific material , however , does not pay off because of the difficulties of acceptance .scientists therefore follow the easier path of gaining higher reputation and might place high effort in reviewing others .reviewing efforts are profitable when they are most likely to provide reputation benefits .the public reputation helps the referees to get the best out of their reputation - based conditional strategy : when the public reputation of an author is high , then it is more likely that his paper gets published , and therefore it is more likely that a review of high quality will ensure positive returns in terms of editorial reputation . as a result ,the cooperative reviewer strategy that is conditional on the reputation of the author might survive in case of single blind peer review .this happens because the editor might provide a differential treatment for individuals with higher reputations earned strictly by high - quality reviews . at the opposite ,when the author s reputation is low , then reviewers with a strategy conditional on author reputation do not bother and follow the cheap strategy of providing random advice . in this case , their payoff is not different from agents who never put high effort in reviewing .the introduction of journal impact factor implies that a public good bonus is added to the payoff of each agent publishing a paper .the size of the public good is proportional to the performance of the journal in terms of good papers published .large public good benefits in the presence of some reputational motives allow for strategies producing high quality papers to survive and disseminate ( figure [ jif_alone ] ) .furthermore , the analysis of the population evolution shows that when jif is active , most strategies producing low - quality papers disappear from the population .this means that if a journal publishes high - quality papers , it ensures that submissions are also of high quality .this is good news given the fact that the public good reward of jif as a payoff supplement does not erase the social dilemma structure of the game .defection is still the best reply strategy both for authors and for reviewers .still , cooperation evolves ; thanks to the editorial account of author reputations and to the large initial share of cooperative strategies that survive the early phase of the simulation .full cooperation is among those who disappear relatively late ( figure [ jif_alone ] ) , which assists the dominance of the high - effort - in - writing and low - effort - in - reviewing strategy . 
as a consequence ,the rise of good papers at the end is not accompanied by good reviews ( figure [ stats_jif ] ) .still , the scientific development is maintained at the best and results in the highest possible jif .this means that only high quality papers are published .peer review just adds a random noise for the publication process and it is meaningless anyway because everyone contributes with high - quality submissions . [ [ journal - impact - factor - together - with - strong - reputation - concerns . ] ] journal impact factor together with strong reputation concerns .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + public good rewards that supplement the original payoff structure largely improve the opportunities for scientific development and lead to the overall success of the cd strategy .we have also seen that under certain editorial policies that take account of reputation , a low level of full cooperation ( cc ) can be sustained also without the public good reward .when we introduced desk rejections and acceptance based on reputation to the baseline , then some agents gained reputation successfully with a conditional reviewing strategy drep .it is therefore interesting to observe which strategies are successful if both jif and strong reputation concerns are accounted for .the results show that the strongest determinant of the evolution is the journal impact factor ( figures [ jifandreputation ] and [ stats_jifandreputation ] ) . when it matters , even under strong reputation concerns , high - effort publication strategies gain dominance with low - effort reviews .this is very much meaningful once there is a reward for reputation and the most reputational gains can be obtained by high - quality publications . ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",width=188 ] ; papers are assigned in random order to randomly chosen referees ; each referee can review up to 4 papers and a maximum of 30% of the papers are publishable in each period.,title="fig:",width=188 ] ) .,title="fig:",width=188 ] ) .,title="fig:",width=188 ] ) .,title="fig:",width=188 ] ) .,title="fig:",width=188 ] ) .,title="fig:",width=188 ] ) .,title="fig:",width=188 ] [ [ tacit - agreements ] ] tacit agreements + + + + + + + + + + + + + + + + to complete the story , we still miss a mechanism that makes writing as well as reviewing papers plausible and sustainable without radically altering the payoff structure of the game . a realistic possibility is to consider `` tacit agreements '' that work against the impartiality of peer review in practice .tacit agreements are based on direct reciprocity and might work better in case of single blind reviews , in which at least one side of the informational asymmetry is relaxed .a `` nice '' tacit agreement strategy could start with high effort in the first rounds , recognizes previous reviewers of own papers with probability and retaliates them not with high effort / low effort , but with acceptance / rejection recommendation .once there is a coordination device that brings high quality submissions in the hand of highly reputed reviewers , then acceptance recommendations would match true quality . 
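The "nice" tacit-agreement strategy sketched in the previous paragraph can be made explicit. The reading below, review honestly by default and, with probability p_recognize, repay a recognized former referee of one's own papers with the very verdict that referee gave, is my interpretation of the text; the memory structure and argument names are conveniences of the sketch.

```python
import random

def tacit_recommendation(reviewer, author, paper_is_good, round_no, memory, p_recognize, rng):
    """memory maps (past_reviewer, past_author) -> the accept/reject verdict that was given."""
    if round_no >= 2 and rng.random() < p_recognize and (author, reviewer) in memory:
        return memory[(author, reviewer)]          # reciprocate the author's earlier verdict on us
    return paper_is_good                           # otherwise start/continue with an honest review

memory = {(2, 7): False}                           # scientist 2 once recommended rejecting 7's paper
rng = random.Random(3)
print(tacit_recommendation(reviewer=7, author=2, paper_is_good=True,
                           round_no=5, memory=memory, p_recognize=0.8, rng=rng))
```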
in this case , the editorial selection policy will matter , because this would create the possibility of low cost reciprocation for high performing scientists .this is in line with the conclusion of similar empirical and simulation work .many believe that this is the game that is played by the top researchers in top journals .it is important to note , however , the possible drawbacks of self - emerged network - based practices , including old - boyism , partiality , and a conservative bias .it is puzzling why scientists devote considerable time and effort for writing reviews that decreases their time spent on their own research .once everyone acts according to self - interest , reviews are all of low quality and they can not be adequately used to judge scientific quality . as a consequence ,scientists submit low quality work in the hope of passing through randomly judging reviewers .we have labeled this puzzle as the miracle of peer review and scientific development .we investigated some potential mechanisms that might resolve this puzzle by agent based modeling .we have modeled scientific production accordingly : with an incentive structure in which low efforts in writing papers as well as in writing reviews is the dominant strategy of agents .we applied a replicator dynamics rule to the population of scientists , allowing for the reproduction of strategies that result in higher payoffs .not surprisingly , low effort in writing papers as well as in writing reviews have spread in the population and scientific practice has become an empty exercise in our baseline model .next , we assumed that editors might rely on the reputations of authors in their choices . in our model with a single journal , the editor took perfect account of high- and low - quality publications of authors , the number of their rejected papers , and if their reviewer recommendations were in line with the true quality of the paper or not .we examined different editorial policies that took account of the reputations of scientists .we showed that if reputations are used in reviewer selection , then it does not save science from low quality submissions and low quality reviews .a bit more surprisingly , easing the route for publication by desk acceptance for highly reputed authors alone has not changed anything either .all this indicated that the emergence of cooperation in the form of high efforts is an extremely difficult puzzle .some cooperation has resulted from a friendly editorial policy that categorized submissions as publishable if at least one reviewer recommended publication .this policy led to an oversupply of publishable material , which called for the ranking of submissions based on author reputations .this direct feedback has made the investment in reputations profitable .consequently , high - effort author strategies survived in a mixed equilibrium together with low - effort author strategies , but nobody invested effort in reviewing . also when a strict editorial acceptance policy was applied , in which only papers with unanimous reviewer support are published , some cooperation has emerged . in this case , however , the lack of publishable material was responsible for the worth of reputation .as the investment in reputation via writing papers was risky due to the difficulties of publishing , scientists profited more from the investment in reputation via reviews . 
a strategy that conditioned high reviewing efforts on the author s reputation was able to gain a notable share in the population .reputations worked to some extent , but public good benefits worked clearly better for scientific development .once we introduced the journal impact factor as a public good benefit , which meant the distribution of an additional payoff for all authors who published in the journal ( either a good or a bad paper ) , cooperation has become the most successful strategy of authors . in this case , editorial reputations became correlated with actual contributions to the provision of the pubic good .but as the production of high quality papers was still much more important for reputations than high quality reviews , the cooperative strategy that emerged as successful was investing high effort only in manuscript writing and not in reviewing . as a consequence, cooperation has been observed in scientific production , but peer review has just added random noise to this development , which raises doubts of its use and concerns about the use of public money . at the end , we were successful in demonstrating in a simple model the puzzling motivational problem of peer review .we highlighted that it is not easy to find the way out of this puzzle .we showed that a high - value of the public good of science maintains scientific development .reputational systems that are heavily building on author contributions might be partially sucessful , especially if the reputational hierarchy is directly used for selecting between similarly rated submissions . these mechanisms, however , will not help to sustain the efficiency of peer review .paradoxically , mechanisms that are able to induce some level of high - quality reviews are building on reciprocity , and in practice they are often associated with impartiality , old - boyism , the emergence of invisible colleges , the matthew effect , the conservative bias , and the stratification of science .we plan to extend our simple model towards studying multiple journals that compete for success with each other .this extension allows for the evolution of editorial strategies in a straightforward way and in parallel to theoretical studies that highlight how group selection can ensure higher cooperation , is expected to lead to better reviewer performance .barrera , d. and buskens , v. 2009 `` third - party effects on trust in an embedded investment game . '' in cook , k. , snijders , c. , buskens , v. and cheshire ( eds ) , trust and reputation , new york , russell sage , 37 - 72 .chetty , r. , saez , e. , and sndor , l. `` what policies increase prosocial behavior ?an experiment with referees at the journal of public economics . ''journal of economic perspectives , ( 2014 ) 28(3 ) : 169 - 188 .milinski , manfred , dirk semmann , and h. krambeck .`` donors to charity gain in both indirect reciprocity and political reputation . '' proceedings of the royal society of london b : biological sciences 269.1494 ( 2002 ) : 881 - 883 .milinski , manfred , et al .`` cooperation through indirect reciprocity : image scoring or standing strategy ? '' proceedings of the royal society of london b : biological sciences 268.1484 ( 2001 ) : 2495 - 2501 .sobkowicz , pawel .`` innovation suppression and clique evolution in peer - review - based , competitive research funding systems : an agent - based model . ''journal of artificial societies and social simulation 18.2 ( 2015 ) : 13 .sommerfeld , ralf d. 
, hans - juergen krambeck , and manfred milinski .`` multiple gossip statements and their effect on reputation and trustworthiness . ''proceedings of the royal society of london b : biological sciences 275.1650 ( 2008 ) : 2529 - 2536 . | it is not easy to rationalize how peer review , as the current grassroots of science , can work based on voluntary contributions of reviewers . there is no rationale to write impartial and thorough evaluations . consequently , there is no risk in submitting low - quality work by authors . as a result , scientists face a social dilemma : if everyone acts according to his or her own self - interest , low scientific quality is produced . still , in practice , reviewers as well as authors invest high effort in reviews and submissions . we examine how the increased relevance of public good benefits ( journal impact factor ) , the editorial policy of handling incoming reviews , and the acceptance decisions that take into account reputational information can help the evolution of high - quality contributions from authors . high effort from the side of reviewers is problematic even if authors cooperate : reviewers are still best off by producing low - quality reviews , which does not hinder scientific development , just adds random noise and unnecessary costs to it . we show with agent - based simulations that tacit agreements between authors that are based on reciprocity might decrease these costs , but does not result in superior scientific quality . our study underlines why certain self - emerged current practices , such as the increased importance of journal metrics , the reputation - based selection of reviewers , and the reputation bias in acceptance work efficiently for scientific development . our results find no answers , however , how the system of peer review with impartial and thorough evaluations could be sustainable jointly with rapid scientific development . _ keywords : peer review ; evolution of cooperation ; reputation ; agent based model _ |
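returning to the impact - factor mechanism summarised in the conclusions above ( an extra payoff that every author published in the journal receives , regardless of whether the individual paper was good ) , the free - riding structure of that public good is easy to write down . the bonus size and the way the journal benefit scales with the share of good accepted papers in the sketch below are assumptions for illustration only , not the parameterisation used in the model .

```python
def author_payoff(paper_effort, published, journal_impact,
                  c_paper=0.6, b_private=1.0):
    """payoff of a single author in one round.

    journal_impact is the public-good component: it is produced mainly by
    high-quality papers but, once produced, is paid out to every author who
    gets published in the journal, good or bad paper alike."""
    if not published:
        return -c_paper * paper_effort
    return b_private + journal_impact - c_paper * paper_effort

def journal_impact(accepted_papers, bonus=1.5):
    """impact-factor proxy: scales with the share of good accepted papers (1 = good)."""
    if not accepted_papers:
        return 0.0
    return bonus * sum(accepted_papers) / len(accepted_papers)

# a toy round: three accepted papers, two of them high quality
accepted = [1, 1, 0]
jif = journal_impact(accepted)
print(author_payoff(1, True, jif))   # high-effort author, published
print(author_payoff(0, True, jif))   # low-effort author collects the same journal-level bonus
```

the second print line shows a low - effort author collecting the same journal - level bonus , which is the free - riding channel discussed above .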
by applying the physical - layer network coding or analogue network coding ( anc ) in two - way relaying ( twr ) , only two time slots are required for one complete information exchange using twr . in the first time slot ,both source nodes transmit simultaneously to the relay . in the second time slot ,the relay broadcasts the common message which is obtained by combining the received messages . since both source nodes know their own transmitted signals , each of their self - interference can be completely canceled prior to decoding .the twr has been studied in to for the case of single source pair .the beamformer design for af - based multiple - input - multiple - output ( mimo ) twr is studied in , in which the receive and transmit beamforming are derived separately and then combined to form the relay beamformer .furthermore , some strategies to enhance the performance of twr can be found in to and references therein .the non - anc - based twr for multiple source pairs is studied in to .unlike the single source pair case , in multiple source pairs scenario , additional inter - pair interference exists between different source pairs , which degrades the twr performance . in ,a relay network with multiple source pairs and multiple relay nodes is studied , where all sources and relay stations have multiple antennas .the multiuser twr is proposed and studied in , where multiple source pairs are communicating via multiple relays . in , the mimo twr where multiple wireless node pairs are communicating via a single decode - and - forward ( df ) relayis studied . due tothe presence of inter - pair interference , previous beamforming solutions for twr with single source pair is no longer useful and new solutions are required for the case of multiple source pairs . in almost all the above works to , the inter - pair interference are canceled using zero - forcing ( zf)-based approach . in this paper ,two new non - zf - based beamforming schemes or beamformers are proposed for anc - based twr . instead of completely canceling the inter - pair interference for all source pairs by zf - based methods as was done in previous works ,we propose joint grouping and beamforming scheme that divides a given large number of source pairs into smaller subgroups , and then apply the proposed beamformers to each subgroup . to the best of our knowledge , this approach has not been studied in previous works .simulation results are presented to compare the performance of the proposed schemes .we study wireless twr with a multi - antenna relay and a total number of single - antenna sources , where is an even number . due to the fact that the inter - pair interference contains the desired signals of all the other sources ,these desired signals are also suppressed by any suboptimal beamformer which causes significant loss in the sinr , especially when is large .we propose to overcome that this shortcoming by first dividing a large number of source pairs into subgroups , each with a smaller number of source pairs , where is an even number .then , by using time division approach , the relay performs non - zf - based beamforming on each subgroup of users at one time , and take turn to serve all source pairs , to achieve a better throughput performance .next , we consider a given subgroup of sources , and derive twr beamformers for these sources . 
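the two - slot exchange described above is easy to write out for a single pair and a single - antenna relay . the sketch below assumes real , noiseless channels that are known at both sources , so it only illustrates how each source subtracts its own self - interference before decoding the partner 's symbol ; it is not the multi - pair , multi - antenna setting analysed in this paper .

```python
import numpy as np

rng = np.random.default_rng(1)

# symbols the two sources want to exchange
x1, x2 = rng.choice([-1.0, 1.0]), rng.choice([-1.0, 1.0])
h1, h2 = 0.9, 1.2          # (real, noiseless) channels to the relay, assumed known everywhere

# slot 1: both sources transmit simultaneously; the relay receives the superposition
y_relay = h1 * x1 + h2 * x2

# slot 2: amplify-and-forward broadcast of the combined signal (gain g), reciprocal channels
g = 1.0
y1 = h1 * (g * y_relay)    # received back at source 1
y2 = h2 * (g * y_relay)    # received back at source 2

# each source knows its own symbol and the channels, so it cancels self-interference
x2_hat = (y1 - g * h1 * h1 * x1) / (g * h1 * h2)
x1_hat = (y2 - g * h2 * h2 * x2) / (g * h1 * h2)

print(x1, x1_hat, x2, x2_hat)   # estimates match the transmitted symbols exactly (no noise)
```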
without loss of generality , we assume the -th source node , , is to exchange information with another source , where , .each -th source have single antenna , whereas the relay station r is equipped with antennas .let the vectors and denote , respectively , the channel response matrices from to r , and that from to r. we assume that the elements of and follow the distribution of circularly symmetric complex gaussian with zero mean and unity variance , which is denoted as .let and denote respectively , the transmitted symbols from to , and from to .we assume the optimal gaussian codebook is used at each , and therefore s are independent random variable each is distributed as .assume tdd is used and two time slots are needed for information exchange using analogue network coding ( anc) , . in the first time slot , all active source nodes transmit their signals simultaneously , the received baseband signal vector at r is given by , where is symbol index , is the transmit power at , is the received additive noise vector , and without loss of generality , it is assume that follows the distribution of , where is an identity matrix . throughout this paper, we assume that the s are given or fixed . at the relay , we consider af relaying using linear beamforming which is represented by a matrix .the transmit signal at r can be expressed in terms of its inputs as .we assume channel reciprocity for uplink and downlink transmission through the relay . in the second time slot , when is transmitted from r ,the channels from r to become , .the total transmit power at r , denoted as , can be shown as , where denotes the trace of .the anc is adopted as follows .we assume that using training and estimation , and are perfectly known at , prior to signal transmission .each of the can first cancel its self - interference and then coherently demodulate for .this yields for .we assume that the received noise is distributed as , and are independent of . , and is distributed as . at each , coherent signal detection can then be used to recover from .the signal - to - interference - plus - noise ratio ( sinr ) for the -th destination node . , can be expressed as +we define the uplink ( ul ) channel gain matrix ] for .we have the following new result . _proposition : _ the optimal beamforming matrix to achieve maximum sinr in ( [ eq : sinrk ] ) has the following structure : where is a matrix ._ proof : _ the above new result can be proven by extending the proof of , which considers the twr for the case of single source pair who are exchanging information . for the case of multiple source pairs ,each source is also subject to interference from the other source pairs .this so - called inter - pair interference term also depends only on as , which also spans the total signal subspace of .therefore , we obtain .the beamforming matrix can be solved as shown next .+ let , , represent the effective channel from to r , with given in ( [ eq : optimalbaf ] ) .similarly , let represent the effective channel from r to .the sinr formula in ( [ eq : sinrk ] ) can be written in terms of . throughout this paper ,the optimization problems are formulated in terms of the beamforming matrix and the effective channels . 
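for any given relay weight matrix , the sinr expression in ( [ eq : sinrk ] ) can be evaluated directly from the model above : the desired term comes from the partner 's channel , the self - interference term is removed , the remaining pairs contribute inter - pair interference , and the relay noise is amplified and forwarded . the random relay matrix , power levels and noise variances in the sketch below are placeholders , not values taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 4, 4                       # relay antennas, number of single-antenna sources (K/2 pairs)
P = np.ones(K)                    # source transmit powers
sigma2_r, sigma2_k = 0.1, 0.1     # relay / source noise variances

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
W = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))   # any relay weight matrix

def partner(k):                   # source k exchanges data with source partner(k)
    return k + 1 if k % 2 == 0 else k - 1

def sinr(k, H, W, P, s2r, s2k):
    """sinr at source k after self-interference cancellation (reciprocal channels)."""
    g = H[:, k]                                   # downlink channel = uplink channel
    kp = partner(k)
    desired = P[kp] * np.abs(g @ W @ H[:, kp]) ** 2
    interference = sum(P[j] * np.abs(g @ W @ H[:, j]) ** 2
                       for j in range(K) if j not in (k, kp))   # inter-pair interference
    af_noise = s2r * np.linalg.norm(g @ W) ** 2   # relay noise amplified and forwarded
    return desired / (interference + af_noise + s2k)

print([round(float(sinr(k, H, W, P, sigma2_r, sigma2_k)), 3) for k in range(K)])
```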
the minimum power ( mp ) beamformeris derived by minimizing the total relay transmit power with respect to the relay beamforming matrix ( or equivalently , ) , subject to sinr constraints , , for given transmit powers , , we can use the same approach as in , to develop an efficient algorithm based on the second - order cone programming ( socp ) to solve the problem in .we derive the suboptimal minimum interference ( mi ) beamformer that does not require computations via optimization technique .the tradeoff between the desired signals and interference is taken into account by minimizing the sum of inter - pair interference plus af noise at the output of the beamformer , subject to the constraints that the desired signal gain for each -th receiver is equal to a constant .the additive gaussian noise at each -th source , which is not affected by the beamforming matrix , and is neglected in the optimization . again in this case , the joint design of both receive and transmit beamforming is considered .the interference minimization problem is formulated as , where the above constraints in ( [ eq : minimizeinterference ] ) are introduced to preserve the desired signal components at each source , so as to minimize the inter - pair interference plus af noise component that point into any undesired direction . in ( [ eq : interference ] ) , represents the sum of inter - pair interference ( from other source pairs ) plus af noise that is imposed on . to solve for , the ul and dl channel response of each -th sourceis assumed to be known at the relay .we assume that the s are given , and write , , and , where , and denotes a equivalent beamforming weights vector which is generated by the rule of ( [ eqn : vector ] ) as , .\ ] ] the problem ( [ eq : minimizeinterference ] ) can be written in terms of as , we further define , , where both , and are matrices . by writing , , and , the problem ( [ eq : minimizeinterference2 ] )can be written as , where ] contains the k scalar constraints values , which are chosen to satisfy the sinr requirements of each source .the beamforming vector solution , denoted as , that satisfies ( [ eq : lcmv ] ) is solved as , the mi relay beamformer , denoted as , can then be obtained from as , where denotes the inverse operation of defined in .it can be shown that the solution maximizes the sinr of each source , where the constant is used to control the total relay power in ( [ eq : relay power ] ) .the iterative steps to search for are presented next .we propose the following joint grouping and beamforming scheme that divides a given large number of source pairs into smaller subgroups , and then apply the above beamformers to each subgroup . for simplicity ,here we consider that the grouping is done arbitrarily . by doingso we can reduce the feedback of channel state information and beamforming calculation during each relaying .* divide the total number of source pairs into subgroups , , and let denote the number of source in each subgroup . * for each -th subgroup , , compute the mi beamforming matrix as follows : * * given * ] .* * initialize * , . *compute the svd of the ul channel matrix .* estimate the correlation matrix .* compute the constraint matrix and the response vector . *compute the beamforming weights using . * obtain the mi relay beamforming matrix by arranging the beamforming weights of into the matrix form , . ** repeat * * * set . ** update by the bisection method : if is less than a given power constraint , set ; otherwise , . 
* * until * , where the small positive constant is chosen to ensure sufficient accuracy . * compute the mi relay beamforming matrix as for each -th subgroup , . *select the number of subgroup and its corresponding which results in the largest achievable sum - rate .in this section , the sum rate performance of the proposed twr beamformers are presented . for simplicity ,the channel correlations between the -th and the -th sources are set to be equal , that is , , . in fig .[ fig : rate comp no cor ] , two pairs of single - antenna source ( ) and a multi - antenna relay ( ) are considered in the simulations .the transmit power at each source , , and are fixed as , and the relay transmit power is set to be . both sources within each pairare set to have identical sinr requirement . with no channel correlation between different sources ,the proposed mi beamformer achieves the same achievable rate region as the optimal mp beamforming scheme .however , when the channel correlations between different source pairs , and within each source pair , are set to be higher , the proposed mi beamformer achieves a smaller rate region ( not shown here due to space constraint ) . this is because whenthe channels of different source pairs are correlated , the desired signal components that point in the direction of the inter - pair interference will also be suppressed by the mi beamformer , which in turn reduces the achievable rate for each source . , , fixed , , and . ] , , . ] in fig .[ fig : sum - rate no cor ] , we present the achievable sum - rate versus snr at the relay , which is defined as , for the proposed mi beamformer with various number of subgroups .four pairs of single - antenna source ( ) and a multi - antenna relay ( ) are considered , and the channels between different sources are uncorrelated .time division is used to serve different subgroup .we observe that either the smaller subgroups or large group transmission does not always perform the best in twr . with low snr at the relay ,the case of four groups ( each with one source pair ) performs the best , whereas for large snr , the case of a single group ( with four source pairs ) performs the best .this is because with either large number of source pairs or small relay transmit power , the mi beamformer tends to suppress the interference more for the case of single group ( with more interfering users ) , which results in sinr loss . to overcome this shortcoming ,joint grouping and beamforming scheme can be used to reduce sinr loss as follows . for small snr, we should apply the proposed beamformers to four different subgroups , and for large snr , we should apply the proposed beamformers to a single group ( with four source pairs ) . the improved sum - rate performance by joint grouping and mi beamforming scheme is highlighted by the solid line in fig .[ fig : sum - rate no cor ] .the optimal grouping and selection of the snr thresholds correspond to different number of subgroups are interesting subjects for further study .new optimal and suboptimal beamformers for anc - based twr with multiple source pairs are derived by taking into account the tradeoff between the desired signals and the inter - pair interference . for low snr, a better sum - rate performance can be achieved by first diving a large number of source pairs into smaller subgroups , and then apply beamforming to each subgroup using time division .this research is partly supported by the singapore university technology and design ( grant no .srg - epd-2010 - 005 ) .s. 
katti , s. gollakota , and d. katabi , embracing wireless interference : analog network coding " , _ computer science and artificial intelligence laboratory technical report _ , mit - csail - tr-2007 - 012 , feb .23 , 2007 .i. hammerstr m , m. kuhn , c. esli , j. zhao , a. wittneben , and g. bauch , mimo two - way relaying with transmit csi at the relay " , in _ proc .ieee signal proc . adv .wireless comm .( spaw ) _ , jun .t. unger and a. klein , linear transceive filters for relay stations with multiple antennas in the two - way relay channel " , _16th ist mobile and wireless communications summit _ , budapest , hungary , jul .2007 .r. f. wyrembelski , t. j. oechtering , i. bjelakovi ., c. schnurr , and h. boche , capacity of gaussian mimo bidirectional broadcast channels " , in _ proc .inf . theory ( isit ) _ , jul .2008 .r. zhang , y. c. liang , c. c. chai , and s. cui , optimal beamforming for two - way multi - antenna relay channel with analogue network coding " , _ ieee journal on selected areas in communication _5 , october 2009 , pp .3988 - 3999 .l. song , y. li , h. peng , b. jiao , and a. v. vasilakos , differential modulation for bidirectional relaying with analog network coding " , _ ieee transactions on signal processing _58 , no . 7 , pp .3933 - 3938 , jun . 2010 .l. song , g. hong , b. jiao , and m. debbah , joint relay selection and analog network coding using differential modulation in two - way relay channels " , _ ieee transactions on vehicular technology _ ,59 , no . 6 , jul .t. abe , h. shi , t. asai and h. yoshino , relay techniques for mimo wireless networks with multiple source and destination pairs " , in _ eurasip journal on wireless communications and networking _ , pp. 1 - 9 , 2006 . | we study amplified - and - forward ( af)-based two - way relaying ( twr ) with multiple source pairs , which are exchanging information through the relay . each source has single antenna and the relay has multi - antenna . the optimal beamforming matrix structure that achieves maximum signal - to - interference - plus - noise ratio ( sinr ) for twr with multiple source pairs is derived . we then present two new non - zero - forcing based beamforming schemes for twr , which take into consideration the tradeoff between preserving the desired signals and suppressing inter - pair interference between different source pairs . joint grouping and beamforming scheme is proposed to achieve a better signal - to - interference - plus - noise ratio ( sinr ) when the total number of source pairs is large and the signal - to - noise ratio ( snr ) at the relay is low . analogue network coding ( anc ) , two - way relaying ( twr ) , multiple source pairs , information exchange , analogue relaying , optimal beamforming . + [ section ] [ section ] [ section ] [ section ] [ section ] |
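referring back to the minimum - interference design above : a quadratic form minimised subject to linear constraints of the type c^h w = f has the standard closed - form solution w = r^{-1 } c ( c^h r^{-1 } c )^{-1 } f . the sketch below only evaluates that generic formula and checks the constraints ; whether ( [ eq : lcmv ] ) is written in exactly this form can not be read off the extracted text , so this should be treated as the textbook solution of that class of problems rather than a verbatim reproduction of the paper 's beamformer .

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 12, 4     # length of the stacked beamforming vector, number of linear constraints

# R: interference-plus-noise correlation matrix (hermitian positive definite)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T + n * np.eye(n)

C = rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))   # constraint matrix
f = np.ones(K, dtype=complex)                                        # desired responses

# minimise w^H R w subject to C^H w = f  ->  w = R^{-1} C (C^H R^{-1} C)^{-1} f
Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

print(np.allclose(C.conj().T @ w, f))          # constraints on the desired signals are met
print(float((w.conj() @ R @ w).real))          # minimised interference-plus-noise power
```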
the solution of inverse problems provides a rich source of applications of the bayesian nonparametric methodology .it encompasses a broad range of applications from partial differential equations ( pdes ) , where there is a well - developed theory of classical , non - statistical , regularization . on the other hand ,the area of nonparametric bayesian statistical estimation and in particular the problem of posterior consistency has attracted a lot of interest in recent years ; see for instance . despite this, the formulation of many of these pde inverse problems using the bayesian approach is in its infancy .furthermore , the development of a theory of bayesian posterior consistency , analogous to the theory for classical regularization , is under - developed with the primary contribution being the recent paper .this recent paper provides a roadmap for what is to be expected regarding bayesian posterior consistency , but is limited in terms of applicability by the assumption of simultaneous diagonalizability of the three linear operators required to define bayesian inversion . our aim in this paper is to make a significant step in the theory of bayesian posterior consistency for linear inverse problems by developing a methodology which sidesteps the need for simultaneous diagonalizability .the central idea underlying the analysis is to work with precision operators rather than covariance operators , and thereby to enable use of powerful tools from pde theory to facilitate the analysis .let be a separable hilbert space , with norm and inner product , and let be a known self - adjoint and positive - definite linear operator with bounded inverse .we consider the inverse problem to find from , where is a noisy observation of .we assume the model , where is an additive noise .we will be particularly interested in the small noise limit where .a popular method in the deterministic approach to inverse problems is the generalized tikhonov - phillips regularization method in which is approximated by the minimizer of a regularized least squares functional : define the tikhonov - phillips functional where are bounded , possibly compact , self - adjoint positive - definite linear operators .the parameter is called the regularization parameter , and in the classical non - probabilistic approach the general practice is to choose it as an appropriate function of the noise size , which shrinks to zero as , in order to recover the unknown parameter . 
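a small discrete sketch of the tikhonov - phillips idea above : a smoothing ( hence badly conditioned ) forward matrix , noisy data , and the regularised least - squares solution obtained from the normal equations . the kernel width , the first - difference penalty and the tested values of the regularisation parameter are illustrative choices , not the operators or the parameter rule analysed in this paper .

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
x = np.linspace(0, 1, n)

# a smoothing (hence ill-conditioned) forward operator: discrete convolution kernel
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

u_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(7 * np.pi * x)
delta = 1e-2
y = A @ u_true + delta * rng.standard_normal(n)        # noisy data

# generalised Tikhonov-Phillips: minimise ||A u - y||^2 + lam * ||L u||^2
L = np.eye(n) - np.diag(np.ones(n - 1), 1)             # simple first-difference penalty
def tikhonov(lam):
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

# the reconstruction error depends strongly on how the regularisation parameter is chosen
for lam in [1e-8, 1e-4, 1e-1]:
    err = np.linalg.norm(tikhonov(lam) - u_true) / np.linalg.norm(u_true)
    print(f"lambda = {lam:.0e}   relative error = {err:.3f}")
```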
in this paperwe adopt a bayesian approach for the solution of problem ( [ eq : int1 ] ) , which will be linked to the minimization of via the posterior mean .we assume that the prior distribution is gaussian , , where and is a self - adjoint , positive - definite , trace class , linear operator on .we also assume that the noise is gaussian , , where is a self - adjoint positive - definite , bounded , but not necessarily trace class , linear operator ; this allows us to include the case of white observational noise .we assume that the , generally unbounded , operators and have been maximally extended to self - adjoint positive - definite operators on appropriate domains .the unknown parameter and the noise are considered to be independent , thus the conditional distribution of the observation given the unknown parameter ( termed the likelihood ) is also gaussian with distribution define and let in finite dimensions the probability density of the posterior distribution , that is , the distribution of the unknown given the observation , with respect to the lebesgue measure is proportional to this suggests that , in the infinite - dimensional setting , the posterior is gaussian , where we can identify the posterior covariance and mean by the equations and by completing the square .we present a method of justifying these expressions in section [ sec : justification ] .we define and observe that the dependence of on and is only through . since the posterior mean also depends only on : .this is not the case for the posterior covariance , since it depends on and separately : . in the following ,we suppress the dependence of the posterior covariance on and and we denote it by .observe that the posterior mean is the minimizer of the functional , hence also of that is , the posterior mean is the tikhonov - phillips regularized approximate solution of problem ( [ eq : int1 ] ) , for the functional with . in and ,formulae for the posterior covariance and mean are identified in the infinite - dimensional setting , which avoid using any of the inverses of the prior , posterior or noise covariance operators .they obtain which are consistent with formulae ( [ eq : int4 ] ) and ( [ eq : int7 ] ) for the finite - dimensional case . in thisis done only for of trace class while in the case of white observational noise was included .we will work in an infinite - dimensional setting where the formulae ( [ eq : int4 ] ) , ( [ eq : int7 ] ) for the posterior covariance and mean can be justified .working with the unbounded operator opens the possibility of using tools of analysis , and also numerical analysis , familiar from the theory of partial differential equations . in our analysiswe always assume that is regularizing , that is , we assume that dominates in the sense that it induces stronger norms than this is a reasonable assumption since otherwise we would have ( here is used loosely to indicate two operators which induce equivalent norms ; we will make this notion precise in due course ) .this would imply that the posterior mean is , meaning that we attempt to invert the data by applying the , generally discontinuous , operator ( * ? ? ?* proposition 2.7 ) .we study the consistency of the posterior in the frequentist setting . to this end, we consider data which is a realization of where is a fixed element of ; that is , we consider observations which are perturbations of the image of a fixed true solution by an additive noise , scaled by . 
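in finite dimensions the conjugate gaussian calculation above can be checked directly : the posterior precision is the sum of the scaled data precision and the prior precision , and the posterior mean coincides with the minimiser of the tikhonov - phillips functional . the sketch fixes one particular convention for the scalings ( noise covariance , prior covariance and the factor in front of the data misfit ) because the exact scalings are elided in the extracted text ; the dimensions and matrices are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
A  = 0.3 * rng.standard_normal((n, n))                    # bounded forward operator
C0 = np.diag(1.0 / np.arange(1, n + 1) ** 2.0)            # prior covariance, decaying spectrum
C1 = 0.5 * np.eye(n)                                      # noise covariance
lam = 1e-2                                                # scaling in front of the noise

u_dagger = rng.standard_normal(n) / np.arange(1, n + 1)   # a fixed "true" element
y = A @ u_dagger + np.sqrt(lam) * rng.multivariate_normal(np.zeros(n), C1)

# posterior precision and mean, in the spirit of (eq:int4) and (eq:int7):
#   B^{-1} = (1/lam) A^T C1^{-1} A + C0^{-1},   B^{-1} m = (1/lam) A^T C1^{-1} y
C0i, C1i = np.linalg.inv(C0), np.linalg.inv(C1)
B_inv = (1.0 / lam) * A.T @ C1i @ A + C0i
m = np.linalg.solve(B_inv, (1.0 / lam) * A.T @ C1i @ y)

# the same vector minimises the Tikhonov-Phillips functional
#   (1/(2 lam)) ||C1^{-1/2}(y - A u)||^2 + (1/2) ||C0^{-1/2} u||^2
def J(u):
    r = y - A @ u
    return 0.5 / lam * r @ C1i @ r + 0.5 * u @ C0i @ u

perturbations = [m + 1e-3 * rng.standard_normal(n) for _ in range(5)]
print(all(J(m) < J(v) for v in perturbations))            # True: m is the minimiser
```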
since the posterior depends through its mean on the data and also through its covariance operator on the scaling of the noise and the prior , this choice of data model gives as posterior distribution the gaussian measure , where is given by ( [ eq : int4 ] ) and we study the behavior of the posterior as the noise disappears ( ) .our aim is to show that it contracts to a dirac measure centered on the fixed true solution . in particular, we aim to determine such that where the expectation is with respect to the random variable distributed according to the data likelihood . as in the deterministic theory of inverse problems , in order to get convergence in the small noise limit , we let the regularization disappear in a carefully chosen way , that is , we will choose such that as . the assumption that dominates , shows that is a singularly perturbed unbounded ( usually differential ) operator , with an inverse which blows - up in the limit .this together with equation ( [ eq : int7 ] ) , opens up the possibility of using the analysis of such singular limits to study posterior contraction : on the one hand , as , becomes unbounded ; on the other hand , as , we have more accurate data , suggesting that for the appropriate choice of we can get . in particular , we will choose as a function of the scaling of the noise , under the restriction that the induced choice of , is such that as .the last choice will be made in a way which optimizes the rate of posterior contraction , defined in ( [ eq : main1 ] ) . in generalthere are three possible asymptotic behaviors of the scaling of the prior as , : 1 . ; we increase the prior spread , if we know that draws from the prior are more regular than ; 2 . fixed ; draws from the prior have the same regularity as ; 3 . at a rate slower than ; we shrink the prior spread , when we know that draws from the prior are less regular than the problem of posterior contraction in this context is also investigated in and . in , sharp convergence ratesare obtained in the case where and are simultaneously diagonalizable , with eigenvalues decaying algebraically , and in particular , that is , the data are polluted by white noise . in this paperwe relax the assumptions on the relations between the operators and , by assuming that appropriate powers of them induce comparable norms ( see section [ sec : assumptions ] ) . in , the non - diagonal case is also examined ; the three operators involved are related through domain inclusion assumptions .the assumptions made in can be quite restrictive in practice ; our assumptions include settings not covered in , and in particular the case of white observational noise . in the following section we present our main results which concern the identification of the posterior ( theorem [ justth ] ) and the posterior contraction ( theorems [ pdecor1 ] and [ pdecor2 ] ) . in section [ sec : assumptions ] we present our assumptions and their implications .the proofs of the main results are built in a series of intermediate results contained in sections [ sec : prmean]-[sec : main ] . in section [ sec : prmean ] , we reformulate equation ( [ eq : int7 ] ) as a weak equation in an infinite - dimensional space . 
in section [ sec :justification ] , we present a new method of identifying the posterior distribution : we first characterize it through its radon - nikodym derivative with respect to the prior ( theorem [ prop0 ] ) and then justify the formulae ( [ eq : int4 ] ) , ( [ eq : int7 ] ) for the posterior covariance and mean ( proof of theorem [ justth ] ) . in section [ sec : normbounds ] , we present operator norm bounds for in terms of the singular parameter , which are the key to the posterior contraction results contained in section [ sec : main ] and their corollaries in section [ sec : mainresults ] ( theorems [ pdeth1 ] , [ pdeth2 ] and [ pdecor1 ] , [ pdecor2 ] ) . in section [ sec : ex ] , we present some nontrivial examples satisfying our assumptions and provide the corresponding rates of convergence . in section [ sec : diag ] , we compare our results to known minimax rates of convergence in the case where and are all diagonalizable in the same eigenbasis and have eigenvalues that decay algebraically .finally , section [ sec : conclusions ] is a short conclusion .the entire paper rests on a rich set of connections between the theory of stochastic processes and various aspects of the theory of linear partial differential equations .in particular , since the green s function of the precision operator of a gaussian measure corresponds to its covariance function , our formulation and analysis of the inverse problem via precision operators is very natural . furthermore , estimates on the inverse of singular limits of these precisions , which have direct implications for localization of the green s functions , play a key role in the analysis of posterior consistency .in this section we present our main results . we postpone the rigorous presentation of our assumptions to the next section and the proofs and technical lemmasare presented together with intermediate results of independent interest in sections [ sec : prmean ] - [ sec : main ] . 
recall that we assume a gaussian prior and a gaussian noise distribution .our first assumption concerns the decay of the eigenvalues of the prior covariance operator and enables us to quantify the regularity of draws from the prior .this is encoded in the parameter ; smaller implies more regular draws from the prior .we also assume that and , for some , where is s used in the manner outlined in section [ sec : intro ] , and defined in detail in section [ sec : assumptions ] .finally , we assume that the problem is sufficiently ill - posed with respect to the prior .this is quantified by the parameter we assume to be larger than ; for a fixed prior , the larger is , the more ill - posed the problem .our first main theorem identifies the posterior measure as gaussian and justifies formulae [ eq : int4 ] and [ eq : int7 ] .this reformulation of the posterior in terms of the precision operator is key to our method of analysis of posterior consistency and opens the route to using methods from the study of partial differential equations ( pdes ) .these methods will also be useful for the development of numerical methods for the inverse problem .[ justth ] under the assumptions [ a2 ] , the posterior measure is gaussian , where is given by ( [ eq : int4 ] ) and is a weak solution of ( [ eq : int7 ] ) .we now present our results concerning frequentist posterior consistency of the bayesian solution to the inverse problem .we assume to have data as in ( [ eq : int10 ] ) , and examine the behavior of the posterior , where is given by ( [ eq : int11 ] ) , as the noise disappears ( ) .the first convergence result concerns the convergence of the posterior mean to the true solution in a range of weighted norms induced by powers of the prior covariance operator .the spaces are rigorously defined in the following section .the second result provides rates of posterior contraction of the posterior measure to a dirac centered on the true solution as described in ( [ eq : main1 ] ) . in both results ,we assume a priori known regularity of the true solution and give the convergence rates as functions of .[ pdecor1 ] assume , where and let , where ] for 2 . if , for 3 .if and for and then the method does not give convergence .[ pdecor2]assume , where . under the assumptions [ a2 ] , we have the following optimized rates for the convergence in ( [ eq : main1 ] ) , where is arbitrarily small : 1 . if ] 6 .[ a2iv]; ] notice that , by assumption [ a2]([a2delta ] ) we have which , in combination with assumption [ a2]([a2i ] ) , implies that capturing the idea that the regularization through is indeed a regularization .in fact the assumption connects the ill - posedness of the problem to the regularity of the prior .we exhibit this connection in the following example : assume and are simultaneously diagonalizable , with eigenvalues having algebraic decay , and , respectively , for and so that is trace class. then assumptions ( [ a1]),([a2i])-([a2v ] ) are trivially satisfied with and .the assumption ( [ a2delta ] ) is then equivalent to .that is , for a certain degree of ill - posedness ( encoded in the difference ) we have a minimum requirement on the regularity of the prior ( encoded in ) .put differently , for a certain prior , we require a minimum degree of ill - posedness .we refer the reader to section [ sec : ex ] for nontrivial examples satisfying assumptions [ a2 ] . 
in the following ,we exploit the regularity properties of a white noise to determine the regularity of draws from the prior and the noise distributions using assumption [ a2]([a1 ] ) .we consider a white noise to be a draw from , that is a random variable . even though the identity operator is not trace class in , it is trace class in a bigger space , where is sufficiently large .[ l1]under the assumption [ a2]([a1 ] ) we have : 1 . let be a white noise .then for all .2 . let .then -a.s .for every . 1 .we have that , thus is equivalent to being of trace class . by the assumption it suffices to have .we have , where is a white noise , therefore using part ( i ) we get the result . [ remark1 ]note that as changes , both the hilbert scale and the decay of the coefficients of a draw from change .the norms are defined through powers of the eigenvalues . if , then has eigenvalues that decay like , thus an element has coefficients , that decay faster than .as gets closer to zero , the space for fixed , corresponds to a faster decay rate of the coefficients . at the same time , by the last lemma , draws from belong to for all .consequently , as gets smaller , not only do draws from belong to for smaller , but also the spaces for fixed reflect faster decay rates of the coefficients .the case corresponds to having eigenvalues that decay faster than any negative power of .a draw from in that case has coefficients that decay faster than any negative power of . in the next lemma, we use the interrelations between the operators to obtain additional regularity properties of draws from the prior , and also determine the regularity of draws from the noise distribution and the joint distribution of the unknown and the data .[ l2 ] under the assumptions [ a2 ] we have : 1 . -a.s .for all 2 .-a.s. 3 . -a.s .for all 4 . -a.s . for all . 1 .we can choose an as in the statement by the assumption [ a2]([a2delta ] ) .by lemma [ l1](ii ) , it suffices to show that .indeed , 2 . under assumption [ a2]([a2i ] )it suffices to show that .indeed , by lemma [ l1](ii ) , we need to show that , which is true since and we assume , thus 3 . it suffices to show it for any . noting that is a white noise , using assumption [ a2]([a2ii ] ), we have by lemma [ l1](i ) since .4 . by ( ii )we have that is -a.s . in the cameron - martin space of the gaussian measures and , thus the measures and are -a.s . equivalent ( * ? ? ?* theorem 2.8 ) and ( iii ) gives the result . 
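the regularity statements in lemma [ l1 ] and remark [ remark1 ] are easy to see numerically in the diagonal case by sampling a truncated karhunen - loeve expansion . the decay exponent below is a hypothetical choice rather than one of the elided values from the text ; the point is only that the weighted sums defining the stronger norms stabilise below a threshold determined by the decay of the eigenvalues and keep growing with the truncation level above it .

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = 1.0                                   # hypothetical decay: prior eigenvalues ~ k^(-2*alpha)
N = 10 ** 6
k = np.arange(1, N + 1, dtype=float)
u_k = k ** (-alpha) * rng.standard_normal(N)  # coefficients of one draw from the prior

# weighted norm ||u||_s^2 = sum_k k^(2s) u_k^2, evaluated at two truncation levels:
# it stabilises for s < alpha - 1/2 and keeps growing with the truncation above that threshold
for s in [0.25, 0.45, 0.55, 0.75]:
    w = k ** (2 * s) * u_k ** 2
    print(f"s = {s:.2f}   N = 1e4: {w[:10**4].sum():10.2f}   N = 1e6: {w.sum():10.2f}")
```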
the theory is naturally developed in the scale of hilbert spaces defined via the prior .however application of the theory may be more natural in a different functional setting .we explain how the two may be connected .let be an orthonormal basis of the separable hilbert space .we define the spaces as follows : for we set and the spaces are defined by duality , for example , if we restrict ourselves to functions on a periodic domain ^d ] .observe that , under the assumptions [ a2]([a1]),([a2delta]),([a2i]),([a2ii ] ) , for where sufficiently small , the lemma [ l2 ] implies on the one hand that -almost surely and on the other hand that -almost surely .[ asslem ] under the assumptions [ a2]([a1]),([a2i]),([a2iii]),([a2iv ] ) , for any ] .under the assumption [ a2]([a2i ] ) the following operator norm bounds hold : there is independent of such that and in particular , if , interpolation of the two bounds gives where ] and ] we have that let .then , for any , we have the decomposition where are the eigenfunctions of and since and , we have the first term on the right hand side is increasing in , while the second is decreasing , so we can optimize by choosing making the two terms equal , that is to obtain the claimed rate .in this section we employ the developments of the preceding sections to study the posterior consistency of the bayesian solution to the inverse problem . that is , we consider a family of data sets given by ( [ eq : int10 ] ) and study the limiting behavior of the posterior measure as . intuitively we would hope to recover a measure which concentrates near the true solution in this limit . following the approach in , , and , we quantify this idea as in ( [ eq : main1 ] ) . by the markov inequality we have so that it suffices to show that in addition to , there is a second small parameter in the problem , namely the regularization parameter , , and we will choose a relationship between and in order to optimize the convergence rates . we will show that determination of optimal convergence rates follows directly from the operator norm bounds on derived in the previous section , which concern only dependence ; relating to then follows as a trivial optimization .thus , the dependence of the operator norm bounds in the previous section forms the heart of the posterior contraction analysis .the relationship between and will induce a relationship between and , where being the scaling parameter in the prior covariance is the relevant parameter in the current bayesian framework .we now present our convergence results . in theorem [ pdeth1 ]we study the convergence of the posterior mean to the true solution in a range of norms , while in theorem [ pdeth2 ] we study the concentration of the posterior near the true solution as described in ( [ eq : main1 ] ) .the proofs of theorems [ pdeth1 ] and [ pdeth2 ] are provided later in the current section .the two main convergence results , theorems [ pdecor1 ] and [ pdecor2 ] follow as direct corollaries of remark [ r1 ] and theorems [ pdeth1 ] and [ pdeth2 ] respectively . [ pdeth1 ] let . under the assumptions [ a2 ] , we have that , for the choice and for any ] , chosen so that , for , where [ pdeth2]let . under the assumptions [ a2 ] , we have that , for , the convergence in ( [ eq : main1 ] ) holds with the result holds for any ] .thus the minimum requirement for convergence is in agreement to our assumption . 
on the other hand , to obtain the optimal rate ( which corresponds to choosing as small as possible ) we need to choose .if then the right hand side is negative so we have to choose , hence we can not achieve the optimal rate .we say that the method saturates at which reflects the fact that the true solution has more regularity than the method allows us to exploit to obtain faster convergence rates .2 . to get convergence we also need for a . by lemma [ l2](iii ) , it suffices to have .this means that we need , which holds by the assumption [ a2]([a2delta ] ) , in order to be able to choose . on the other hand , since and , we have that thus we can always choose in an optimal way , that is , we can always choose where is arbitrarily small .if we want draws from to be in then by lemma [ l1](ii ) we need .since the requirement for the method to give convergence is while , we can never have draws exactly matching the regularity of the prior . on the other handif we want an undersmoothing prior ( which according to in the diagonal case gives asymptotic coverage equal to 1 ) we need , which we always have .this , as discussed in section [ sec : intro ] , gives an explanation to the observation that in both of the above theorems we always have as .when in theorem [ pdeth2 ] and in theorem [ pdecor2 ] below , we get suboptimal rates .the reason is that our analysis to obtain the error in the -norm is based on interpolating between the error in the -norm and the error in the -norm . when interpolation is not possible since the -norm is now weaker than the -normhowever , we can at least bound the error in the -norm by the error in the -norm , thus obtaining a suboptimal rate .note , that the case does not necessarily correspond to the well posed case : by lemma [ l2 ] we can only guarantee that a draw from the noise distribution lives in , while the range of is formally .hence , in order to have a well posed problem we need , or equivalently . this can happen despite our assumption , when and for appropriate choice of and . in this case , regularization is unnecessary .note that , since the posterior is gaussian , the left hand side in ( [ eq : main2 ] ) is the square posterior contraction which is the sum of the mean integrated squared error ( mise ) and the posterior spread .let .by lemma [ i d ] , the relationship ( [ eq : int10 ] ) between and and the equation ( [ eq : int11 ] ) for , we obtain where the equations hold in , since by a similar argument to the proof of proposition [ meanlem ] we have . by subtractionwe get therefore as an equation in .using the fact that the noise has mean zero and the relation ( [ eq : int6 ] ) , equation ( [ eq : main5 ] ) implies that we can split the square posterior contraction into three terms provided the right hand side is finite .a consequence of the proof of theorem [ justth ] is that is trace class .note that for a white noise , we have that which for since by lemma [ l1 ] we have that , provides the bound where is independent of . if are chosen sufficiently large so that and then we see that where is independent of and . thus identifying in ( [ eq : main1 ] ) can be achieved simply through properties of the inverse of and its parametric dependence on . 
in the following, we are going to study convergence rates for the square posterior contraction , ( [ eq : main6 ] ) , which by the previous analysis will secure that for at a rate almost as fast as the square posterior contraction .this suggests that the error is determined by the mise and the trace of the posterior covariance , thus we optimize our analysis with respect to these two quantities . in the situation where and are diagonalizable in the same eigenbasis is studied , and it is shown that the third term in equation ( [ eq : main6 ] ) is bounded by the second term in terms of their parametric dependence on .the same idea is used in the proof of theorem [ pdeth2 ] .we now provide the proofs of theorem [ pdeth1 ] and theorem [ pdeth2 ] . since has zero mean , we have by ( [ eq : main5 ] ) using proposition [ pdelem1 ] and assumption [ a2]([a2v ] ) , we get since the common parenthesis term , consists of a decreasing and an increasing term in , we optimize the rate by choosing such that the two terms become equal , that is , .we obtain , by interpolating between the two last estimates we obtain the claimed rate . recall equation ( [ eq : main6]) the idea is that the third term is always dominated by the second term . combining equation ( [ eq : main7 ] ) with proposition [ pdelem3 ] , we have that .\nonumber\ ] ] 1 .suppose so that by proposition [ pdelem1 ] we have , where ] ; 3 . ] 4 . ] . the assumptions [ a2 ] are satisfied in this example .we have already seen that the first two assumptions are satisfied . 1 .we need to show that and are bounded operators in .indeed , which is bounded by lemma [ prolem ] applied for and .for we have , , which again by lemma [ prolem ] is the composition of two bounded operators .2 . since for , it suffices to show that it holds for all ] we have and . in particular , for any , if then for any ] . by the last proposition applied for and since is bounded , it suffices to show that and are bounded in .in fact it suffices to show that is bounded since indeed , since for general , let , then as before it suffices to show that is bounded in .again , using the fact that , we have by the product rule for derivatives that is bounded , provided .the operators are compact in , since they are compositions between the compact operator and the bounded operator .positivity of the operator and nonnegativity of the operator show that can not be an eigenvalue of , so that by the fredholm alternative ( * ? ? ?* , theorem 7 ) we have that are bounded in .in the case where and , are all diagonalizable in the same eigenbasis our assumptions are trivially satisfied , provided . in ,sharp convergence rates are obtained for the convergence in ( [ eq : main1 ] ) , in the case where the three relevant operators are simultaneously diagonalizable and have spectra that decay algebraically ; the authors only consider the case since in this diagonal setting the colored noise problem can be reduced to the white noise one .the rates in agree with the minimax rates provided the scaling of the prior is optimally chosen , . in figure[ fig ] ( cf .section [ sec : mainresults ] ) we have in green the rates of convergence predicted by theorem [ pdecor2 ] and in blue the sharp convergence rates from , plotted against the regularity of the true solution , , in the case where and has eigenvalues that decay like . 
in this case and , so that .as explained in remark [ r1 ] , the minimum regularity for our method to work is and our rates saturate at , that is , in this example at .we note that for $ ] our rates agree , up to arbitrarily small , with the sharp rates obtained in , for our rates are suboptimal and for the method fails . in , the convergence rates are obtained for and the saturation point is at , that is , in this example at . in general the pde methodcan saturate earlier ( if ) , at the same time ( if ) , or later ( if ) compared to the diagonal method presented in .however , the case in which our method saturates later , is also the case in which our rates are suboptimal , as explained in remark [ r1](iv ) .the discrepancies can be explained by the fact that in proposition [ pdelem1 ] , the choice of which determines both the minimum requirement on the regularity of and the saturation point , is the same for both of the operator norm bounds .this means that on the one hand to get convergence of the term in equation ( [ eq : main6 ] ) in the proof of theorem [ pdeth2 ] , we require conditions which secure the convergence in the stronger -norm and on the other hand the saturation rate for this term is the same as the saturation rate in the weaker -norm .for example , when the saturation rate in the pde method is the rate of the -norm hence we have the same saturation point as the rates in .in particular , we have agreement of the saturation rate when , which corresponds to the problem where we directly observe the unknown function polluted by white noise ( termed the _ white noise model _ ) .we have presented a new method of identifying the posterior distribution in a conjugate gaussian bayesian linear inverse problem setting ( section [ sec : mainresults ] and section [ sec : justification ] ) .we used this identification to examine the posterior consistency of the bayesian approach in a frequentist sense ( section [ sec : mainresults ] and section [ sec : main ] ) .we provided convergence rates for the convergence of the expectation of the mean error in a range of norms ( theorem [ pdeth1 ] , theorem [ pdecor1 ] ) .we also provided convergence rates for the square posterior contraction ( theorem [ pdeth2 ] , theorem [ pdecor2 ] ) .our methodology assumed a relation between the prior covariance , the noise covariance and the forward operator , expressed in the form of norm equivalence relations ( assumptions [ a2 ] ) .we considered gaussian noise which can be white . in order for our methods to work we required a certain degree of ill - posedness compared to the regularity of the prior ( assumption [ a2]([a2delta ] ) ) and for the convergence rates to be valid a certain degree of regularity of the true solution . 
in the case where the three involved operators are all diagonalizable in the same eigenbasis , when the problem is sufficiently ill - posed with respect to the prior , and for a range of values of , the parameter expressing the regularity of the true solution , our rates agree ( up to arbitrarily small ) with the sharp ( minimax ) convergence rates obtained in ( section [ sec : diag ] ) . our optimized rates rely on rescaling the prior depending on the size of the noise , achieved by choosing the scaling parameter in the prior covariance as an appropriate function of the parameter multiplying the noise . however , the relationship between and depends on the unknown regularity of the true solution , which raises the question how to optimally choose in practice . an attempt to address this question in a similar but more restrictive setting than ours is taken in , where an empirical bayes maximum likelihood based procedure giving a data driven selection of is presented . a different approach is taken in in the simultaneously diagonalizable case . as discussed in , for a fixed value of independent of , the rates are optimal only if the regularity of the prior exactly matches the regularity of the truth . in , an empirical bayes maximum likelihood based procedure and a hierarchical method are presented providing data driven choices of the regularity of the prior , which are shown to give optimal rates up to slowly varying terms . we currently investigate hierarchical methods with conjugate priors and hyperpriors for data driven choices of both the scaling parameter of the prior and the noise level . the methodology presented in this paper is extended to drift estimation for diffusion processes in . future research includes the extension to an abstract setting which includes both the present paper and as special cases . other possible directions are the consideration of nonlinear inverse problems , the use of non - gaussian priors and/or noise and the extension of the credibility analysis presented in to a more general setting . | we consider a bayesian nonparametric approach to a family of linear inverse problems in a separable hilbert space setting with gaussian noise . we assume gaussian priors , which are conjugate to the model , and present a method of identifying the posterior using its precision operator . working with the unbounded precision operator enables us to use partial differential equations ( pde ) methodology to obtain rates of contraction of the posterior distribution to a dirac measure centered on the true solution . our methods assume a relatively weak relation between the prior covariance , noise covariance and forward operator , allowing for a wide range of applications .
posterior consistency , posterior contraction , gaussian prior , posterior distribution , inverse problems 62g20 , 62c10 , 35r30 , 45q05 |
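the diagonal comparison in section [ sec : diag ] above can be mimicked with a short sequence - space experiment : algebraically decaying forward operator and prior , white observational noise , a fixed truth of prescribed regularity , and the coordinatewise conjugate posterior mean . the exponents and the rescaling rule tau ( delta ) below are ad - hoc placeholders rather than the optimised choices of theorem [ pdecor2 ] , so the printed exponents only illustrate that the squared error decays algebraically in the noise level , not that it attains the rates of the paper .

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4000
k = np.arange(1, N + 1, dtype=float)

# hypothetical exponents for the simultaneously diagonal case:
ell, alpha, beta = 1.0, 1.2, 1.5        # forward smoothing, prior decay, true-solution regularity
a  = k ** (-ell)                         # eigenvalues of the forward operator
c0 = k ** (-2 * alpha)                   # eigenvalues of the prior covariance (white noise C1 = I)
u_true = k ** (-beta - 0.5) * np.cos(3 * k)   # a fixed truth of Sobolev-type regularity ~ beta

def mise(delta, tau, trials=50):
    """monte-carlo estimate of E||posterior mean - truth||^2 in the sequence model."""
    errs = []
    for _ in range(trials):
        y = a * u_true + delta * rng.standard_normal(N)
        # coordinatewise conjugate posterior mean with prior variance tau^2 * c0
        m = (tau ** 2 * c0 * a) / (tau ** 2 * c0 * a ** 2 + delta ** 2) * y
        errs.append(np.sum((m - u_true) ** 2))
    return np.mean(errs)

deltas = np.array([1e-1, 3e-2, 1e-2, 3e-3])
errors = [mise(d, tau=d ** 0.3) for d in deltas]     # tau(delta): an ad-hoc rescaling rule
rates = np.diff(np.log(errors)) / np.diff(np.log(deltas))
print("empirical decay exponents of the squared error:", np.round(rates, 2))
```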
we consider the hyperbolic conservation laws with the state vector u(t , x ) : , for a time interval ] with the initial condition . figure [ 2d_figure3 ] shows the rbf - eno solutions at various times at ( left figure ) and the pointwise errors by the eno ( blue ) and rbf - eno ( red ) methods with and at . the left figure clearly shows that the rbf - eno solution is not oscillatory yet yielding a sharp shock profile near the boundaries . the right figure shows that the rbf - eno method yields more accurate results than the regular eno method in the smooth region . [ figure : pointwise errors by the eno ( blue ) and rbf - eno ( red ) methods on logarithmic scale . ] in this paper , we developed a non - polynomial eno method for solving hyperbolic equations . as an example of non - polynomial bases , we used rbfs . the formulation based on the non - polynomial basis yields the flexibility of improving the original eno accuracy . the key idea of the developed method lies in the adaptation of the shape parameters in the expansion with a non - polynomial basis that can make the leading error term vanish or at least become small in the local interpolation . the new non - polynomial eno method improves local accuracy and convergence if the underlying solution is smooth . for the non - smooth solution such as a shock , we adopted the monotone interpolation method so that the non - polynomial eno reconstruction is reduced into the regular eno reconstruction resulting in the suppression of the gibbs oscillations . the numerical results show that the non - polynomial eno method is superior to the regular eno method and even better than the weno - js method for .
in our future work , we will investigate the non - polynomial eno method with the nonuniform and unstructured mesh .as mentioned in introduction , the meshless feature of rbfs was combined with the weno method in where the shape parameter was globally fixed for the reconstruction .it will be interesting to investigate how the optimization of the shape parameter can be realized with the meshless properties of rbfs on the unstructured mesh ..1 in * acknowledgments : *the authors thank w .- s .don for his useful comments on the construction of the rbf - eno / weno method .the second author thanks grady wright for his useful comments on the rbf interpolation . ,essentially non - oscillatory and weighted essentially non - oscillatory schemes for hyperbolic conservation laws , advanced numerical approximation of nonlinear hyperbolic equations " ( lecture notes in mathematics 1697 ) , a. quarteroni ( ed . ) , springer - verlag , 1998 . | the essentially non - oscillatory ( eno ) method is an efficient high order numerical method for solving hyperbolic conservation laws designed to reduce the gibbs oscillations , if existent , by adaptively choosing the local stencil for the interpolation . the original eno method is constructed based on the polynomial interpolation and the overall rate of convergence provided by the method is uniquely determined by the total number of interpolation points involved for the approximation . in this paper , we propose simple non - polynomial eno and weighted eno ( weno ) finite volume methods in order to enhance the local accuracy and convergence . we first adopt the infinitely smooth radial basis functions ( rbfs ) for a non - polynomial interpolation . particularly we use the multi - quadric and gaussian rbfs . the non - polynomial interpolation such as the rbf interpolation offers the flexibility to control the local error by optimizing the free parameter . then we show that the non - polynomial interpolation can be represented as a perturbation of the polynomial interpolation . that is , it is not necessary to know the exact form of the non - polynomial basis for the interpolation . in this paper , we formulate the eno and weno methods based on the non - polynomial interpolation and derive the optimization condition of the perturbation . to guarantee the essentially non - oscillatory property , we switch the non - polynomial reconstruction to the polynomial reconstruction adaptively near the non - smooth area by using the monotone polynomial interpolation method . the numerical results show that the developed non - polynomial eno and weno methods enhance the local accuracy . * keywords * essentially non - oscillatory method , weighted essentially non - oscillatory method , radial basis function interpolation , finite volume method , hyperbolic conservation laws . |
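the role of the free shape parameter described in the conclusions above can be seen already on a single interpolation stencil . the sketch below compares multiquadric interpolation of a smooth function on a three - point stencil with the quadratic polynomial interpolant on the same stencil ; the stencil width , the test function and the swept shape - parameter values are arbitrary , and no claim is made that any of the listed values is the optimised parameter of the method — the sweep only shows that the interpolation error depends on the parameter , which is the degree of freedom the non - polynomial reconstruction exploits .

```python
import numpy as np

def mq_interp(xs, fs, eps, xe):
    """multiquadric rbf interpolant on the stencil xs, evaluated at the point xe."""
    phi = lambda r: np.sqrt(1.0 + (eps * r) ** 2)
    w = np.linalg.solve(phi(xs[:, None] - xs[None, :]), fs)
    return w @ phi(xe - xs)

f = np.sin
xs = np.array([-0.2, 0.0, 0.2])     # a three-point stencil (spacing 0.2)
xe = 0.1                            # evaluation point between the nodes
fs, exact = f(xs), f(xe)

# degree-2 polynomial interpolation on the same stencil, as the polynomial reference
poly_err = abs(np.polyval(np.polyfit(xs, fs, 2), xe) - exact)
print(f"polynomial             : {poly_err:.3e}")

# the multiquadric error varies with the shape parameter eps
for eps in [4.0, 2.0, 1.0, 0.5, 0.25]:
    err = abs(mq_interp(xs, fs, eps, xe) - exact)
    print(f"multiquadric eps={eps:<4} : {err:.3e}")
```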
let $\omega$ be a bounded and simply - connected polyhedral domain in $\mathbb{r}^3$ with boundary ${\partial}\omega$ decomposed into ${\gamma}_d$ and ${\gamma}_n$ , and let ${\bm{n}}$ be the outward unit vector normal to the boundary . denoting by ${\bm{u}}$ the electric field , we consider the following model problem , which originates from a second order hyperbolic equation by eliminating the magnetic field in maxwell 's equations : \[ \left\{ \begin{array}{rcll} {\nabla{\!\times\!}}\,( \mu^{-1} {\nabla{\!\times\!}}{\bm{u}} ) + {\beta}\,{\bm{u}} &=& {\bm{f}} , & \; \text{ in } \, \omega , \\[2 mm] {\bm{u}}{\!\times\!}{\bm{n}} &=& {\bm{g}}_{_d} , & \; \text{ on } \, {\gamma}_d , \\[2 mm] ( \mu^{-1} {\nabla{\!\times\!}}{\bm{u}} ) {\!\times\!}{\bm{n}} &=& {\bm{g}}_{_n} , & \; \text{ on } \, {\gamma}_n , \end{array} \right. \] where ${\nabla{\!\times\!}}$ is the curl operator ; ${\bm{f}}$ , ${\bm{g}}_{_d}$ , and ${\bm{g}}_{_n}$ are given vector fields which are assumed to be well - defined on $\omega$ , ${\gamma}_d$ , and ${\gamma}_n$ , respectively ; $\mu$ is the magnetic permeability ; and ${\beta}$ depends on the electrical conductivity , the dielectric constant , and the time step size . assume that the coefficients $\mu$ and ${\beta}$ are bounded below for almost all $\bm{x}\in\omega$ . the _ a posteriori _ error estimation for the conforming finite element approximation to the problem in ( [ eq : pb - ef ] ) has been studied recently by several researchers . several types of _ a posteriori _ error estimators have been introduced and analyzed . these include residual - based estimators and the corresponding convergence analysis ( explicit , and implicit ) , equilibrated estimators , and recovery - based estimators . there are four types of errors in the explicit residual - based estimator ( see ) . two of them are standard , i.e. , the element residual and the interelement face jump arising from the discrepancy introduced by integration by parts associated with the original equation in . the other two are also the element residual and the interelement face jump , but associated with the divergence of the original equation : , where $\nabla\cdot$ is the divergence operator . these two quantities measure how good the approximation is in the kernel space of the curl operator . recently , the idea of the robust recovery estimator explored in for the diffusion interface problem has been extended to the interface problem in . instead of recovering two quantities in the continuous polynomial spaces like the extension of the popular zienkiewicz - zhu ( zz ) error estimator in , two quantities related to and are recovered in the respective - and -conforming finite element spaces . the resulting estimator consists of four terms similar to the residual estimator in the pioneering work on this topic by beck , hiptmair , hoppe , and wohlmuth : two of them measure the face jumps of the tangential components and the normal component of the numerical approximations to and , respectively , and the other two are element residuals of the recovery type . all existing a posteriori error estimators for the problem assume that the right - hand side is in or divergence free . this assumption does not hold in many applications ( e.g. the implicit marching scheme mentioned in ) . moreover , two terms of the estimators are associated with the divergence of the original equation . in the proof , these two terms come into existence after performing the integration by parts for the irrotational gradient part of the error , which lies in the kernel of the curl operator . one of the key technical tools , a helmholtz decomposition , used in this proving mechanism , relies on being in , and fails if .
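for readers who want to experiment with this model problem numerically , the following is a minimal sketch of the primal weak formulation discretized with lowest - order nédélec ( " n1curl " ) elements in legacy fenics / dolfin ; the unit - cube mesh , the constant coefficients , the homogeneous dirichlet data on the whole boundary and the chosen right - hand side are our own simplifying assumptions and are not taken from the paper . note that on an n1curl space a dirichlet condition constrains the edge degrees of freedom , i.e. the tangential trace .

```python
from dolfin import *

# Unit cube with lowest-order Nedelec (edge) elements of the first kind.
mesh = UnitCubeMesh(8, 8, 8)
V = FunctionSpace(mesh, "N1curl", 1)

u = TrialFunction(V)
v = TestFunction(V)

mu = Constant(1.0)             # magnetic permeability (constant here for simplicity)
beta = Constant(1.0)           # reaction coefficient
f = Constant((0.0, 0.0, 1.0))  # illustrative right-hand side

a = (1.0 / mu) * inner(curl(u), curl(v)) * dx + beta * inner(u, v) * dx
L = inner(f, v) * dx

# Homogeneous data u x n = 0 on the whole boundary; DirichletBC on N1curl
# fixes the tangential (edge) degrees of freedom.
bc = DirichletBC(V, Constant((0.0, 0.0, 0.0)), "on_boundary")

uh = Function(V)
solve(a == L, uh, bc)
```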
in , the assumption that is weakened to being in the piecewise space with respect to the triangulation , at the same time, the divergence residual and norm jump are modified to incorporate this relaxation .another drawback of using helmholtz decomposition on the error is that it introduces the assumption of the coefficients quasi - monotonicity into the proof pipeline .an interpolant with a coefficient independent stability bound is impossible to construct in a `` checkerboard '' scenario ( see for diffusion case , and for case ) . to gain certain robustness for the error estimator in the proof, one has to assume the coefficients distribution is quasi - monotone .however , in an earlier work of chen , xu , and zou ( ) , it is shown that numerically this quasi - monotonicy assumption is more of an artifact introduced by the proof pipeline , at least for the irrotational vector fields . as a result, we conjecture that the divergence related terms should not be part of an estimator if it is appropriately constructed . in section [ sec : numex ] , some numerical justifications are presented to show the unnecessity of including the divergence related terms . the pioneering work in using the dual problems for a posteriori error estimation dates back to . in ,oden , demkowicz , rachowicz , and westermann studied the a posteriori error estimation through duality for the diffusion - reaction problem .the finite element approximation to a dual problem is used to estimate the error for the original primal problem ( diffusion - reaction ) .the result shares the same form to the prague - synge identity ( ) for diffusion - reaction problem .the method presented in this paper may be viewed as an extension of the duality method in to the interface problem . the auxiliary magnetizing field introduced in section [ sec : aux] is the dual variable resembling the flux variable in .the connection is illustrated in details in section [ sec : dual ] .later , repin ( ) proposes a functional type a posteriori error estimator of problem , which can be viewed as an extension of the general approach in .repin et al ( ) improve the estimate by assuming that the data is divergence free and the finite element approximation is in . in ,the upper bound is established through integration by parts by introducing an auxiliary variable in an integral identity for .an auxiliary variable is recovered by globally solving an finite element approximation problem and is used in the error estimator . for the global lower bound, the error equation is solved globally in an conforming finite element space .then the solution is inserted into the functional as the error estimator of which the maximizer corresponds to the solution to the error equation .the purpose of this paper is to develop a novel a posteriori error estimator for the conforming finite element approximation to the problem in ( [ eq : pb - ef - weak ] ) that overcomes the above drawbacks of the existing estimators , e.g. 
the helmholtz decomposition proof mechanism , which is restricted by the assumption that the right - hand side is in or divergence free and which brings in the divergence - related terms . specifically , the estimator studied in this paper is of the recovery type , requires the right - hand side merely to have a regularity of , and has only two terms that measure the element residual and the tangential face jump of the original equation . based on the current approximation to the primary variable ( the electric field ) , an auxiliary variable ( the magnetizing field ) is recovered by approximating a similar auxiliary problem . to this end , a multigrid smoother is used to approximate this auxiliary problem , which is independent of the primary equation and is performed in parallel with the primary problem . the cost is of the same order of complexity as computing the residual - based estimator , which is much less than solving the original problem . an alternate route is illustrated as well in section [ sec : loc ] by approximating a localized auxiliary problem . while embracing the locality rather than the parallel nature of the multigrid smoother , the recovery through approximating a localized problem requires the user to provide the element residual and the tangential face jump of the numerical magnetizing field based on the finite element solution of the primary equation . the estimator is then defined as the sum of the modified element residual and the residual of the auxiliary constitutive equation . it is proved that the estimator is equal to the true error in the energy norm globally . moreover , in contrast to the mechanism of the proof using the helmholtz decomposition mentioned previously , the decomposition is avoided by using the joint energy norm . as a result , the new estimator 's reliability does not rely on the distribution of the coefficients ( theorem [ th : rel ] ) . meanwhile , in this paper , the method and analysis extend the functional - type error estimator in to a more pragmatic context by including the mixed boundary conditions , and furthermore , the auxiliary variable is approximated by a fast multigrid smoother , or by solving a localized problem on vertex patches , to avoid solving a global finite element approximation problem . lastly , in order to compare the new estimator introduced in this paper with existing estimators , we present numerical results for intersecting interface problems . when , the mesh generated by our indicator is much more efficient than those by existing indicators ( section [ sec : numex ] ) . denote by the space of square - integrable vector fields in equipped with the standard norm , where denotes the standard inner product over an open subset ; when , the subscript is dropped for and . let , which is a hilbert space equipped with the norm . denote its subspaces by \[ {\bm{h}}_{b}({\mathbf{curl}}\,;\omega) := \{ {\bm{v}} \in {\bm{h}}({\mathbf{curl}}\,;\omega) : {\bm{v}}{\!\times\!}{\bm{n}} = {\bm{g}}_{_b} \ \text{ on } \ {\gamma}_b \} \quad \mbox{and} \quad \mathring{{\bm{h}}}_{b}({\mathbf{curl}}\,;\omega) := \{ {\bm{v}} \in {\bm{h}}_{b}({\mathbf{curl}}\,;\omega) : {\bm{g}}_{_b} = {\bf 0} \} \] for or . for any , multiplying the first equation in by a suitable test function with vanishing tangential part on , integrating over the domain , and using the integration by parts formula for -regular vector fields ( e.g.
see ) , we have & = & ( \mu^{-1}{\nabla { \!\times\!}}{\bm{u}},\,{\nabla { \!\times\!}}{\bm{v}})+ ( { \beta}\,{\bm{u}},\,{\bm{v } } ) - \int_{\gamma_{n}}{\bm{g}}_{_n } \cdot { \bm{v}}\,ds.\end{aligned}\ ] ] then the weak form associated to problem is to find such that where the bilinear and linear forms are given by respectively . here , denotes the duality pair over .denote by the `` energy '' norm induced by the bilinear form .[ th : pb - ef - weak ] assume that , , and .then the weak formulation of has a unique solution satisfying the following a priori estimate for the notations and proof , see the appendix [ appendix ] . for simplicity of the presentation , only the tetrahedral elements are considered .let be a finite element partition of the domain .denote by the diameter of the element .assume that the triangulation is regular and quasi - uniform .let where is the space of polynomials of degree less than or equal to .let and be the spaces of homogeneous polynomials of scalar functions and vector fields. denote by the first or second kind ndlec elements ( e.g. see ) for , respectively , where the local ndlec elements are given by \mbox{and}\quad & { \bm{{{\mathcal{n}}}\!{{\mathcal{d}}}}}^{k,2}(k ) = \{{\bm{p}}+ \nabla s : { \bm{p}}\in { \bm{{{\mathcal{n}}}\!{{\mathcal{d}}}}}^{k,1}(k ) , s\in { \widetilde}{p}_{k+2}(k ) \}.\end{aligned}\ ] ] for simplicity of the presentation , we assume that both boundary data and are piecewise polynomials , and the polynomial extension ( see ) of the dirichlet boundary data as the tangential trace is in .now , the conforming finite element approximation to ( [ eq : pb - ef ] ) is to find such that assume that and are the solutions of the problems in ( [ eq : pb - ef ] ) and ( [ eq : pb - ef - fem ] ) , respectively , and that , ( when the regularity assumption is not met , one can construct a curl - preserving mollification , see ) , by the interpolation result from chapter 5 and ca s lemma , one has the following a priori error estimation : where is a positive constant independent of the mesh size .introducing the magnetizing field then the first equation in ( [ eq : pb - ef ] ) becomes the boundary condition on may be rewritten as follows for any , multiplying equation ( [ eq : pb - mf - aux ] ) by , integrating over the domain , and using integration by parts and ( [ eq : pb - mf ] ) , we have & = & ( { \beta}^{-1}{\nabla { \!\times\!}}{\bm{\sigma}},\,{\nabla { \!\times\!}}{\bm{\tau}})+ ( { \nabla { \!\times\!}}{\bm{u}},\,{\bm{\tau } } ) \\ & & \ ; + \int_{\gamma_d } ( { \bm{u}}{\!\times\!}{\bm{n } } ) \cdot { \bm{\tau}}\,ds -\int_{\gamma_n } { \bm{u}}\cdot ( { \bm{\tau}}{\!\times\!}{\bm{n}})\,ds \\[1 mm ] & = & ( { \beta}^{-1}{\nabla { \!\times\!}}{\bm{\sigma}},\,{\nabla { \!\times\!}}{\bm{\tau}})+ ( \mu\,{\bm{\sigma}},\,{\bm{\tau } } ) + \int_{\gamma_{d}}{\bm{g}}_{_d } \cdot { \bm{\tau}}\,ds.\end{aligned}\ ] ] hence , the variational formulation for the magnetizing field is to find such that where the bilinear and linear forms are given by respectively .the natural boundary condition for the primary problem becomes the essential boundary condition for the auxiliary problem , while the essential boundary condition for the primary problem is now incorporated into the right - hand side and becomes the natural boundary condition .denote the `` energy '' norm induced by by [ th : pb - mf - weak ] assume that , , and .then problem _ _ has a unique solution satisfying the following a priori estimate the theorem may be proved in a similar fashion as 
theorem [ th : pb - ef - weak ] .similarly to that for the essential boundary condition , it is assumed that the polynomial extension of the neumman boundary data as the tangential trace is in as well .now , the conforming finite element approximation to ( [ eq : pb - mf - weak ] ) is to find such that assume that and are the solutions of the problems in ( [ eq : pb - mf ] ) and ( [ eq : pb - mf - fem ] ) , respectively , and that , , one has the following a priori error estimation similar to the a priori estimate shows that heuristically , for the auxiliary magnetizing field , using the same order -conforming finite element approximation spaces with the primary variable may be served as the building blocks for the a posteriori error estimation .the localization of the recovery of for this new recovery shares similar methodology with the one used in the equilibrated flux recovery ( see ) .however , due to the presence of the -term , exact equilibration is impossible due to several discrepancies : if and are in ndlec spaces of the same order ; if is used for and for , the inter - element continuity conditions come into the context in that , which has different inter - element continuity requirement than . due to these two concerns ,the local problem is approximated using a constraint -minimization .let be the correction from to the true magnetizing field : .now can be decomposed using a partition of unity : let be the linear lagrange nodal basis function associated with a vertex , which is the collection of all the vertices , denote .let the vertex patch , where is the collection of vertices of element .then the following local problem is what the localized magnetizing field correction satisfies : with the following jump condition on each interior face , and boundary face : the element residual is , and the tangential jump is . to find the correction , following piecewise polynomial spacesare defined : & \bm{{\mathcal{w}}}^k({{\mathcal{f}}}_{{\bm{z}}})= \ { { \bm{\tau}}\in { \bm{l}}^2({{\mathcal{f}}}_{{\bm{z}}}):\ , { \bm{\tau}}{\big\vert_{\raisebox{-0.5pt}{\scriptsize } } } \in { \bm{{\mathcal{r}}\!{{\mathcal{t}}}}}^k(f),\ ; \forall f\in{{\mathcal{f}}}_{{\bm{z } } } ; \\ & \hspace{2em } { \bm{\tau}}{\big\vert_{\raisebox{-0.5pt}{\scriptsize } } } \cdot ( { \bm{t}}_{ij}{\!\times\!}{\bm{n}}_{i } ) = { \bm{\tau}}{\big\vert_{\raisebox{-0.5pt}{\scriptsize } } } \cdot ( { \bm{t}}_{ij}{\!\times\!}{\bm{n}}_{j } ) , \forall f_i , f_j\in { { \mathcal{f}}}_{{\bm{z } } } , { \partial}f_i \cap { \partial}f_j = e_{ij } \ } , \\[3pt ] & \bm{{\mathcal{h}}}_{{\bm{z } } } = \{{\bm{\tau}}\in { \bm{{{\mathcal{n}}}\!{{\mathcal{d}}}}}^{k}_{-1}({\omega_{\bm{z } } } ) : \ ; { { \lbrack\hspace{-1.5pt}\lbrack}{{\bm{\tau}}{\!\times\!}{\bm{n}}_f } { \rbrack\hspace{-1.5pt}\rbrack}_{\raisebox{-2pt}{\scriptsize } } } = - { \overline}{{\bm{j}}}_{f,{\bm{z } } } \ ; \forall\ , f \in { { \mathcal{f}}}_{{\bm{z}}}\ } , \\[3pt ] \text{and } \ ; & \bm{{\mathcal{h}}}_{{{\bf 0 } } , { \bm{z } } } = \{{\bm{\tau}}\in \bm{{\mathcal{h}}}_{{\bm{z } } } : \ , { \bm{\tau}}{\!\times\!}{\bm{n}}_f{\big\vert_{\raisebox{-0.5pt}{\scriptsize } } } = { { \bf 0 } } , \ ; \forall\ ; f\ , \subset { \omega_{\bm{z}}}\}. \end{aligned}\ ] ] here is the planar raviart - thomas space on a given face , of which the degrees of freedom can be defined using conormal of an edge with respect to the face normal .for example , is the unit tangential vector of edge joining face and , then the conormal vector of with respect to face is . 
can be viewed as the trace space of the broken ndlec space . for detailplease refer to section 4 and 5 in . to approximate the local correction for magnetizing field , and projected onto proper piecewise polynomial spaces . to this end , let where is the projection onto the space , and is the projection onto the space . dropping the uncomputable terms in , and using as a constraint , the following local -minimization problem is to be approximated : the hybridized problem associated with above minimization is obtained by taking variation with respect to of the functional by the tangential face jump as a lagrange multiplier : for any , using the fact that , and on as a result , the local approximation problem is : & a_{{\beta},\mu;{{\bm{z}}}}\big({{\bm{\sigma}}^{{\delta}}_{{\bm{z}},{\raisebox{-0.5pt}{}}}},{{\bm{\tau}}}\big ) + b_{{\bm{z } } } \big({\bm{\tau } } , { \bm{\theta}}_{{\bm{z}}}\big ) = { \bigl ( { { \beta}^{-1 } { \overline}{{\bm{r}}}_{k,{\bm{z } } } } , \,{{\nabla { \!\times\!}}{\bm{\tau } } } \bigr)}_{{\omega_{\bm{z } } } } , \quad \forall\ , { \bm{\tau}}\in { \bm{{{\mathcal{n}}}\!{{\mathcal{d}}}}}^k_{-1}({\omega_{\bm{z } } } ) , \\[0.5em ] & \qquad \qquad \qquad\quad b_{{\bm{z } } } \big({{\bm{\sigma}}^{{\delta}}_{{\bm{z}},{\raisebox{-0.5pt}{ } } } } , { \bm{\gamma}}\big ) = -\sum_{f\in{{\mathcal{f}}}_{{\bm{z } } } } { \bigl ( { { \overline}{{\bm{j}}}_{f,{\bm{z}}}},\,{{\bm{\gamma } } } \bigr)}_f , \quad \forall\ , { \bm{\gamma}}\in \bm{{\mathcal{w}}}^k({{\mathcal{f}}}_{{\bm{z } } } ) , \end{aligned } \right.\ ] ] wherein the local bilinear forms are defined as follows : \text { and } \ ; b_{{\bm{z } } } ( { \bm{\tau}},{\bm{\gamma } } ) & : = \sum_{f\in{{\mathcal{f}}}_{{\bm{z } } } } { \bigl ( { { { \lbrack\hspace{-1.5pt}\lbrack}{{\bm{\tau}}{\!\times\!}{\bm{n}}_f } { \rbrack\hspace{-1.5pt}\rbrack}_{\raisebox{-2pt}{\scriptsize } } } } , \,{{\bm{\gamma } } } \bigr)}_f .\end{aligned}\ ] ] problem has a unique solution . for a finite dimensional problem ,uniqueness implies existence .it suffices to show that letting both the right hand sides be zeros results trivial solution .first by for any ( direct implication of proposition 4.3 and theorem 4.4 in ) , setting in the second equation of immediately implies that . as a result , .now let in the first equation of , since induces a norm in , .for , it suffices to show that on each if using theorem 4.4 in , if is non - trivial and satisfies above equation , there always exists a such that . as a result , , which is a contradiction .thus , the local problem is uniquely solvable . with the local correction to the magnetizing field , for all , computed above ,let then the recovered magnetizing field is this section , we study the following a posteriori error estimator : where the local indicator is defined by it is easy to see that the and are the finite element approximations in problems and respectively . with the locally recovered , the local error indicator and the global error estimator are defined in the same way as and : and in practice , does not have to be the finite element solution of a global problem . 
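to make the structure of the two - term indicator concrete , here is a small numpy sketch that evaluates , for a single element , an element residual term and a constitutive - residual term from quadrature - point values of the data and of the two finite element functions ; the argument names , the quadrature convention ( values and weights supplied by the caller ) and the particular weighting are illustrative assumptions , one natural choice consistent with the energy norm rather than the paper 's exact ( elided ) definition .

```python
import numpy as np

def local_indicator(w, f, u, curl_u, sigma, curl_sigma, beta, mu):
    """Squared local indicator on one element from quadrature data.

    w                   : quadrature weights including the Jacobian, shape (q,)
    f, u, sigma         : vector fields at the quadrature points, shape (q, 3)
    curl_u, curl_sigma  : curls of u and sigma at the quadrature points, shape (q, 3)
    beta, mu            : coefficient values on the element (piecewise constants here)

    Combines an element residual of the primal equation with the residual of the
    constitutive relation sigma = mu^{-1} curl u.
    """
    res_elem = f - curl_sigma - beta * u                    # f - curl(sigma) - beta*u
    res_cons = np.sqrt(mu) * sigma - curl_u / np.sqrt(mu)   # mu^{1/2} sigma - mu^{-1/2} curl u
    eta_sq = (w * (res_elem ** 2).sum(axis=1)).sum() / beta \
           + (w * (res_cons ** 2).sum(axis=1)).sum()
    return eta_sq
```

the global estimator would then be the square root of the sum of these squared local contributions over all elements .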
in the numerical computation ,the hiptmair - xu multigrid preconditioner in is used for discrete problem with two multigrid v - cycles for each component of the vector laplacian , and two multigrid v - cycles for the kernel part of the curl operator .the used to evaluate the estimator is the pcg iterate .the computational cost is the same order with computing the explicit residual based estimator in .generally speaking , to approximate the auxiliary problem , the same black - box solver for the original problem can be applied requiring minimum modifications .for example , if the boomeramg in _ hypre _( ) is used for the discretizations of the primary problem , then the user has to provide exactly the same discrete gradient matrix and vertex coordinates of the mesh , and in constructing the the hx preconditioner , the assembling routines for the vector laplacian and scalar laplacian matrices can be called twice with only the coefficients input switched .[ th : rel ] locally , the indicator and both have the following efficiency bound for all .the estimator and satisfy the following global upper bound denote the true errors in the electric and magnetizing fields by respectively .it follows from ( [ eq : pb - mf ] ) , ( [ eq : pb - mf - aux ] ) , and the triangle inequality that \nonumber & \leq & \left({\left\|{\mu^{1/2 } { \bm{e}}}\right\|}_{k } ^2 + { \left\|{\mu^{-1/2}{\nabla { \!\times\!}}{\bm{e}}}\right\|}_{k } ^2 + { \left\|{{\beta}^{-1/2}{\nabla { \!\times\!}}{\bm{e}}}\right\|}^2_k + { \left\|{{\beta}^{1/2}{\bm{e}}}\right\|}_{k}^2\right ) \\[2mm]\nonumber & = & \left({\{\mathopen{|\mkern-2.5mu|\mkern-2.5mu| } s \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { \mathopen{|\mkern-2.5mu|\mkern-2.5mu| } } \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { { \bm{e}}}_{\mu,{\beta},k}^2 + { \{\mathopen{|\mkern-2.5mu|\mkern-2.5mu| } s \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { \mathopen{|\mkern-2.5mu|\mkern-2.5mu| } } \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { { \bm{e}}}_{\beta,\mu , k}^2\right),\end{aligned}\ ] ] which implies the validity of for . for , the exact same argument follows except by switching by locally recovered . to prove the global identity in , summing over all gives & = & { \{\mathopen{|\mkern-2.5mu|\mkern-2.5mu| } s \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { \mathopen{|\mkern-2.5mu|\mkern-2.5mu| } } \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { { \bm{e}}}_{\mu,{\beta}}^2 + { \{\mathopen{|\mkern-2.5mu|\mkern-2.5mu| } s \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { \mathopen{|\mkern-2.5mu|\mkern-2.5mu| } } \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { { \bm{e}}}_{\beta,\mu}^2 -2({\bm{e}},\,{\nabla { \!\times\!}}{\bm{e}})+ 2({\nabla { \!\times\!}}{\bm{e}},\,{\bm{e}}).\end{aligned}\ ] ] now , follows from the fact that lastly , the global upper bound for the locally recovered follows from the fact that and are the solutions to the following global problem : as a result , which is the global minimum achieved in the finite element spaces .this completes the proof of the theorem .in theorem [ th : rel ] it is assumed that the boundary data are admissible so that they can be represented as tangential traces of the finite element space .if this assumption is not met , it can be still assumed that divergence - free extension of its tangential trace to each boundary tetrahedron on is at least -regular ( ) , and as well ( ) , so that the conventional edge interpolant is well - defined ( e.g. see chapter 5 ) . 
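the numerical comparison in the following section is driven by a standard adaptive loop ( solve , estimate , mark , refine ) ; the sketch below shows the marking step with the usual bulk ( dörfler ) criterion . the solver , estimator and refinement routines are placeholders to be supplied by whatever library is in use ( the paper itself relies on ifem and mfem / hypre ) , and the bulk parameter value is our own choice .

```python
import numpy as np

def dorfler_marking(eta, theta=0.5):
    """Return indices of elements to refine by the bulk (Dörfler) criterion.

    eta   : array of local indicators (one per element)
    theta : bulk parameter in (0, 1]; the marked elements carry a fraction
            theta of the total squared indicator."""
    order = np.argsort(eta ** 2)[::-1]          # largest contributions first
    cumulative = np.cumsum(eta[order] ** 2)
    m = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:m]

def adaptive_loop(solve, estimate, refine, mesh, max_iter=20, tol=1e-3):
    """Generic solve-estimate-mark-refine loop; the three callables are
    placeholders for the user's finite element library."""
    for _ in range(max_iter):
        uh = solve(mesh)
        eta = estimate(mesh, uh)                # local indicators, one per element
        if np.sqrt((eta ** 2).sum()) < tol:
            break
        marked = dorfler_marking(eta)
        mesh = refine(mesh, marked)
    return mesh, uh
```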
when the same assumption is applied to and ,the reliability bound derived by still holds ( for notations please refer to appendix [ appendix ] ) : using the fact that and are approximated by the conventional edge interpolants on and on respectively yields : by the interpolation estimates for boundary elements together with the weighted trace inequalities from appendix [ appendix ] , the reliability constant is not harmed if the interface singularity does not touch the boundary .a posteriori error estimation by the duality method for the diffusion - reaction problem was studied by oden , demkowicz , rachowicz , and westermann in . in this section ,we describe the duality method for the problem and its relation with the estimator defined in ( [ eq : eta ] ) . to this end , define the energy and complimentary functionals by \mbox{and}\quad { { \mathcal{j}}}^*({\bm{\tau } } ) & = & -\frac{1}{2}(\mu\,{\bm{\tau}},\,{\bm{\tau } } ) - \frac{1}{2 } { \bigl ( { { \beta}^{-1}({\bm{f}}- { \nabla { \!\times\!}}{\bm{\tau}})},\,{{\bm{f}}-{\nabla { \!\times\!}}{\bm{\tau } } } \bigr ) } -{{\left\langle { { \bm{g}}_{_d}},{{\bm{\tau } } } \right\rangle}}_{{\gamma}_d},\end{aligned}\ ] ] respectively. then problems ( [ eq : pb - ef - weak ] ) and ( [ eq : pb - mf - weak ] ) are equivalent to the following minimization and maximization problems : respectively . by the duality theory for a lower semi - continuous convex functional ( see e.g. ) , we have a simple calculation gives that the true errors of the finite element approximations in the `` energy '' norm can be represented by the difference between the functional values as follows : hence , the `` energy '' error in the finite element approximation is bounded above by the estimator defined in ( [ eq : eta ] ) ( and the locally - recovered as well ) : & \leq & 2\big ( { { \mathcal{j}}}({\bm{u}}_{_{{\mathcal{t } } } } ) - { { \mathcal{j}}}^*({\bm{\sigma}}_{_{{\mathcal{t } } } } ) \big)= \eta^2,\end{aligned}\ ] ] where the last equality is obtained by evaluating through integration by parts .note that the above calculation indicates which leads us back to the identity on the global reliability in .in this section , we present numerical results for interface problems , i.e. , the problem parameters and in ( [ eq : pb - ef ] ) are piecewise constants with respect to a partition of the domain .assume that interfaces do not cut through any element .the is solved in , and the is recovered in as well .the numerical experiments are prepared using ` delaunaytriangulation ` in matlab for generating meshes , l. chen s ifem ( ) for the adaptively refining procedure , the ` matlab2hypre ` interface in blopex ( ) for converting sparse matrices , and mfem ( ) to set up the serial version of auxiliary - space maxwell solver ( ams ) in _ hypre _ ( ) as preconditioners .we compare numerical results generated by adaptive finite element method using following error estimators : * the new indicator defined in , and its locally - recovered sibling defined in . 
* the residual - based indicator introduced in with the appropriate weights for piecewise constant coefficients defined in : & + \sum_{f\in { { \mathcal{f}}}_h(k ) } \frac{h_f}{2 } \left ( { \beta}_f^{-1 } { \left\|{{{\lbrack\hspace{-1.5pt}\lbrack}{{\beta}{\bm{u}}_{_{{\mathcal{t}}}}\cdot { \bm{n}}_f } { \rbrack\hspace{-1.5pt}\rbrack}_{\raisebox{-2pt}{\scriptsize } } } } \right\|}_{l^2(f)}^2 + \mu_f{\left\|{{{\lbrack\hspace{-1.5pt}\lbrack}{(\mu^{-1 } { \nabla { \!\times\!}}{\bm{u}}_{_{{\mathcal{t}}}}){\!\times\!}{\bm{n } } } { \rbrack\hspace{-1.5pt}\rbrack}_{\raisebox{-2pt}{\scriptsize } } } } \right\|}_{{\bm{l}}^2(f)}^2 \right ) , \end{aligned}\ ] ] * the recovery - based indicator presented in : & \qquad + { \left\|{\mu^{1/2}{\bm{\sigma}}_{_{{\mathcal{t}}}}-\mu^{-1/2 } { \nabla { \!\times\!}}{\bm{u}}_{_{{\mathcal{t } } } } } \right\|}_{{\bm{l}}^2(k)}^2 , \end{aligned}\ ] ] where and are the recoveries of and , respectively . in our computation , the energy norms are used for the estimators and and the estimator , respectively . the respective relative errors and effectivity indices are computed at each iteration by for the estimators and and by the estimator and , where .in all the experiements , the lowest order ndlec element space is used , and , hence , the optimal rate of convergence for the adaptive algorithm is .* example 1 * : this is an example similar to that in with a few additions and tweaks , in which the kellogg intersecting interface problem is adapted to the -problem . the computational domain is a slice along -direction : with .let be a piecewise constant given by the exact solution of ( [ eq : pb - ef ] ) is given in cylindrical coordinates : where is a continuous function defined by here we set parameters to be the initial mesh is depicted in figure [ fig : ex1-init ] which is aligned with four interfaces . 0.5 cm it is easy to see that the exact solution of the auxiliary problem in ( [ eq : pb - mf ] ) for this example is .hence , the true error for the finite element approximation to ( [ eq : pb - mf ] ) is simply the energy norm of the finite element solution defined in ( [ eq : pb - mf - weak ] ) in the first experiment , we choose the coefficients and .this choice enables that , i.e. , , and that satisfies the -weighted normal continuity : for any surface in the domain .this is the prerequisite for establishing efficiency and reliability bounds in and and the base for recovering in in .the quasi - monotonicity assumption is not met in this situation ( for the analysis of the quasi - monotonicity affects the robustness of the estimator for problems , please refer to ) .the meshes generated by , , and are almost the same ( see figure [ fig : ex1-hdiv - mesh ] ) . in terms of the convergence , we observe that the error estimator exhibits asymptotical exactness .this is impossible for the error estimators in and because of the presence of the element residuals .table 1 shows that the number of the dof for the is about less than those of the other two estimators while achieving a better accuracy . as the reliability of the estimator does not depend on the quasi - monotonicity of the coefficient , the rate of the convergence is not hampered by checkerboard pattern of the . .estimators comparison , example 1 , [ cols="^,^,^,^,^,^,^",options="header " , ] [ table : ex3 - 1 ] in the second experiment , the is chosen to be : we test the case where .similar to example 1 , the necessary tangential jump conditions across the interfaces for the primary problem are satisfied . 
yetthe choice of implies that the right hand side .using the residual - based or recovery - based estimator will again lead to unnecessary over - refinement along the interfaces ( see figure [ fig : ex3-mesh ] ) , and the order of convergence is sub - optimal than the optimal order for linear elements ( see table [ table : ex3 - 1 ] and figure [ fig : ex3-conv ] ) . 0.5 cm 0.5 cm the new estimator in this paper shows convergence in the optimal order no matter how we set up the jump of the coefficients .the conclusion of comparison with the other two estimators remains almost the same with example 1 . in this example , the differences are more drastic : the degrees of freedom for the new estimator to get roughly the same level of approximation with the other two .in this appendix , an a priori estimate for the mixed boundary value problem with weights is studied following the arguments and notations mainly from . in our study , it is found that , due to the duality pairing on the neumann boundary and the nature of the trace space of , a higher regularity is needed for the neumann boundary data than those for elliptic mixed boundary value problem . first we define the tangential trace operator and tangential component projection operator , and their range acting on the .secondly we construct a weighted extension of the dirichlet boundary data to the interior of the domain .lastly the a priori estimate for the solution of problem is established after a trace inequality is set up for the piecewise smooth vector field . \bm{0 } & \mbox{on } \ , { \partial}{\omega}\backslash { \overline}{{\gamma}}_b , \end{array}\right .\mbox{and}\,\,\ , \pi_{\top , b } : { \bm{v}}\mapsto \left\{\begin{array}{ll } { \bm{n}}{\!\times\!}({\bm{v}}{\!\times\!}{\bm{n } } ) & \mbox{on } \,{\gamma}_b , \\[2 mm ] \bm{0 } & \mbox{on } \,{\partial}{\omega}\backslash { \overline}{{\gamma}}_b , \end{array}\right . \ ] ] define the following spaces as the trace spaces of : for the -regular vector fields , define the trace spaces and as : it is proved in that the tangential trace space and the tangential component space can be characterized by the supscripted spaces and are defined as the dual spaces of and . [assumption : bd ] let the dirichlet or neumann boundary ( or ) be decomposed into simply - connected components : . for any , there exists a single , such that .assumption [ assumption : bd ] is to say , each connected component on the dirichlet or neumann boundary only serves as the boundary of exactly one subdomain .assumption [ assumption : bd ] is here solely for the a priori error estimate .the robustness of the estimator in section 5 does not rely on this assumption if the boundary data are piecewise polynomials .due to assumption [ assumption : bd ] , the tangential trace and tangential component of a vector field is the same space as those of a vector field on or respectively . 
with slightly abuse of notation , define now we define the weighted -norm for the value of any on boundary as : now thanks to the embedding results from , is equivalent to the unweighted which can be defined as : the fact that implies the following problem is well - posed : & { \mathfrak{a}}_{{\beta},\mu}({\bm{w}},{\bm{v } } ) = { { \left\langle { { \bm{g}}_d},{{\bm{v } } } \right\rangle } } , \quad \forall\ , { \bm{v}}\in { { p\!{\bm{h}}}^{1}({\omega},{\mathscr{p}})}\cap { \bm{x}}_0({\omega},{\alpha},{\gamma}_n ) , \end{aligned } \right.\ ] ] where the bilinear form is given as on this weighted divergence free subspace : with slightly abuse of notation , the zero extension of to the neumann boundary is denoted as itself .now for the trial function space and the test function space in problem are the same , letting leads to together with their tangential traces vanish on the neumann boundary , this implies the extension is now letting . to prove the estimate, we first notice that the problem is a consistent variational formulation for the following pde : therefore , the energy norm of is for the second equality in the lemma , it is straightforward to verify that for any , with is from the above construction , the following identity holds the last equality follows from the fact that on and on . in this section we want to establish a trace inequality for the tangential component space of . for any , consider the tangential component space defined in that contains all the tangential components of on the neumann boundary and zero on the dirichlet boundary . first we notice that which is the dual space of . for any , there exists such that and by the integration by parts formula from and cauchy - schwarz inequality , we have hence by definition the lemma follows .now , to show the validity of the theorem , it suffices to prove that problem has a unique solution satisfying the following a priori estimate to this end , for any , we have from trace lemma [ lemma : trace ] which , together with the cauchy - schwarz inequality , implies & \leq & \left({\left\|{{\beta}^{-1/2}{\bm{f}}}\right\|}+ { \left\|{{\bm{g}}_{_n}}\right\|}_{1/2,{\beta},\mu,{\gamma}_n}\right ) { \{\mathopen{|\mkern-2.5mu|\mkern-2.5mu| } s \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { \mathopen{|\mkern-2.5mu|\mkern-2.5mu| } } \mathclose{|\mkern-2.5mu|\mkern-2.5mu| } } { { \bm{v}}}_{\mu,\beta } .\end{aligned}\ ] ] by the lax - milgram lemma , has a unique solution . taking in, we have dividing on the both sides of the above inequality yields .this completes the proof of the theorem . | in this paper , we introduce a novel a posteriori error estimator for the conforming finite element approximation to the problem with inhomogeneous media and with the right - hand side only in . the estimator is of the recovery type . independent with the current approximation to the primary variable ( the electric field ) , an auxiliary variable ( the magnetizing field ) is recovered in parallel by solving a similar problem . an alternate way of recovery is presented as well by localizing the error flux . the estimator is then defined as the sum of the modified element residual and the residual of the constitutive equation defining the auxiliary variable . it is proved that the estimator is approximately equal to the true error in the energy norm without the quasi - monotonicity assumption . finally , we present numerical results for several interface problems . 
orthogonal transforms such as the discrete wavelet transform are important tools in statistical signal processing and analysis . especially , wavelet denoising is a popular application of the discrete wavelet transform . in wavelet denoising , a noisy signal is transformed into the wavelet domain , in which wavelet coefficients are obtained . by applying a thresholding method , noise - related parts of the coefficients are removed in a sense ; e.g. some of the coefficients are set to zero . the inverse wavelet transform of the modified coefficients yields a denoised signal . the most popular and simple thresholding methods are hard - and soft - thresholding in . both thresholding methods have a parameter . in the hard - thresholding method , the parameter works purely as a threshold level ; i.e. coefficients whose absolute values are less than the parameter value are removed and the un - removed coefficients are left unchanged . on the other hand , in soft - thresholding , the parameter works as a threshold level as in hard - thresholding and simultaneously as an amount of shrinkage for the un - removed components . coefficients whose absolute values are less than the parameter value are removed and the un - removed coefficients are shrunk toward zero by the parameter . for a better denoising performance , we need to determine an optimal value of the parameter . for example , in hard - thresholding , if the parameter value is too large then most of the coefficients are removed even when they are significant . this results in an excess smoothing that yields a large bias between the estimated output and the target function output . on the other hand , if the parameter value is too small then most of the coefficients are un - removed even when they are not significant . this results in a large variance of the output estimate and is thus useless for denoising . the problem of choosing an optimal parameter value is often referred to as a model selection problem . there are several model selection methods under thresholding . has proposed universal hard and soft - thresholding , in which a theoretically supported constant value is employed as the parameter value . also , has derived a criterion for determining an optimal parameter value of soft - thresholding by applying stein 's lemma . the soft - thresholding method with this criterion is called sure ( stein 's unbiased risk estimator ) shrink in . unfortunately , there is no such theoretically supported criterion for hard - thresholding , while modified cross validation approaches have been proposed . we focus on a soft - thresholding method in this paper . as previously mentioned , soft - thresholding is a combination of hard - thresholding and shrinkage in which both the threshold level and the amount of shrinkage are simultaneously controlled by a single parameter . the parameter is a threshold level for removing unnecessary components and is also the amount of shift by which the estimators of the coefficients of un - removed components are shrunk toward zero . if the parameter value is large then the threshold level is large . therefore , the number of un - removed components is small . however , at the same time , the amount of shrinkage is automatically large . this can be an excess shrinkage amount , which may yield a large bias of the output estimate in representing a target function . this may cause a high prediction error at a relatively small model even when it can represent a target function ; i.e.
even when it can obtain a sparse representation . therefore , the number of un - removed components in soft - thresholding tends to be large if we choose the parameter value based on a substitute for the prediction error such as sure or cross - validation error . this is an inevitable problem of soft - thresholding , which is brought about by the introduction of a single parameter for controlling both the threshold level and the amount of shrinkage simultaneously . note that , in the implementation of thresholding methods for wavelet denoising in , thresholding is recommended to be applied only to the detail coefficients . this heuristic may actually be valid to avoid the problem mentioned here . on the other hand , in machine learning and statistics , there are several model selection methods using regularization , in which coefficient estimators are obtained by minimizing a regularized cost that consists of an error term plus a regularization term . a regularization method has a parameter that multiplies the regularizer in the regularization term and determines the balance between error and regularization . lasso ( least absolute shrinkage and selection operator ) is a very popular regularization method for variable selection . it employs the sum of the absolute values of the coefficients as a regularizer ; i.e. the $\ell_1$ norm of the coefficient vector . lasso is known to be useful for obtaining a sparse representation of a target function ; i.e. the number of components for representing a target function is very small . in lasso , extra components are automatically removed by setting their coefficients to zero . this property is clearly understood when it is applied to orthogonal regression problems . in this case , lasso reduces to a soft - thresholding method in which the parameter of soft - thresholding is the regularization parameter divided by 2 . hence , the sparseness obtained by lasso comes from a soft - thresholding property , and , thus , lasso encounters the above - mentioned problem of soft - thresholding . this dilemma between sparsity and prediction in lasso has already been discussed in and . has proposed the scad ( smoothly clipped absolute deviation ) penalty , which is a nonlinear modification of the $\ell_1$ penalty . has proposed the adaptive lasso , which employs weighted $\ell_1$ penalties . the penalty term is modified in different ways ( by different functions ) in scad and the adaptive lasso , while shrinkage is suppressed for large values of the estimators in both methods . this may reduce the excess shrinkage at a relatively small model . especially , in the case of orthogonal regression , the weights of the adaptive lasso are effective for directly and adaptively reducing the shrinkage amount that is represented as a shift in soft - thresholding . in these methods , cross validation is used as a model selection method for choosing parameter values such as a regularization parameter . unfortunately , usual cross validation cannot be used in orthogonal regression unless it is heuristically modified as in . in this paper , we introduce a scaling of soft - thresholding estimators ; i.e. a soft - thresholding estimator is multiplied by a scaling parameter . unlike the adaptive lasso , the introduction of scaling is intended to control the threshold level and the amount of shrinkage independently . it is thus a direct solution to the problem of the parametrization of soft - thresholding . if the scaling parameter value is less than one then it works as a shrinkage of the soft - thresholding estimator ( see the sketch below ) .
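a minimal numpy sketch of the three estimators discussed above — hard thresholding , soft thresholding and a scaled soft - thresholding estimator applied componentwise to a coefficient vector — is given here ; the function names and example values are our own , and the constant scaling factor is only a stand - in for the component - wise , data - dependent adaptive scaling developed later in the paper .

```python
import numpy as np

def hard_threshold(c, lam):
    """Keep coefficients with |c| > lam, set the rest to zero."""
    return np.where(np.abs(c) > lam, c, 0.0)

def soft_threshold(c, lam):
    """Remove coefficients with |c| <= lam and shrink the survivors toward zero by lam."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def scaled_soft_threshold(c, lam, alpha):
    """Soft-thresholding followed by multiplication with a scaling factor alpha.

    alpha < 1 shrinks the estimate further; alpha > 1 expands it and can
    compensate the shift lam applied to the surviving coefficients."""
    return alpha * soft_threshold(c, lam)

if __name__ == "__main__":
    c = np.array([3.0, -0.2, 1.5, 0.05, -2.4])
    lam = 0.5
    print(hard_threshold(c, lam))
    print(soft_threshold(c, lam))
    print(scaled_soft_threshold(c, lam, 1.2))
```

with alpha fixed below one , this scaled estimator coincides with the pure shrinkage just described .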
for an orthogonal regression problem, this is equivalent to elastic net in machine learning .however , the scaling parameter can be larger than one by which the above mentioned excess shrinkage in soft - thresholding is expected to be relaxed ; i.e. scaling expands a shrinkage estimator obtained by soft - thresholding . especially in this paper , we propose a component - wise and data - dependent scaling method ; i.e. scaling parameter value can be different for each coefficient and is calculated from data .we refer the proposed scaling as adaptive scaling . in this paper, we derive a risk under adaptive scaling and construct a model selection criterion as an unbiased risk estimate .therefore , our work establishes a denoising method in which a drawback of a naive soft - thresholding is improved by the introduction of adaptive scaling and an optimal model is automatically selected according to a derived criterion under the adaptive scaling .in section 2 , we state a setting of orthogonal non - parametric regression that includes a problem of wavelet denoising . in this section, we also give a naive soft - thresholding method and several related methods . in this paper ,especially , we employ a soft - thresholding method based on lars ( least angle regression) in these methods . in lars - based soft - thresholding , a model selection problem reduces to the determination of the number of un - removed components . in section 3, we define an adaptive scaling and derive a risk under lars - based soft - thresholding with the adaptive scaling .we then give a model selection criterion as an unbiased estimate of the risk .we here also consider the properties of risk curve and reveals the model selection property .the proofs of theorems in this section are included in appendix with some lemmas . in section 4 , the proposed adaptive scaling methodis examined for toy artificial problems including applications to wavelet denoising .section 5 is devoted to conclusions and future works .let and be input variables and an output variable , for which we have i.i.d .samples : , where .we assume that , , where are i.i.d additive noise sequence according to ; i.e. normal distribution with mean and variance . is a target function .we assume that are fixed below .we define , and , where denotes a matrix transpose .we then have and =\h ] be a signal . samples of is denoted by , , .we define .we assume that for a natural number .let and be approximation and detail coefficients at a level in discrete wavelet transform , where .we define in which we set for . by setting ,the decomposition algorithm with pre - determined wavelets calculates from by decreasing , where is a fixed level determined by user .this procedure can be written by where is an orthonormal matrix that is determined by coefficients of scaling and wavelet function ; e.g. see . on the other hand ,the reconstruction algorithm calculates from by increasing .this can be written by since is an orthonormal matrix .let be an operator on into such as a thresholding operator . in wavelet denoising , is processed by using and obtain .we then obtain a denoised signal by .note that , in applications , a simple and fast decomposition / reconstruction algorithm is used instead of the above matrix calculation ; e.g. see .we here compare the prediction accuracy and sparseness of lst - as to those of lst , lst - ssp and also universal soft - thresholding ( ust ) in .note that sure shrink of is almost equivalent to lst here . 
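as a concrete illustration of the decomposition – thresholding – reconstruction pipeline described above , the following sketch uses the pywavelets package ; the universal threshold with an mad noise estimate is the classical visushrink recipe rather than the lst / lst - as procedures of this paper , and the wavelet choice and decomposition level are assumptions made for the example . note that this classical recipe thresholds only the detail coefficients , whereas the comparison below applies thresholding to all coefficients .

```python
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="db8", level=5, mode="soft"):
    """Classical wavelet denoising: decompose, threshold, reconstruct.

    Uses the universal threshold sigma * sqrt(2 log n) with sigma estimated by
    the median absolute deviation of the finest-scale detail coefficients."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    detail_finest = coeffs[-1]
    sigma = np.median(np.abs(detail_finest)) / 0.6745   # MAD estimate of the noise sd
    lam = sigma * np.sqrt(2.0 * np.log(len(y)))         # universal threshold
    # threshold the detail coefficients, keep the approximation untouched
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```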
in an application of ust , a threshold level on the absolute values of the coefficients at the level is given by where is an estimate of the noise variance . in wavelet denoising , the median absolute deviation ( mad ) is a standard robust estimate of the noise variance . it is given by where are the smallest - scale wavelet coefficients that are heuristically known to be noise - dominated components . for lst , lst - ssp and lst - as , we also employ this estimator in a model selection criterion that is an unbiased risk estimate . we choose `` heavisine '' and `` blocks '' given in as test signals . the former is almost smooth and the latter has many discontinuous points . the additive noise has a normal distribution with mean and variance . as in , the signals are rescaled so that the signal - to - noise ratio is . the number of samples is . we set . in , in practical applications , a heuristic method is employed which applies soft - thresholding only to detail coefficients at a determined level . we do not follow this heuristic and apply soft - thresholding to all coefficients in the orthogonal transformation for a fair comparison . this is because the performance largely depends on the choice of the level at which thresholding is applied , as in , and there is no systematic choice of such a level . we employ the orthogonal daubechies wavelet with wavelet / scaling coefficients . for given samples , we apply lst , lst - ssp , lst - as and ust , in which the maximum number of un - removed components is set to ; i.e. the maximum value of to be examined . we then calculate the mean squared error between the true signal outputs and the estimated outputs on the sampling points as an approximation of the risk . for lst , lst - ssp and lst - as , the mean squared error and the risk estimate are obtained at each . for ust , the number of un - removed components and the risk value at a selected size are obtained . we repeat this procedure times . we show averages of the ( approximated ) risk and the risk estimate of lst , lst - ssp and lst - as in figure [ fig : wl - risk - curve - heavisine ] for `` heavisine '' and figure [ fig : wl - risk - curve - blocks ] for `` blocks '' respectively . we also show box plots of the risk values at the selected number of components and those of the number of un - removed components in figure [ fig : wl - box - plot - heavisine ] for `` heavisine '' and figure [ fig : wl - box - plot - blocks ] for `` blocks '' respectively . by figure [ fig : wl - risk - curve - heavisine ] ( b ) and figure [ fig : wl - risk - curve - blocks ] ( b ) , the risk estimate approximates the risk well for both signals even when the noise variance is estimated by mad . by figure [ fig : wl - risk - curve - heavisine ] and figure [ fig : wl - risk - curve - blocks ] , we can expect that a model estimated by lst - as shows a low risk and high sparsity compared to lst and lst - ssp ; i.e. this result leads to the same conclusions as in the previous numerical example . by comparing figure [ fig : wl - risk - curve - heavisine ] to figure [ fig : wl - risk - curve - blocks ] , the optimal number of components for `` blocks '' is larger than for `` heavisine '' , which is due to the degree of smoothness of the signals .
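for completeness , the following short sketch generates a noisy version of a heavisine - type test signal in the spirit of the experimental setup described above ; the particular formula ( a commonly used form of donoho and johnstone 's heavisine ) , the target signal - to - noise ratio of 7 , and the sample size 1024 are assumptions made here for illustration and may differ in detail from the paper 's setup . the resulting samples could be fed , for instance , to the wavelet_denoise sketch given earlier .

```python
import numpy as np

def heavisine(t):
    """A common form of the Donoho-Johnstone 'heavisine' test signal."""
    return 4.0 * np.sin(4.0 * np.pi * t) - np.sign(t - 0.3) - np.sign(0.72 - t)

def noisy_signal(n=1024, snr=7.0, seed=0):
    """Sample the signal on a uniform grid and add unit-variance Gaussian noise.

    The signal is rescaled so that sd(signal) / sd(noise) = snr."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / n
    f = heavisine(t)
    f = f * (snr / f.std())
    y = f + rng.standard_normal(n)
    return t, f, y

if __name__ == "__main__":
    t, f, y = noisy_signal()
    print("sd(signal) =", round(f.std(), 3), " sd(noise) = 1.0 (by construction)")
```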
by figure [ fig : wl - box - plot - heavisine ] and figure [ fig : wl - box - plot - blocks ] , for both signals , lst - as outperforms the other methods in terms of prediction accuracy and sparsity , and , in particular , it shows a nice sparseness property . note that the worse results of lst and ust may be improved by applying the heuristic of thresholding only the detail coefficients at a determined level , although there is no systematic choice of the appropriate level . [ figure captions : ( a ) averaged risk curves of lst , lst - ssp and lst - as ; ( b ) averaged risk and risk estimate of lst - as . ( a ) risk value at the selected number of components ; ( b ) the number of un - removed components . the same pairs of panels are shown for `` heavisine '' and for `` blocks '' . ] soft - thresholding is a key modeling tool in statistical signal processing such as wavelet denoising . it has a parameter that simultaneously controls the threshold level and the amount of shrinkage . this parametrization can suffer from an excess shrinkage of the un - removed valid components in a sparse representation ; i.e. there is a dilemma between prediction accuracy and sparsity . in this paper , to overcome this problem , we introduced a component - wise and data - dependent scaling method for soft - thresholding estimators in a context of non - parametric orthogonal regression including the discrete wavelet transform . we refer to this method as an adaptive scaling method . here , we employed a lars - based soft - thresholding method ; i.e. a soft - thresholding method that is implemented by lars under an orthogonality condition . in lars - based soft - thresholding , a parameter value is selected in a data - dependent manner , by which the model selection problem reduces to the determination of the number of un - removed components . we first derived the risk of the lars - based soft - thresholding estimate with our adaptive scaling . for determining an optimal number of un - removed components , we then gave a model selection criterion as an unbiased estimate of the risk . we also analyzed some properties of the risk curve and found that the model selection criterion can select a model with low risk and high sparsity compared to a naive soft - thresholding . this was verified by a simple numerical experiment and an application to wavelet denoising . as future work , we need more application results . in doing this , an estimate of the noise variance should be established in general applications , while mad was found to be a good choice for the wavelet denoising application . although we gave the scaling values in a top - down manner in this paper , we may need to test other forms of adaptive scaling values ; e.g. scaling values that are estimates of optimal values in some sense . moreover , the development of adaptive scaling for the non - orthogonal case may be expected for more general applications . abramovich , f. , benjamini , y. , 1996 . adaptive thresholding of wavelet coefficients . computational statistics & data analysis 22 , 351 - 361 . burrus , c.s . , gopinath , r.a . , guo , h. , 1998 . introduction to wavelets and wavelet transform . prentice hall . donoho , d.l . , johnstone , i.m . , 1994 . ideal spatial adaptation via wavelet shrinkage . biometrika 81 , 425 - 455 . donoho , d.l . , johnstone , i.m . , 1995 . adapting to unknown smoothness via wavelet shrinkage . 90 , 1200 - 1224 . efron , b. , hastie , t.
, johnstone , i. , tibshirani , r. , 2004 .least angle regression .32 , 407 - 499 .fan , j. and li , r. , 2001 .variable selection via nonconcave penalized likelihood and its oracle properties .96 , 1348 - 1360 .hagiwara , k. , 2006 .on the expected prediction error of orthogonal regression with variable components .ieice trans .fundamentals e89-a , 3699 - 3709 .hagiwara , k. , 2014 .least angle regression in orthogonal case , in : proceedings of iconip 2014 , part ii , lncs 8835 , springer , 540 - 547 .hagiwara , k. , 2015 . on scaling of soft - thresholding estimator ,submitted to neurocomputing .hurvich c.m . andtsai c. 1998 .a crossvalidatory aic for hard wavelet thresholding in spatially adaptive function estimation .biometrika 85 , 701 - 710 .knight , k. , fu , w. , 2000 .asymptotics for lasso - type estimators .28 , 1356 - 1378 .leadbetter , m.r . ,lindgren , g. , rootzn , h. , 1983 .extremes , and related properties of random sequences and processes .springer - verlag .nason , g.p . , 1996 .wavelet shrinkage using cross - validation .j. r. statist .b 58 , 463 - 79 .resnick , s.i . , 1987 .extreme values , regular variation , and point processes .springer - verlag .stein , c. , 1981 .estimation of the mean of a multivariate normal distribution .ann . stat . 9 , 1135 - 1151 .tibshirani , r. , 1996 .regression shrinkage and selection via the lasso .j. r. stat .. methodol .58 , 267 - 288 .zhao , p. , yu , b. , 2006 . on model selection consistency of lasso .res . 7 , 2541 - 2563 .zou , h. , 2006 .the adaptive lasso and its oracle properties .assoc . 101 , 1418 - 1492 .zou , h. , hastie , t. , tibshirani , r. , 2007 . on the degrees of freedom of lasso .35 , 2173 - 2192 .zou , h. , hastie , t. , 2005 .regularization and variable selection via the elastic net .j. r. stat .67 , 301 - 320 .we here give some lemmas that is used for proving the main theorems .let be random variables .we define the largest value among by .[ lemma : max - chi2-e ] let be i.i.d .random variables from .we define . then , at each fixed , hold , where is the derivative of the gamma function at .( [ eq : lemma - max - chi2-e ] ) implies that =1.\end{aligned}\ ] ] by slightly modifying example 3 , pp.72 - 73 in , we can show that converges to the double exponential distribution .then , ( [ eq : lemma - max - chi2-e ] ) is a direct conclusion of proposition 2.1 ( iii ) in .[ lemma : prob - bound - mth - largest - chi2 ] let be i.i.d .random variables from . at each fixed , &=0\\ \label{eq : prob - bound - mth - largest - chi2-upper } \lim_{n\to\infty}\p\left[x_{(m)}>2\log n)\right]&=0\end{aligned}\ ] ] hold , where is an arbitrary positive constant .we denote the probability distribution function of by .the probability density function of is given by .we have .thus , we have as by applying and lhospital s rule . therefore , for a random variable , \sim 2f_1(x)\ ] ] holds for a sufficiently large . by ( [ eq : lemma - chi2-p - bound-1 ] ), we obtain & \le\sum_{i=1}^n\p\left[x_i>2\log n\right]\notag\\ & \sim 2nf_1\left(2\log n\right)\notag\\ & = \frac{1}{\pi}\frac{1}{\sqrt{\log n}}\to 0~(n\to\infty).\end{aligned}\ ] ] since for any , we have ( [ eq : prob - bound - mth - largest - chi2-upper ] ) . on the other hand , by ( [ eq : lemma - chi2-p - bound-1 ] ) , we have for a sufficiently large . 
since this goes to , we obtain ( [ eq : prob - bound - mth - largest - chi2-lower ] ) by theorem 2.2.1 in .[ lemma : cj > maxci ] for any and any , \le 2\pi^{-1/2}\rho^{-1/2}n^{-\rho}\ ] ] holds for a sufficiently large .we define .we obtain & \ge\p\left[\left[\tc_j^2>\tau_{n,\rho}\right ] \bigcap\left[\tau_{n,\rho}>\max_{i\in \ok^*}\tc_i^2\right]\right]\notag\\ & = 1-\p\left[\left[\tc_j^2\le\tau_{n,\rho}\right ] \bigcup\left[\tau_{n,\rho}\le\max_{i\in \ok^*}\tc_i^2\right]\right]\notag\\ & \ge 1-\p\left[\tc_j^2\le\tau_{n,\rho}\right]- \p\left[\max_{i\in \ok^*}\tc_i^2\ge \tau_{n,\rho}\right].\end{aligned}\ ] ] by the definition of , we have &=\p\left[|\tc_j|\le\sqrt{\tau_{n,\rho}}\right]\notag\\ & = \p\left[|\sqrt{n}\beta_j/\sigma+\oc_j|\le\sqrt{\tau_{n,\rho}}\right]\notag\\ & \le\p\left[\sqrt{n}|\beta_j|/\sigma-|\oc_j|\le \sqrt{\tau_{n,\rho}}\right]\notag\\ & = \p\left[|\oc_j|\ge\sqrt{n}|\beta_j|/\sigma-\sqrt{\tau_{n,\rho}}\right]\notag\\ & \le\p\left[|\oc_j|\ge\sqrt{\tau_{n,\rho}}\right]\notag\\ & = \p\left[\oc_j^2\ge\tau_{n,\rho}\right]\end{aligned}\ ] ] for a sufficiently large .note that this evaluation is not tight but is enough in this paper . since by the definition of , by ( [ eq : lemma - chi2-p - bound-1 ] ) and ( [ eq : cj > maxci-1 ] ), we have \le \pi^{-1/2}\rho^{-1/2}n^{-\rho}\ ] ] for a sufficiently large . on the other hand, holds for since holds for . by ( [ eq : lemma - chi2-p - bound-1 ] ), we thus have & \le\sum_{j\in\ok^*}\p[\tc_i^2\ge\tau_{n,\rho}]\notag\\ & \sim ( n - k^*)\pi^{-1/2}(\rho+1)^{-1/2}n^{-(\rho+1)}\notag\\ & \le\pi^{-1/2}\rho^{-1/2}n^{-\rho}\end{aligned}\ ] ] for a sufficiently large . by ( [ eq : cj > maxci-0 ] ) , ( [ eq : cj > maxci-2 ] ) and ( [ eq : cj > maxci-3 ] ) , we obtain ( [ eq : cj > maxci ] ) as desired .[ lemma : p - oen*-bound ] \le k^*\pi^{-1/2}\rho^{-1/2}n^{-\rho}\ ] ] holds for any and a sufficiently large .if does not occur then there exist such that .this implies that there exist and that satisfy .therefore , we have . by lemma[ lemma : cj > maxci ] , we then obtain ( [ eq : p - oen*-bound ] ) .[ lemma : e - cp1 ^ 2-ioe * ] holds for a fixed .we define and .we also define an event . by the cauchy - schwarz inequality, we have \notag\\ & \le\e[(\oc_{p_1}+(\beta_{p_1}/\sigma)\sqrt{n})^{2m}i_{\oe_n^*}]\notag\\ & \le\e[(\oc+(\obeta/\sigma)\sqrt{n})^{2m}i_{\oe_n^*}]\notag\\ & \le\e[(\oc+(\obeta/\sigma)\sqrt{n})^{2m}i_fi_{\oe_n^ * } ] + \e[(\oc+(\obeta/\sigma)\sqrt{n})^{2m}i_{\of}i_{\oe_n^*}]\notag\\ & \le 2^{2m}\e[\oc^{2m}i_fi_{\oe_n^ * } ] + 2^{2m}(\obeta/\sigma)^{2m}n^m\e[i_{\of}i_{\oe_n^*}]\notag\\ & \le 2^{2m}\e[(\oc^2)^{m}i_{\oe_n^ * } ] + 2^{2m}(\obeta/\sigma)^{2m}n^m\e[i_{\oe_n^*}]\notag\\ & \le 2^{2m}\sqrt{\e[(\oc^2)^{2m}]}\sqrt{\p[\oe_n^ * ] } + 2^{2m}(\obeta/\sigma)^{2m}n^m\p[\oe_n^*].\end{aligned}\ ] ] by lemma [ lemma : p - oen*-bound ] with , the second term of ( [ eq : e - cp1 ^ 2-ioe*-2 ] ) goes to zero as . since is the largest value among i.i.d . sequence with size , the first term of ( [ eq : e - cp1 ^ 2-ioe*-2 ] ) goes to zero as by lemma [ lemma : max - chi2-e ] and lemma [ lemma : p - oen*-bound ] with the above choice of .[ lemma : e - ttheta_k^2-bound ] if then \le ( 2\log n)^m\ ] ] holds for a fixed and sufficiently large . we can write =\e\left[\ttheta_k^{2m}i_{\oe_n^*}\right ] + \e\left[\ttheta_k^{2m}i_{e_n^*}\right].\ ] ] by lemma [ lemma : e - cp1 ^2-ioe * ] and the definition of , \le\e\left[\tc_{p_1}^{2m}i_{\oe_n^*}\right]\to 0~(n\to\infty).\ ] ] we define .if occurs then and is the largest value among i.i.d . 
random sequence with length .therefore , by lemma [ lemma : max - chi2-e ] , }{(2\log n)^m } \le\frac{\e\left[\tc^{2m}\right]}{(2\log n)^m}\to 1~(n\to\infty).\ ] ]we give the proofs of the main theorems below . for an , the risk is reformulated as where we used ( [ eq : dist - of - ecv ] ) at the third line and the orthogonality condition at the last line .the last term is often called the degree of freedom ; see e.g. .let be an -dimensional vector that is constructed by removing from .we define .although is a function of , we regard this as a function under a fixed and denote it by .let be the largest value in . by ( [ eq : component - wise - scaling ] ) , we have note here that is well - defined even when under the definition of in ( [ eq : component - wise - scaling ] ) .this is lipschitz continuous as a function of when is fixed .it is thus absolutely continuous . on the other hand , we denote expectation with respect to by .we have and , where denotes the determinant of a matrix .therefore , is always replaced with by change of variables .we also denote a conditional expectation with respect to given by .we define }(\tc_j|\tcv_{-j}) ] when ] . by the definition of in ( [ eq : component - wise - scaling ] ), we then have \notag\\ & = \p\left[\left\{\ealpha_j>1+\epsilon_{j , n}\right\}\bigcap \oe_0\right]\notag\\ & = \p\left[\left\{\frac{\ttheta_k}{|\tc_j|}>\epsilon_{j , n}\right\}\bigcap \oe_0\right]\notag\\ & \le\p\left[\frac{\ttheta_k}{\sqrt{2\log n } } -\frac{|\tc_j|}{(|\beta_j|+\delta)\sqrt{n}}>0\right]\notag\\ & \le \p\left[\ttheta_k>\sqrt{2\log n}\right ] + \p\left[|\tc_j|<(|\beta_j|+\delta)\sqrt{n}\right].\end{aligned}\ ] ] for the first term of ( [ eq : ealpha_j - bound - in*-2 ] ) , we have \notag\\ & = \p\left[\ttheta_k^2>2\log n\right]\notag\\ & = \p\left[\ttheta_k^2>2\log n|e_n^*\right]\p\left[e_n^*\right]+ \p\left[\ttheta_k^2>2\log n|\oe_n^*\right]\p\left[\oe_n^*\right]\notag\\ & \le\p\left[\ttheta_k^2>2\log n|e_n^*\right]+ \p\left[\oe_n^*\right].\end{aligned}\ ] ] the second term of ( [ eq : ealpha_j - bound - in*-3 ] ) goes to zero as by lemma [ lemma : p - oen*-bound ] . if occurs then is the largest value among i.i.d . sequence with size .therefore , by ( [ eq : prob - bound - mth - largest - chi2-upper ] ) in lemma [ lemma : prob - bound - mth - largest - chi2 ] , the first term of ( [ eq : ealpha_j - bound - in*-3 ] ) goes to zero as .thus , the first term of ( [ eq : ealpha_j - bound - in*-2 ] ) goes to zero as . recall that for , where .then , for the second term of ( [ eq : ealpha_j - bound - in*-2 ] ) , we obtain & = \p\left[|\sqrt{n}\beta_j+\oc_j|<(|\beta_j|+\delta)\sqrt{n}\right]\notag\\ & \le\p\left[\sqrt{n}|\beta_j|-|\oc_j|<(|\beta_j|+\delta)\sqrt{n}\right]\notag\\ & = \p\left[|\oc_j|>\delta\sqrt{n}\right]\to 0~(n\to\infty).\end{aligned}\ ] ] since holds , we obtain ( [ eq : ealpha_j - lower - bound - k*-0 ] ) as desired .on the other hand , we consider ( [ eq : ealpha_j - lower - bound - ok * ] ) . 
for any , we have & = \p\left[\{\ealpha_j\le 2-\epsilon\}\bigcap\oe_0\right]\notag\\ & \le\p\left[\ttheta_k\le ( 1-\epsilon)|\tc_j|\right]\notag\\ & \le\p\left[\ttheta_k\le\delta_n\right ] + \p\left[(1-\epsilon)|\tc_j|>\delta_n\right].\end{aligned}\ ] ] for the first term of ( [ eq : ealpha_j - bound - notin*-1 ] ) , we have & = \p\left[\ttheta_k^2\le\delta_n^2\right]\notag\\ & = \p\left[\ttheta_k^2\le\delta_n^2|e_n^*\right]\p[e_n^*]+ \p\left[\ttheta_k\le\delta_n^2|\oe_n^*\right]\p[\oe_n^*]\notag\\ & \le\p\left[\ttheta_k^2\le\delta_n^2|e_n^*\right]+\p[\oe_n^*].\end{aligned}\ ] ] by lemma [ lemma : p - oen*-bound ] , the second term of ( [ eq : ealpha_j - bound - notin*-2 ] ) goes to zero as .we set .if occurs then is the largest value among i.i.d . sequence with size .therefore , by ( [ eq : prob - bound - mth - largest - chi2-lower ] ) in lemma [ lemma : prob - bound - mth - largest - chi2 ] and the choice of , the first term of ( [ eq : ealpha_j - bound - notin*-2 ] ) goes to zero as .we define . for the second term of ( [ eq : ealpha_j - bound - notin*-1 ] ) , we have \le\p\left[\tc^2>2\log n\right]\end{aligned}\ ] ] since .here , is the largest value among i.i.d . sequence with size by the definitions of and .hence , by ( [ eq : prob - bound - mth - largest - chi2-upper ] ) in lemma [ lemma : prob - bound - mth - largest - chi2 ] , ( [ eq : ealpha_j - bound - notin*-3 ] ) goes to zero as . by( [ eq : risk - for - lst ] ) and ( [ eq : theorem - rnk - evalpha ] ) , we have -\e\left[(\ealpha_{p_j}-1)^2\ttheta_{k^*}^2\right ] -2\e\left[(\ealpha_{p_j}-1)^2\right]\right)\notag\\ & = \frac{\sigma^22\log n}{n } \sum_{j=1}^{k^*}\left(\frac{\e\left[\ttheta_{k^*}^2\right]}{2\log n } -\frac{\e\left[(\ealpha_{p_j}-1)^2\ttheta_{k^*}^2\right]}{2\log n } -2\frac{\e\left[(\ealpha_{p_j}-1)^2\right]}{2\log n}\right)\end{aligned}\ ] ] through a simple calculation .we evaluate the three terms in the sum of ( [ eq : theorem - r(1)-r(evalpha)-1 ] ) .we first have /(2\log n)=1\ ] ] by lemma [ lemma : e - ttheta_k^2-bound ] .hence , the proof is completed by showing that the second and third terms of ( [ eq : theorem - r(1)-r(evalpha)-1 ] ) goes to zero as .we define and , where is defined in ( [ eq : def - epsilon_n ] ) .we have for by the definition of . 
and , if occurs then for any .we then obtain & = \e\left[(\ealpha_{p_j}-1)^2i_{g_j\bigcap e_n^*}\right ] + \e\left[(\ealpha_{p_j}-1)^2i_{\og_j\bigcup\oe_n^*}\right]\notag\\ & \le\e\left[i_{g_j\bigcap\oe_n^*}\right]+\epsilon_n^2+\p[\oe_n^*]\notag\\ & \le\sum_{l\in k^*}\p\left[(\ealpha_l-1)^2>\epsilon_n^2\right]+\epsilon_n^2+\p[\oe_n^*].\end{aligned}\ ] ] ( [ eq : theorem - r(1)-r(evalpha)-3 ] ) goes to zero as by ( [ eq : ealpha_j - lower - bound - k*-0 ] ) in theorem [ theorem : ealpha_j - convergence ] , the definition of and lemma [ lemma : p - oen*-bound ] .we also have \notag\\ & = \e\left[(\ealpha_{p_j}-1)^2\ttheta_{k^*}^2i_{g_j\bigcap e_n^*}\right ] + \e\left[(\ealpha_{p_j}-1)^2\ttheta_{k^*}^2i_{\og_j\bigcup\oe_n^*}\right]\notag\\ & \le\e\left[\ttheta_{k^*}^2i_{g_j}i_{e_n^*}\right ] + \epsilon_n^2\e[\ttheta_{k^*}^2]+\e[\ttheta_{k^*}^2i_{\oe_n^*}].\end{aligned}\ ] ] for the first term of ( [ eq : theorem - r(1)-r(evalpha)-4 ] ) , by the cauchy - schwarz inequality , we have }{2\log n } & \le\frac{\sqrt{\e\left[\ttheta_{k^*}^4\right]}}{2\log n } \sqrt{\e\left[i_{g_j}i_{e_n^*}\right]}\notag\\ & \le\frac{\sqrt{\e\left[\ttheta_{k^*}^4\right]}}{2\log n } \sqrt{\sum_{l\in k^*}\p\left[(\ealpha_l-1)^2>\epsilon_n^2\right]}.\end{aligned}\ ] ] ( [ eq : theorem - r(1)-r(evalpha)-5 ] ) goes to zero as by lemma [ lemma : e - ttheta_k^2-bound ] and ( [ eq : ealpha_j - lower - bound - k*-0 ] ) in theorem [ theorem : ealpha_j - convergence ] .the second term of ( [ eq : theorem - r(1)-r(evalpha)-4 ] ) goes to zero as by lemma [ lemma : e - ttheta_k^2-bound ] and the definition of . by the cauchy - schwarz inequality, the third term of ( [ eq : theorem - r(1)-r(evalpha)-4 ] ) is bounded above by }\sqrt{\p[\oe_n^*]}$ ] .this goes to zero as by lemma [ lemma : e - ttheta_k^2-bound ] and lemma [ lemma : p - oen*-bound ] .we thus obtain ( [ eq : theorem - r(1)-r(evalpha ) ] ) as desired . | soft - thresholding is a sparse modeling method that is typically applied to wavelet denoising in statistical signal processing and analysis . it has a single parameter that controls a threshold level on wavelet coefficients and , simultaneously , amount of shrinkage for coefficients of un - removed components . this parametrization is possible to cause excess shrinkage , thus , estimation bias at a sparse representation ; i.e. there is a dilemma between sparsity and prediction accuracy . to relax this problem , we considered to introduce positive scaling on soft - thresholding estimator , by which threshold level and amount of shrinkage are independently controlled . especially , in this paper , we proposed component - wise and data - dependent scaling in a setting of non - parametric orthogonal regression problem including discrete wavelet transform . we call our scaling method adaptive scaling . we here employed soft - thresholding method based on lars(least angle regression ) , by which the model selection problem reduces to the determination of the number of un - removed components . we derived a risk under lars - based soft - thresholding with the proposed adaptive scaling and established a model selection criterion as an unbiased estimate of the risk . we also analyzed some properties of the risk curve and found that the model selection criterion is possible to select a model with low risk and high sparsity compared to a naive soft - thresholding method . this theoretical speculation was verified by a simple numerical experiment and an application to wavelet denoising . 
non - parametric orthogonal regression , soft - thresholding , shrinkage , adaptive scaling , wavelet denoising |
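the scaling idea summarized in the abstract above can be made concrete with a small numerical sketch. the code below is not the paper's lars-based adaptive-scaling rule; it only contrasts plain soft-thresholding, where the universal threshold sqrt(2 log n) fixes both the support and the amount of shrinkage, with a variant in which the surviving coefficients are rescaled by an illustrative per-coefficient factor. all variable names and the particular choice of scaling factor are ours.

```python
# minimal sketch under assumptions of our own (not the paper's LARS-based
# adaptive-scaling rule): plain soft-thresholding ties the threshold level and
# the amount of shrinkage to the single value lam, while rescaling the surviving
# coefficients decouples the two.  the per-coefficient factor alpha below simply
# undoes the shrinkage and is chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 1024, 1.0
beta = np.zeros(n)
beta[:20] = 3.0                                           # sparse "true" signal
c = np.sqrt(n) * beta / sigma + rng.standard_normal(n)    # normalized empirical coefficients
lam = sigma * np.sqrt(2.0 * np.log(n))                    # universal threshold ~ sqrt(2 log n)

kept = np.abs(c) > lam
soft = np.where(kept, np.sign(c) * (np.abs(c) - lam), 0.0)   # plain soft-thresholding
alpha = np.ones(n)
alpha[kept] = np.abs(c[kept]) / (np.abs(c[kept]) - lam)      # illustrative scaling factor
scaled = alpha * soft                                        # same support, shrinkage bias removed

truth = np.sqrt(n) * beta / sigma
for name, est in [("soft", soft), ("soft + scaling", scaled)]:
    print(f"{name:15s} kept={int(kept.sum()):3d}   mse={np.mean((est - truth) ** 2):.3f}")
```

with the strong signal used here, removing the shrinkage on the retained coefficients lowers the mean squared error while leaving the selected support unchanged, which is exactly the tension between sparsity and estimation bias that the adaptive scaling of the paper is meant to relax.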
due to its vector - space preserving property , random linear network coding can be viewed as transmitting subspaces over an operator channel . as such , error control for random linear network codingcan be modeled as a coding problem , where codewords are subspaces and the distance is measured by either the subspace distance or the injection metric .codes in the projective space , referred to as subspace codes henceforth , and codes in the grassmannian , referred to as constant - dimension codes ( cdcs ) henceforth , have been both investigated for error control in random linear network coding .using cdcs is sometimes advantageous since the fixed dimension of cdcs simplifies the network protocol somewhat .the construction and properties of cdcs thus have attracted a lot of attention .different constructions of cdcs have been proposed .bounds on cdcs based on packing properties are investigated ( see , for example , ) , and the covering properties of cdcs are investigated in .the construction and properties of subspace codes have received less consideration , and previous works on subspace codes ( see , for example , ) have focused on the packing properties . in , bounds on the maximum cardinality of a subspace code with the subspace metric , notably the counterpart of the gilbert bound , are derived .another bound relating the maximum cardinality of cdcs to that of subspace codes is given in .bounds and constructions of subspace codes are also investigated in . despite the previous works , two significant problems remain open .first , despite the aforementioned advantage of cdcs , what is the rate loss of cdcs as opposed to subspace codes of the same minimum distance and hence error correction capability ?since random linear network coding achieves multicast capacity with probability exponentially approaching 1 with the length of the code , the asymptotic rates of subspace codes and asymptotic rate loss of cdcs are both significant .the second problem involves the two metrics that have been introduced for subspace codes : what is the difference between the two metrics proposed for subspace codes and cdcs beyond those discussed in ? note that the two questions are somewhat related , since the first question is applicable for both metrics .the answers to these questions are significant to the code design for error control in random linear network coding . aiming to answer these two questions , our work in this paper focuses on the packing and covering properties of subspace codes .packing and covering properties not only are interesting in their own right as fundamental geometric properties , also are significant for various practical purposes . first , our work is motivated by their significance to design and decoding of subspace codes .since a code can be viewed as a packing of its ambient space , the significance of packing properties is clear .in contrast , the importance of covering properties is more subtle and deserves more explanation .for example , a class of nearly optimal cdcs , referred to as liftings of rank metric codes , have covering radii no less than their minimum distance and thus are not optimal cdcs .this example shows how a covering property is relevant to the design of subspace codes .the covering radius also characterizes the decoding performance of a code , since it is the maximum weight of a decodable error by minimum distance decoding and also has applications to decoding with erasures .second , covering properties are also important for other reasons . 
for example , covering properties are important for the security of keystreams against cryptanalytic attacks .our main contributions of this paper are that for both metrics , we first determine some fundamental geometric properties of the projective space , and then use these properties to derive bounds and to determine the asymptotic rates of subspace codes based on packing and covering .our results provide some answers to both open problems above .first , our results show that for both metrics optimal packing cdcs are optimal packing subspace codes up to a scalar if and only if their dimension is half of their length ( up to rounding ) , which implies that in this case cdcs suffer from a limited rate loss as opposed to subspace codes with the same minimum distance .furthermore , when the asymptotic rate of subspace codes is fixed , the relative subspace distance of optimal subspace codes is twice as much as the relative injection distance .second , our results illustrate the difference between the two metrics from a geometric perspective . above all, the projective space has different geometric properties under the two metrics .the different geometric properties further result in different asymptotic rates of covering codes with the two metrics .with the injection metric , optimal covering cdcs can be used to construct asymptotically optimal covering subspace codes .however , with the subspace metric , this does not hold . to the best of our knowledge ,our results on the geometric properties of the projective space are novel , and our investigation of covering properties of subspace codes is the first one in the literature . note that our investigation of covering properties differs from the study in : while how cdcs cover the grassmannian was investigated in , we consider how subspace codes cover the whole projective space in this paper .our investigation of packing properties leads to tighter bounds than the gilbert bound in , and our relation between the optimal cardinalities of subspace codes and cdcs is also more precise than that in .our asymptotic rates based on packing properties also appear to be novel .the rest of the paper is organized as follows .section [ sec : preliminaries ] reviews necessary background on subspace codes , cdcs , and related concepts . in section [ sec : subspace ] , we investigate the packing and covering properties of subspace codes with the subspace metric . in section [ sec : injection ] , we study the packing and covering properties of subspace codes with the injection metric .finally , section [ sec : conclusion ] summarizes our results and provides future work directions .we refer to the set of all subspaces of with dimension as the grassmannian of dimension and denote it as ; we refer to as the projective space .we have , where is the gaussian binomial .a very instrumental result about the gaussian binomial is that for all : where represents the ratio of non - singular matrices in as tends to infinity . by definition , , where is the euler function .furthermore , by the pentagonal number theorem , . finally , we also have , where is the partition number of . for , both the _ subspace metric _* ( 3 ) ) and _ injection metric _1 ) are metrics over . for all , and if and only if , and if andonly if or . 
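for small parameters the quantities recalled above can be computed directly. the sketch below (helper names ours) evaluates the gaussian binomial and, for two subspaces of gf(2)^v given by spanning rows, obtains the subspace and injection distances from rank computations, using dim(u+v) together with dim of the intersection = dim u + dim v - dim(u+v).

```python
# small illustration over GF(2) (helper names ours): the gaussian binomial counts
# the k-dimensional subspaces of a v-dimensional space, and the two distances
# between subspaces reduce to rank computations on spanning rows.

def gaussian_binomial(v, k, q):
    """number of k-dimensional subspaces of a v-dimensional space over GF(q)."""
    num = den = 1
    for i in range(k):
        num *= q ** (v - i) - 1
        den *= q ** (k - i) - 1
    return num // den

def rank_gf2(rows):
    """rank over GF(2); each row is an integer bit mask."""
    pivots = {}
    rank = 0
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in pivots:
                row ^= pivots[lead]          # cancel the leading bit with an existing pivot
            else:
                pivots[lead] = row
                rank += 1
                break
    return rank

def distances(U, V):
    """subspace and injection distance of the row spaces of U and V."""
    dU, dV = rank_gf2(U), rank_gf2(V)
    d_sum = rank_gf2(U + V)                  # dim(U + V)
    d_int = dU + dV - d_sum                  # dim of the intersection
    return dU + dV - 2 * d_int, max(dU, dV) - d_int

print(gaussian_binomial(4, 2, 2))            # 35 two-dimensional subspaces of F_2^4
U = [0b1000, 0b0100]                         # span{e1, e2} in F_2^4
V = [0b1000, 0b0010]                         # span{e1, e3}
print(distances(U, V))                       # subspace distance 2, injection distance 1
```

in the example shown, u and v are two-dimensional subspaces of f_2^4 meeting in a line, so the subspace distance is 2 while the injection distance is 1, illustrating that the subspace distance is twice the injection distance whenever the two subspaces have the same dimension, which is why the injection metric suffices for constant-dimension codes.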
a _ subspace code _ is a nonempty subset of .the minimum subspace ( respectively , injection ) distance of a subspace code is the minimum subspace ( respectively , injection ) distance over all pairs of distinct codewords .a subset of is called a constant - dimension code ( cdc ) .a cdc is thus a subspace code whose codewords have the same dimension . since for cdcs , we focus on the injection metric when considering cdcs .we denote the * maximum * cardinality of a cdc in with * minimum injection distance * as .we have , and it is shown for and , the lower bound on in ( [ eq : bounds_ac ] ) is implicit from the code construction in , and the upper bounds on in ( [ eq : bounds_ac ] ) are from .thus , cdcs in ( ) with minimum injection distance and cardinality proposed in are optimal up to a scalar ; we refer to these cdcs as kk codes henceforth .the covering radius in of a cdc is defined as .we also denote the * minimum * cardinality of a cdc with covering radius in as .it was shown in that is on the order of , and an asymptotically optimal construction of covering cdcs is designed in ( * ? ? ?* proposition 12 ) .we first investigate the properties of balls with subspace radii in , which will be instrumental in our study of packing and covering properties of subspace codes with the subspace metric .we first derive bounds on below . in order to simplify notations ,we denote , which is related to the jacobi theta function by ] for , , , and , illustrates this observation . as a function of the dimension of its center and of its radius ] we are interested in packing subspace codes used with the subspace metric .the maximum cardinality of a code in with minimum subspace distance is denoted as . since , we assume henceforth. we can relate to .first , we remark that for all , , and .the claim is obvious for , and easily shown for by using ( [ eq : gaussian ] ) .we also remark that . for all , we denote the maximum cardinality of a code with minimum subspace distance and codewords having dimensions in as . for , .proposition [ prop : as_rd ] below compares to and shows that is a good approximate of .[ prop : as_rd ] for , and for , . also , we have .let be a code in with minimum subspace distance . for , we have ; therefore there is at most one codeword with dimension less than .similarly , , therefore there is at most one codeword with dimension greater than .thus for and .since the code has minimum subspace distance , we obtain .a cdc in with minimum injection distance has minimum subspace distance , and hence for all .also , the codewords with dimension in a code with minimum subspace distance form a cdc in with minimum injection distance at least , and hence .we compare our lower bound on in proposition [ prop : as_rd ] to the gilbert bound in ( * ? ? ?* theorem 5 ) .the latter shows that , where the average volume is taken over all subspaces in . using the bounds on in proposition [ prop : bound_vs ] , it can be shown that this lower bound is at most . on the other hand , proposition [ prop : as_rd ] and ( [ eq : bounds_ac ] ) yield .the ratio between our lower bound and the gilbert bound is hence at least for all and .therefore , our lower bound in proposition [ prop : as_rd ] is tighter than the gilbert bound in ( * ? ? ?* theorem 5 ) .the lower bound in proposition [ prop : as_rd ] is further tightened below by considering the union of cdcs in different grassmannians .[ prop : lower_as ] for all , , and , we have , where . 
for ,let be a cdc in with minimum subspace distance and cardinality and let .we have , and we now prove that has minimum subspace distance at least by considering two distinct codewords and .first , if , then ; second , if and , then by the minimum distance of . in order to characterize the rate loss by using cdcs instead of subspace codes , we now compare the cardinalities of optimal subspace codes and optimal cdcs with the same minimum subspace distance .note that the bounds on the cardinalities of optimal cdcs in ( [ eq : bounds_ac ] ) assume the injection metric for cdc .when is even , a cdc with a minimum subspace distance has a minimum injection distance .when is odd , a cdc with a minimum subspace distance has a minimum injection distance .thus , a cdc has a minimum subspace distance at least if and only if it has minimum injection distance at least .hence , we compare and in proposition [ prop : as_v_ac ] below .[ prop : as_v_ac ] _ ( comparison between optimal subspace codes and cdcs in the subspace metric ) ._ for and , by ( [ eq : gaussian ] ) , proposition [ prop : as_rd ] , and ( [ eq : bounds_ac ] ) , we have .also , proposition [ prop : as_rd ] and ( [ eq : gaussian ] ) also lead to where .since and , we obtain where ( [ eq : as1 ] ) follows from ( [ eq : bounds_ac ] ) .we now compare the relation between and in proposition [ prop : as_v_ac ] to the one determined in ( * ? ? ?* theorem 5 ) .the latter only provides the following lower bound on : .the singleton bound on cdcs indicates that , which in turn satisfies by ( [ eq : gaussian ] ) .hence the lower bound on in ( * ? ? ?* theorem 5 ) is at most .the ratio between our lower bound in proposition [ prop : as_v_ac ] and the lower bound in ( * ? ? ?* theorem 5 ) is at least , and thus our lower bound in proposition [ prop : as_v_ac ] is tighter than the bound in ( * ? ? ?* theorem 5 ) for all cases .the bounds in proposition [ prop : as_v_ac ] help us determine the asymptotic behavior of .we first define the rate of a subspace code as and where is the minimum subspace distance of a code , the asymptotic rate of a subspace code and of a cdc of given dimension can be easily determined .[ prop : as ] _( asymptotic rate of packing subspace codes in the subspace metric ) ._ for , . for or , ; for , ; for , . first , ( [ eq : bounds_ac ] ) and lemma [ lemma : bounds_e ] yield for .since , we also obtain for .second , ( [ eq : as_v_ac ] ) for and ( [ eq : bounds_ac ] ) yield .propositions [ prop : as_v_ac ] and [ prop : as ] provide several important insights .first , proposition [ prop : as_v_ac ] indicates that optimal cdcs with dimension being half of the block length up to rounding ( and ) are optimal subspace codes up to a scalar . in this case, the optimal cdcs have a limited rate loss as opposed to optimal subspace codes with the same error correction capability .when , the rate loss suffered by optimal cdcs increases with .proposition [ prop : as ] indicates that using cdcs with dimension leads to a decrease in rate on the order of , where . 
since the rate loss increases with , using a cdc with a dimension further from leads to a larger rate loss .the conclusion above can be explained from a combinatorial perspective as well .when or , by lemma [ lemma : bounds_e ] , is the same as up to scalar .thus it is not surprising that the optimal packings in are the same as those in up to scalar .we also comment that the asymptotic rates in proposition [ prop : as ] for subspace codes come from singleton bounds .the asymptotic rate is achieved by kk codes .the asymptotic rate is similar to that for rank metric codes .this can be explained by the fact that the asymptotic rate is also achieved by kk codes when , whose cardinalities are equal to those of optimal rank metric codes . in table[ table : comparison ] we compare the bounds on derived in this paper with each other and with existing bounds in the literature , for , , and ranging from to .we consider the lower bound in proposition [ prop : as_rd ] , its refinement in proposition [ prop : lower_as ] , and the lower bounds in and ( * ? ? ?* theorem 5 ) described above , and the upper bound comes from proposition [ prop : as_rd ] .note that proposition [ prop : as_v_ac ] is not included in the comparison since its purpose is to compare the cardinalities of optimal subspace codes and optimal cdcs with the same minimum subspace distance .since bounds in propositions [ prop : as_rd ] and [ prop : lower_as ] and ( * ? ? ?* theorem 5 ) depend on cardinalities of either related cdcs or optimal cdcs , we use the cardinalities of cdcs with dimension proposed in and as lower bounds on and the upper bound in on to derive the numbers in table [ table : comparison ] .for example , the lower bound of proposition [ prop : as_rd ] is simply given by the construction in when , and given by the construction in for other values of .table [ table : comparison ] illustrates our lower bounds in propositions [ prop : as_rd ] and [ prop : lower_as ] are tighter than those in and ( * ? ? ?* theorem 5 ) .the cardinalities of cdcs with dimension in and , displayed as the lower bound in proposition [ prop : as_rd ] , are quite close to the lower bound in proposition [ prop : lower_as ] , supporting our conclusion that the rate loss suffered by properly designed cdcs is smaller when the dimension is close to .also , the lower and upper bounds in proposition [ prop : as_rd ] depend on , and hence the bounds for and are the same .finally , the tightness of the bounds improves as the minimum distance of the code increases , leading to very tight bounds for . [ cols="^,^,^,^,^,^",options="header " , ] [ prop : km ] _ ( greedy bound for covering codes in the injection metric ) ._ for all , , and , $ ] .we finally determine the asymptotic behavior of by using the asymptotic rate . according to proposition [ prop : bound_vm ], the volume of a ball with injection radius is constant up to a scalar .the consequence of this geometric result is that the greedy algorithm used to prove proposition [ prop : km ] above will produce asymptotically optimal covering codes in the injection metric .however , since the volume of balls in the subspace metric does depend on the center ( see proposition [ prop : bound_vs ] ) , a direct application of the greedy algorithm for the subspace metric does not necessarily produce asymptotically optimal covering codes in the subspace metric .[ prop : km ] _( asymptotic rate of covering subspace code in the injection metric ) ._ for , . for , . by proposition [ prop : km_2rho >n ] , for . 
we have by lemma [ lemma : bounds_e ] and proposition [ prop : bound_vm ] .this asymptotically becomes for .similarly , proposition [ prop : km ] , lemma [ lemma : bounds_e ] , and proposition [ prop : bound_vm ] yield \end{aligned}\ ] ] which asymptotically becomes for .the proof of proposition [ prop : km ] indicates that the minimum cardinality of a covering subspace code with the injection metric is on the order of .a covering subspace code is easily obtained by taking the union of optimal covering cdcs for all constant dimensions , leading to a code with cardinality . by ,the cardinality of the union is on the order of .thus , a union of optimal covering cdcs ( in their respective grassmannians ) results in asymptotically optimal covering subspace codes with the injection metric .propositions [ prop : ks ] and [ prop : km ] as well as their implications illustrate the differences between the subspace and injection metrics .first , the asymptotic rates of optimal covering subspace codes with the two metrics are different .second , a union of optimal covering cdcs ( in their respective grassmannians ) results in asymptotically optimal covering subspace codes with the injection metric only , not with the subspace metric .these differences can be attributed to the different behaviors of the volume of a ball with subspace and injection radius .although , proposition [ prop : bound_vs ] indicates that decreases with ( ) , while according to proposition [ prop : bound_vm ] , remains asymptotically constant .hence , for , the balls with subspace radius centered at a subspace with dimension have significantly smaller volumes than their counterparts with an injection radius .therefore , covering the subspaces with dimension requires more balls with subspace radius than balls with injection radius , which explains the different rates for and . also , since the volume of a ball with subspace radius reaches its minimum for and has the largest cardinality among all grassmannians , using covering cdcs of dimension to cover is not advantageous .thus , a union of covering cdcs does not lead to an asymptotically optimal covering subspace code in the subspace metric .in this paper , we derive packing and covering properties of subspace codes for the subspace and the injection metrics .we determine the asymptotic rates of packing and covering codes for both metrics , compare the performance of constant - dimension codes to that of general subspace codes , and provide constructions or semi - constructive bounds of nearly optimal codes in all four cases .these results are briefly summarized in table [ table : results ] . 
despite these results ,some open problems remain for subspace codes .first of all , our bounds on the volumes of balls derived in lemma [ lemma : bounds_e ] and propositions [ prop : bound_vs ] and [ prop : bound_vm ] may be tightened .although the ratio between the upper and lower bounds is a function of the field size which tends to as tends to infinity , it is unknown whether this ratio is the smallest that can be established .this issue also applies to the bounds on packing subspace codes in propositions [ prop : as_v_ac ] and [ prop : am_v_ac ] , where the ratios between upper and lower bounds are similar functions of .also , we only considered balls with radii up to , as only this case was useful for our derivations ; the case where the radius is above remains unexplored .second , the bounds on covering codes in both the subspace and the injection metrics derived in this paper are only asymptotically optimal .it remains unknown whether any of these bounds is tight up to a scalar .third , the design of packing and covering subspace codes is an important topic for future work .this is especially the case for covering codes in the subspace metric , as no asymptotically optimal construction is known so far .finally , the aim of this paper was to derive simple bounds on subspace codes which are good for all parameter values , especially large values . on the other hand , a wealth of ad hoc bounds and heuristics can be used to tighten our results for small parameter values .the authors are grateful to the anonymous reviewers and the associate editor dr .mario blaum for their constructive comments , which have helped to improve this paper .we now prove the bounds on for . by definition, is a double summation of exponential terms .the main idea of the proof is to determine the largest term in the summation : this not only gives a good lower bound , but the whole summation can also be upper bounded by that term times a constant .first , by lemma [ lemma : ns ] , , where satisfies . thus by ( [ eq : gaussian ] ) , where .hence , , where .since is maximized for , we need to consider the following three cases .* case i : .we have and hence is maximized for : .thus , and it is easy to show that since .* case ii : .we have and hence is maximized for : .it is easily shown that for all and hence .we also obtain .* case iii : .we have and hence is maximized for : .thus , and it is easy to show that since .first , .we now prove the upper bound by determining the largest term in the double summation of . since , we assume without loss of generality .the triangular inequality indicates that if or ; also , by definition of the injection distance , if .we can hence restrict the range of parameters in the summation formula of as follows : by lemma [ lemma : nm ] and [ eq : gaussian ] , we have for and for , which with ( [ eq : range_vi ] ) yields where we make the following changes of variables : , , in ( [ eq : vi1 ] ) . since , we have . also , for , and hence ; similarly , we obtain .hence , ( [ eq : vi1 ] ) leads to where we set and use in ( [ eq : vi3 ] ) .10 t. ho , m. mdard , r. koetter , d. r. karger , m. effros , j. shi , and b. leong , `` a random linear network coding approach to multicast , '' _ ieee trans .info . theory _52 , no .44134430 , october 2006 . | codes in the projective space and codes in the grassmannian over a finite field referred to as subspace codes and constant - dimension codes ( cdcs ) , respectively have been proposed for error control in random linear network coding . 
for subspace codes and cdcs , a subspace metric was introduced to correct both errors and erasures , and an injection metric was proposed to correct adversarial errors . in this paper , we investigate the packing and covering properties of subspace codes with both metrics . we first determine some fundamental geometric properties of the projective space with both metrics . using these properties , we then derive bounds on the cardinalities of packing and covering subspace codes , and determine the asymptotic rates of optimal packing and optimal covering subspace codes with both metrics . our results not only provide guiding principles for the code design for error control in random linear network coding , but also illustrate the difference between the two metrics from a geometric perspective . in particular , our results show that optimal packing cdcs are optimal packing subspace codes up to a scalar for both metrics if and only if their dimension is half of their length ( up to rounding ) . in this case , cdcs suffer from only limited rate loss as opposed to subspace codes with the same minimum distance . we also show that optimal covering cdcs can be used to construct asymptotically optimal covering subspace codes with the injection metric only . network coding , random linear network coding , error control codes , subspace codes , constant - dimension codes , packing , covering , subspace metric , injection metric . |
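the greedy construction underlying the covering bounds discussed above can be tried out on the smallest interesting projective space. the sketch below enumerates all subspaces of f_2^3, computes injection distances from intersection dimensions, and greedily selects centers until every subspace lies within the prescribed covering radius; it only illustrates the generic greedy covering argument, not the asymptotically optimal constructions cited in the paper, and all names in it are ours.

```python
# toy greedy covering of the projective space of F_2^3 in the injection metric
# (an illustration of the generic greedy argument, not the paper's construction).
from itertools import combinations

def span(vectors):
    """span over GF(2) of vectors given as 3-bit integers."""
    s = {0}
    for v in vectors:
        s |= {x ^ v for x in s}
    return frozenset(s)

# all 16 subspaces of F_2^3: the zero space, 7 lines, 7 planes and F_2^3 itself
subspaces = {span(c) for r in range(4) for c in combinations(range(1, 8), r)}

def dim(U):
    return len(U).bit_length() - 1           # |U| = 2^dim(U)

def d_inj(U, V):
    return max(dim(U), dim(V)) - dim(U & V)  # the intersection of subspaces is a subspace

def greedy_cover(points, radius):
    uncovered, centers = set(points), []
    while uncovered:
        # pick the subspace whose ball covers the most currently uncovered subspaces
        c = max(points, key=lambda U: sum(d_inj(U, V) <= radius for V in uncovered))
        centers.append(c)
        uncovered = {V for V in uncovered if d_inj(c, V) > radius}
    return centers

centers = greedy_cover(subspaces, radius=1)
print(len(subspaces), "subspaces;", len(centers), "greedy centers for covering radius 1")
```

the same pattern, a distance oracle plus greedy selection of centers, is what the greedy bound for covering codes formalizes; the volume of the balls in the chosen metric then controls how many centers are needed, which is where the different behavior of the two metrics enters.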
quantum mechanics was originally introduced as a non commutative matrix calculus of observables by werner heisenberg ( heisenberg 1925 ) and parallel as a wave mechanics by erwin schrdinger ( schrdinger 1926 ) .both structurally very different theories , matrix mechanics and wave mechanics could explain fruitfully the early observed quantum phenomena .already in the same year the two theories were shown to be realizations of the same , more abstract , ket - bra formalism by dirac ( dirac 1958 ) .only some years later , in 1934 , john von neumann put forward a rigorous mathematical framework for quantum theory in an infinite dimensional separable complex hilbert space ( von neumann 1955 ) .matrix mechanics and wave mechanics appear as concrete realizations : the first one if the hilbert space is , the collection of all square summable complex numbers , and the second one if the hilbert space is , the collection of all square integrable complex functions .the formulation of quantum mechanics in the abstract framework of a complex hilbert space is now usually referred to as the ` standard quantum mechanics ' .the basic concepts - the vectors of the hilbert space representing the states of the system and the self - adjoint operators representing the observables - in this standard quantum mechanics are abstract mathematical concepts defined mathematically in and abstract mathematical space , and this is a problem for the physicists working to understand quantum mechanics .several approaches have generalized the standard theory starting from more physically defined basic concepts .john von neumann and garett birkhoff have initiated one of these approaches ( birkhoff and von neumann 1936 ) were they analyze the difference between quantum and classical theories by studying the ` experimental propositions '. they could show that for a given physical system classical theories have a boolean lattice of experimental propositions while for quantum theory the lattice of experimental propositions is not boolean .similar fundamental structural differences between the two theories have been investigated by concentrating on different basic concepts .the collection of observables of a classical theory was shown to be a commutative algebra while this is now the case for the collection of quantum observables ( segal 1947 , emch 1984 ) .luigi accardi and itamar pitowski obtained a analogous result by concentrating on the probability models connected to the two theories : classical theories have a kolmogorovian probability model while the probability model of a quantum theory is non kolmogorovian ( accardi 1982 , pitowski 1989 ) .the fundamental structural differences between the two types of theories , quantum and classical , in different categories , was interpreted as indicating also a fundamental difference on the level of the nature of the reality that both theories describe : the micro world should be ` very different ' from the macro world .the author admits that he was himself very much convinced of this state of affairs also because very concrete attempts to understand quantum mechanics in a classical way had failed as well : e.g. the many ` physical ' hidden variable theories that had been tried out ( selleri 1990 ) . 
in this paperwe want to concentrate on this problem : in which way the quantum world is different from the classical world .we shall do this in the light of the approach that we have been elaborating in brussels and that we have called the ` hidden measurement formalism ' .we concentrate also on the different paradoxes in quantum mechanics : the measurement problem , the schrdinger cat paradox , the classical limit , the einstein - podolsky - rosen paradox and the problem of non - locality .we investigate which ones of these quantum problems are due to shortcomings of the standard formalism and which ones point out real physical differences between the quantum and classical world .as we mentioned already in the foregoing section , the structural difference between quantum theories and classical theories ( boolean lattice versus non - boolean lattice of propositions , commutative algebra versus non commutative algebra of observables and kolmogorovian versus non kolmogorovian probability structure ) is one of the most convincing elements for the belief in a deep difference between the quantum world and the classical world . during all the years that these structural differences have been investigated ( mostly mathematically ) thesehas not been much understanding of the physical meaning of these structural differences .in which way would these structural differences be linked to some more intuitive but physically better understood differences between quantum theory and classical theory ? within the hidden measurement approachwe have been able to identify the physical aspects that are at the origin of the structural differences between quantum and classical theories .this are two aspects that both characterize the nature of the measurements that have to be carried out to test the properties of the system under study .let us formulate these two aspects carefully first ._ we have a quantum - like theory describing a system under investigation if the measurements needed to test the properties of the system are such that : _ ( 1 ) : : _ the measurements are not just observations but provoke a real change of the state of the system _ ( 2 ) : : _ there exists a lack of knowledge about the reality of what happens during the measurement process _ it is the lack of knowledge * ( 2 ) * that is theoretically structured in a non kolmogorovian probability model.in a certain sense it is possible to interpret the second aspect , the presence of the lack of knowledge on the reality of the measurement process , as the presence of ` hidden measurements ' instead of ` hidden variables ' . indeed ,if a measurement is performed with the presence of such a lack of knowledge , then this is actually the classical mixture of a set of classical hidden measurements , were for such a classical hidden measurement there would be no lack of knowledge . 
in an analogous way as in a hidden variable theory ,the quantum state is a classical mixture of classical states .this is the reason why we have called the formalism that we are elaborating in brussels and that consists in formalizing in a mathematical theory the physical situations containing the two mentioned aspects , the ` hidden measurement formalism ' .after we had identified the two aspects * ( 1 ) * and * ( 2 ) * it was not difficult to invent a quantum machine fabricated only with macroscopic materials and producing a quantum structure isomorphic to the structure of a two dimensional complex hilbert space , describing for example the spin of a quantum particle with spin ( aerts 1985 , 1986 , 1987 ) . this quantum machine has been presented in different occasions meanwhile ( aerts 1988a , b , 1991a , 1995 ) and therefore we shall only , for the sake of completeness , introduce it shortly here .the machine that we consider consists of a physical entity that is a point particle that can move on the surface of a sphere , denoted , with center and radius .the unit - vector where the particle is located on represents the state of the particle ( see fig .for each point , we introduce the following measurement .we consider the diametrically opposite point , and install a piece of elastic of length 2 , such that it is fixed with one of its end - points in and the other end - point in .0.7 cm 3 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 12 pt fig . 1 : a representation of the quantum machine . in ( a ) the physical entity in state in the point , and the elastic corresponding to the measurement is installed between the two diametrically opposed points and . in( b ) the particle falls orthogonally onto the elastic and stick to it . in ( c )the elastic breaks and the particle is pulled towards the point , such that ( d ) it arrives at the point , and the measurement gets the outcome . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ once the elastic is installed , the particle falls from its original place orthogonally onto the elastic , and sticks on it ( fig 1,b ) .then the elastic breaks and the particle , attached to one of the two pieces of the elastic ( fig 1,c ) , moves to one of the two end - points or ( fig 1,d ) . depending on whether the particle arrives in ( as in fig 1 ) or in , we give the outcome or to .we can easily calculate the probabilities corresponding to the two possible outcomes .therefore we remark that the particle arrives in when the elastic breaks in a point of the interval ( which is the length of the piece of the elastic between and the point where the particle has arrived , or ) , and arrives in when it breaks in a point of the interval ( ) .we make the hypothesis that the elastic breaks uniformly , which means that the probability that the particle , being in state , arrives in , is given by the length of divided by the length of the total elastic ( which is 2 ) .the probability that the particle in state arrives in is the length of ( which is ) divided by the length of the total elastic .if we denote these probabilities respectively by and we have : these transition probabilities are the same as the ones related to the outcomes of a stern - gerlach spin measurement on a spin quantum particle , of which the quantum - spin - state in direction , denoted by , and the measurement corresponding to the spin measurement in direction , is described respectively by the vector and the self adjoint operator of a two - dimensional complex hilbert space . 
we can easily see now the two aspects in this quantum machine that we have identified in the hidden measurement approach to give rise to the quantum structure .the state of the particle is effectively changed by the measuring apparatus ( changes to or to under the influence of the measuring process ) , which identifies the first aspect , and there is a lack of knowledge on the interaction between the measuring apparatus and the particle , namely the lack of knowledge of were exactly the elastic will break , which identifies the second aspect .we can also easily understand now what is meant by the term ` hidden measurements ' .each time the elastic breaks in one specific point , we could identify the measurement process that is carried out afterwards as a hidden measurement .the measurement is then a classical mixture of the collection of all measurement : namely consists of choosing at random one of the and performing this chosen .first of all we remark that we have shown in our group in brussels that such a hidden measurement model can be built for any arbitrary quantum entity ( aerts 1985 , 1986 , 1987 , coecke 1995a , b , c ) . however , the hidden measurement formalism is more general than standard quantum theory .indeed , it is very easy to produce quantum - like structures that can not be represented in a complex hilbert space ( aerts 1986 ) . if the quantum structure can be explained by the presence of a lack of knowledge on the measurement process , we can go a step further , and wonder what types of structure arise when we consider the original models , with a lack of knowledge on the measurement process , and introduce a variation of the magnitude of this lack of knowledge .we have studied the quantum machine under varying ` lack of knowledge ' , parameterizing this variation by a number ] and ] is given and is the probability distribution corresponding to .we cut , by means of a constant function , a piece of the function , such that the surface contained in the cutoff piece equals ( step 1 of fig 3 ) .we move this piece of function to the -axis ( step 2 of fig 3 ) , and then renormalize by dividing by ( step 3 of fig 3 ) ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suppose that is given , and the state of the quantum system is described by the wave function and is the corresponding probability distribution ( hence ) .we cut , by means of a constant function , a piece of the function , such that the surface contained in the cutoff piece equals ( see step 1 of fig 3 ) .we move this piece of function to the -axis ( see step 2 of fig 3 ) .and then we renormalize by dividing by ( see step 3 of fig 3 ) .if we proceed in this way for smaller values of , 
we shall finally arrive at a delta - function for the classical limit , and the delta - function is located in the original maximum of the quantum probability distribution .we want to point out that the state of the physical system is not changed by this -procedure , it remains always the same state , representing the same physical reality .it is the regime of lack of knowledge going together with the detection measurement that changes with varying . for regime is one of maximum lack of knowledge on the process of localization , and this lack of knowledge is characterized by the spread of the probability distribution . for an intermediate value of , between 1 and 0 , the spread of the probability distribution has decreased ( see fig3 ) and for zero fluctuations the spread is 0 .let us also try to see what becomes of the non - local behavior of quantum entities taking into account the classical limit procedure that we propose .suppose that we consider a double slit experiment , then the state of a quantum entity having passed the slits can be represented by a probability function of the form represented in fig 4 .we can see that the non - locality presented by this probability function gradually disappears when becomes smaller , and in the case where has only one maximum finally disappears completely .0.7 cm 3 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ = 12 pt fig .4 : a representation of the probability distribution corresponding to the state of a quantum system that has passed a double slit .we show the three different steps of the procedure ( fig 3 ) in this case . 
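the cutting procedure of fig. 3 can be made explicit on a discretized density. the sketch below assumes one particular reading of the construction, namely that the level is chosen so that the piece of the density lying above it carries a fraction epsilon of the total probability and that this piece is then renormalized; under that reading the original density is recovered for epsilon equal to 1 and the mass concentrates at the global maximum as epsilon goes to 0. this is our reading of the figure, not code from the paper, and all names are ours.

```python
# discretized sketch of the epsilon-procedure of fig. 3 (our reading, not the
# paper's code): pick the constant level c so that the part of the density above
# c has mass epsilon, keep only that part, and renormalize.  epsilon = 1 returns
# the original density, epsilon -> 0 concentrates the mass at the maximum.
import numpy as np

def epsilon_truncate(p, dx, eps):
    lo, hi = 0.0, float(p.max())
    for _ in range(60):                      # bisection for the cutting level c
        c = 0.5 * (lo + hi)
        mass = np.sum(np.clip(p - c, 0.0, None)) * dx
        if mass > eps:
            lo = c
        else:
            hi = c
    cut = np.clip(p - 0.5 * (lo + hi), 0.0, None)
    return cut / (np.sum(cut) * dx)          # renormalized density

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
p = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2.0 * np.pi)   # a single-peaked density

for eps in (1.0, 0.5, 0.1, 0.01):
    q = epsilon_truncate(p, dx, eps)
    mean = np.sum(x * q) * dx
    spread = np.sqrt(np.sum((x - mean) ** 2 * q) * dx)
    print(f"epsilon={eps:5.2f}   spread={spread:.3f}")     # spread shrinks as epsilon -> 0
```

applied to a symmetric two-peaked density of the double-slit type shown in fig. 4, the same procedure keeps both peaks for every value of epsilon, in line with the remark that follows, namely that the non-locality only disappears when one maximum dominates.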
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ when there are no fluctuations on the measuring apparatus used to detect the particle , it shall be detected with certainty in one of the slits , and always in the same one .if has two maxima ( one behind slit 1 , and the other behind slit 2 ) that are equal , the non - locality does not disappear .indeed , in this case the limit - function is the sum of two delta - functions ( one behind slit 1 and one behind slit 2 ) .so in this case the non - locality remains present even in the classical limit .if our procedure for the classical limit is a correct one , also macroscopic classical entities can be in non - local states .how does it come that we do nt find any sign of this non - locality in the classical macroscopic world ?this is due to the fact that the set of states , representing a situation where the probability function has more than one maximum , has measure zero , compared to the set of all possible states , and moreover these states are ` unstable ' .the slightest perturbation will destroy the symmetry of the different maxima , and hence shall give rise to one point of localization in the classical limit .also classical macroscopic reality is non - local , but the local model that we use to describe it gives the same statistical results , and hence can not be distinguished from the non - local model .it is interesting to consider the violation of bell s inequalities within the hidden - measurement formalism .the quantum machine , as presented in section 3 , delivers us a macroscopic model for the spin of a spin quantum entity and starting with this model it is possible to construct a macroscopic situation , using two of these models coupled by a rigid rod , that represents faithfully the situations of two entangled quantum particles of spin ( aerts 1991b ) .the ` non - local ' element is introduced explicitly by means of a rod that connects the two sphere - models .we also have studied this epr situation of entangled quantum systems by introducing the -variation of the amount of lack of knowledge on the measurement processes and could show that one violates the bell - inequalities even more for classical but non - locally connected systems , that is , .this illustrates that the violation of the bell - inequalities is due to the non - locality rather then to the indeterministic character of quantum theory . and that the quantum indeterminism ( for values of greater than 0 ) tempers the violation of the bell inequalities ( aerts d. , aerts s. , coecke and valckenborgh 1996 , aerts , coecke , durt and valckenborgh , 1997a , b )this idea has been used to construct a general representation of entangled states ( hidden correlations ) within the hidden measurement formalism ( coecke 1996 , 1998 and coecke , dhooghe and valckenborgh 1997 ) .accardi , l. , 1982 , _ nuovo cimento _ , * 34 * , 161 .aerts , d. , 1985 , a possible explanation for the probabilities of quantum mechanics and a macroscopic situation that violates bell inequalities " , in _ recent developments in quantum logic _ , eds .mittelstaedt , p. 
et al ., in _ grundlagen der exacten naturwissenschaften _ , vol.6 , wissenschaftverlag , bibliographisches institut , mannheim , 235 .aerts , d. , 1987 , the origin of the non - classical character of the quantum probability model " , in _ information , complexity , and control in quantum physics _ , eds .blanquiere , a. et al . , springer - verlag .aerts , d. , 1988a , the physical origin of the epr paradox and how to violate bell inequalities by macroscopic systems " , in _symposium on the foundations of modern physics _ ,lahti , p. et al ., world scientific , singapore .aerts , d. , 1988b , the description of separated systems and a possible explanation for the probabilities of quantum mechanics " , in _ microphysical reality and quantum formalism _ , eds . van der merwe et al . , kluwer academic publishers .aerts , d. , 1991a , a macroscopic classical laboratory situation with only macroscopic classical entities giving rise to a quantum mechanical probability model " , in _ quantum probability and related topics , vol .vi , _ ed .accardi , l. , world scientific , singapore .aerts , d. , 1991b , a mechanistic classical laboratory situation violating the bell inequalities with 2 , exactly ` in the same way ' as its violations by the epr experiments " , _ helv .acta _ , * 64 * , 1 .aerts , d. , aerts , s. , coecke , b. and valckenborgh , f. , 1996 , the meaning of the violation of bell inequalities : non - local correlation or quantum behavior ? " , preprint , clea , brussels free university , krijgskundestraat 33 , brussels .aerts , d. and durt , t. , 1994b , quantum , classical and intermediate : a measurement model " , in _ 70 years of matter - wave _ , eds .laurikainen , k.v ., montonen , c. and sunnarborg , k. , editions frontieres , gives sur yvettes , france .aerts , d. , durt , t. and van bogaert , b. , 1993 , quantum probability , the classical limit and non - locality " , in _ on the foundations of modern physics 1993 _ , ed .hyvonen , t. , world scientific , singapore , 35 .coecke , b. , dhooghe , b. and valckenborgh , f. , 1997 , classical physical entities with a quantum description " , in _ fundamental problems in quantum physics ii _ ,ferrero , m. and van der merwe , a. , kluwer academic , dordrecht , 103 . | in the hidden measurement formalism that we develop in brussels we explain the quantum structure as due to the presence of two effects , ( a ) a real change of state of the system under influence of the measurement and , ( b ) a lack of knowledge about a deeper deterministic reality of the measurement process . we show that the presence of these two effects leads to the major part of the quantum mechanical structure of a theory describing a physical system where the measurements to test the properties of this physical system contain the two mentioned effects . we present a quantum machine , where we can illustrate in a simple way how the quantum structure arises as a consequence of the two effects . we introduce a parameter that measures the amount of the lack of knowledge on the measurement process , and by varying this parameter , we describe a continuous evolution from a quantum structure ( maximal lack of knowledge ) to a classical structure ( zero lack of knowledge ) . we show that for intermediate values of we find a new type of structure that is neither quantum nor classical . 
we analyze the quantum paradoxes in the light of these findings and show that they can be divided into two groups : ( 1 ) the group ( measurement problem and schrödinger s cat paradox ) where the paradoxical aspects arise mainly from the application of standard quantum theory as a general theory ( e.g. also describing the measurement apparatus ) . this type of paradox disappears in the hidden measurement formalism . ( 2 ) a second group collecting the paradoxes connected to the effect of non - locality ( the einstein - podolsky - rosen paradox and the violation of bell inequalities ) . we show that these paradoxes are internally resolved because the effect of non - locality turns out to be a fundamental property of the hidden measurement formalism itself . fund and clea , brussels free university , krijgskundestraat 33 , 1160 brussels , belgium , e - mail : diraerts.ac.be |
although sometimes sidestepped , it is a general requirement in studying non - vacuum spacetimes in general relativity that the energy momentum tensor , that is , the matter ( field ) content , should have a clear , although possibly highly idealized , physical interpretation . among these choices , the case where matter is described as a large ensemble of particles that interact only through the gravitational field that they themselves , at least partially , create , is of particular interest , both because of its usefulness in modeling physical systems such as star or galaxy clusters , and because of the possibility of a relatively detailed analysis , at least in some restricted cases . as usual in theoretical treatments , one starts by imposing as many restrictions as are compatible with the central idea , and then tries to generalize from these cases . in this respect , the restriction to static spherically symmetric systems provides an important simplification , although even with this restriction the problem is far from trivial , and further restrictions have been imposed in order to make significant advances . one of the first concrete examples is that provided by the einstein model , where the particles are restricted to move on circular orbits . this model is static , and it is not easy to generalize as such to include a dynamical evolution of the system . this generalization can , however , be achieved if the particle world lines are restricted to a shell of vanishing thickness ( `` thin shell '' ) , as considered by evans in . the analysis in , although motivated by the einstein model , considers only shells where all the component particles have the same value of their ( conserved ) angular momentum . in a recent study of the dynamics of spherically symmetric thin shells of counter rotating particles , of which is an example , it was found that the analysis can be extended to shells where the particles have angular momenta that take values on a discrete ( but possibly also continuous ) set , and is not restricted to a single value . it was also found that in the non trivial thin shell limit of a thick einstein shell the angular momentum of the particles acquires a unique continuous distribution , and , therefore , the models in and are not approximations to an einstein model . a relevant question then is what , if any , are the ( thick ) shells that are approximated by those in and . in this paper we look for an answer to this question by considering a generalization of the einstein model where instead of circular orbits we impose , at first , the restriction to a single value of the angular momentum . the particle content is described by a distribution function in phase space , and , because of the assumption of interaction only through the mean gravitational field , must satisfy the einstein - vlasov equations . in the next section we set up the problem and show that it leads to a well defined set of equations .
in section iii we set up and analyze a particular model , obtaining expansions for the metric functions at the boundary of the support of , appropriate for numerical analysis .further properties are analyzed in section iv , where we show that all these shells have finite thickness .section v contains numerical results for a generic example .the `` thin shell '' limit is considered in section vi , both through analytic arguments and a concrete numerical example , with the results showing total agreement with the thin shell results of .a further comparison with is carried out in section vii , where the stability of a shell approaching the thin shell limit is considered . the generalization to more than one value of is given in section viii , where we find that particles with different values of may be distributed on shells that overlap completely , or do so only partially or not at all .numerical examples and a comparisons with are finally developed in section ix . some comment and conclusionsare given in section x.the metric for a static spherically symmetric spacetime may be written in the form , where is the line element on the unit sphere and . for a static , spherically symmetric system ,the matter contents , in this case equal mass collisionless particles , is microscopically described by a distribution function , where are the components of the particle momentum , taken per unit mass .then , as a consequence of the assumption that the particles move along geodesics of the space time metric , the distribution function satisfies the vlasov equation , which , in this case , takes the form , where correspond to .it is understood in ( [ vlasov1 ] ) that is to be computed using , so as to satisfy the `` mass shell restriction '' , where is the particles mass .therefore , in what follows we set , }\ ] ] we also notice that , where is proper time along the particle s world line .+ the einstein equations for the system are , with the energy momentum tensor given by , where is the determinant of , and . equations ( [ vlasov1 ] ) , ( [ einst1 ] ) and ( [ tmunu1 ] ) define the einstein - vlasov system restricted to a static spherically symmetric space time , with the metric written in the form ( [ one1 ] ) .the assumption that the metric is static and spherically symmetric implies conservation of the particle s energy , }\end{aligned}\ ] ] and of the square of its angular momentum per unit mass , \ ] ] it is easy to check that the ansatz , where , and are the functions of , and given by ( [ energy1],[lsquare ] ) , solves the vlasov equation for an arbitrary function . + to construct and solve explicit models based on ( [ ansatz01 ] ) for the metric ( [ one1 ] ) , it is convenient to change integration variables in ( [ tmunu1 ] ) .we set , and write ( [ tmunu1 ] ) in the form , where we should set , andreasson and rein have explored the properties of models where takes the form , in this note we consider a different type of models , based on the ansatz , where is the heaviside step function , and is dirac s ; namely , we assume that takes only the single value , and that there is an upper bound on , given by . 
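for reference , one convenient way to write the quantities just introduced ( an editorial reconstruction chosen to be consistent with the combination r^2 e^2 - ( l^2 + r^2 ) b that appears repeatedly in the equations below , with g = c = 1 ; the original gauge and notation are not reproduced exactly ) is
\[ ds^2 = -b(r)\,dt^2 + \left(1-\frac{2m(r)}{r}\right)^{-1} dr^2 + r^2\,d\omega^2 , \qquad e = b(r)\,\frac{dt}{d\tau } , \qquad l = r^2\sin\theta\,\frac{d\varphi}{d\tau } , \]
so that the mass - shell condition restricts a particle with parameters ( e , l ) to the region where
\[ r^2 e^2 - \left( l^2 + r^2 \right) b(r) \ge 0 , \]
which is the `` allowed region '' referred to below .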
is assumed to be a smooth function of .we then have , where depends on and is given by , if , and otherwise .this simply states the fact that only in those regions where a ( test ) particle with energy and angular momentum can actually move .we may now use the previous results to construct simple models and analyze their interpretation for a range of possible parameters .this analysis may be carried out in a number of ways . herewe choose the following ; we first use a gauge freedom in the metric ( [ one1 ] ) to set , then , from the einstein equations and the form ( [ tmunu2 ] ) of , we find two independent equations for and , where , is the energy density , and is the radial pressure , given by ( [ tmunu2 ] ) , with . there is also an equation for , but , as can be checked , this is not independent of ( [ two2 ] ) . equations ( [ two2 ] ) are deceivingly simple , because the explicit dependence of and on and is in general quite complicated . herewe consider a simple example and propose a method for constructing the solutions , that is illustrated by the example .it can be seen that some simplification is attained if we choose , where is a constant . with this choicewe may perform the integrals in ( [ tmunu2 ] ) explicitly and , after some simplifications , we get , \sqrt{r^2 e_0 ^ 2 - ( l_0 ^ 2+r^2)b}}{r^3 b } \\ \label{two4b } \frac{d b}{dr } & = & \frac{2 m b}{r(r-2 m ) } + \frac{2q_2\left[r^2 e_0 ^ 2- ( l_0 ^ 2+r^2)b\right]^{3/2}}{r^3(r-2m)}\end{aligned}\ ] ] where .we also find , \sqrt{r^2 e_0 ^ 2 - ( l_0 ^ 2+r^2)b}}{4 \pi r^5 b } \nonumber \\p(r ) & = & \frac{q_2\left[r^2 e_0 ^ 2- ( l_0 ^ 2+r^2)b\right]^{3/2}}{4 \pi b r^5 } \nonumber \\p_t(r ) & = & \frac{3 q_2 l_0 ^ 2 \sqrt{r^2 e_0 ^ 2- ( l_0 ^2+r^2)b}}{8 \pi r^5}\end{aligned}\ ] ] considering ( [ two4a],[two4b ] ) , we find that it is a simple , but rather difficult to handle , system of equations for and .we have not found closed ( analytical ) solutions for the system , and , therefore , we must resort to numerical methods .the application of these methods requires , however , considering and solving several subtleties inherent in the system .as indicated above , in all these equations ( ( [ two4a],[two4b ] ) and ( [ two5 ] ) ) , the terms involving should be set equal to zero if .we notice that for we have with , and , where is also a constant , corresponding to the standard schwarzschild solution . for a shell type solution , these solutions correspond to the inner and outer regions , to be matched to the region where .when , since we must have , we must also have , but , even though we might end up with , and still have all equations satisfied . 
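since closed solutions are not available , the system can only be explored numerically ; the following is a rough sketch of such an integration . the right - hand side is the single - component limit of the two - component equations written out in section viii ( one amplitude set to zero ) , the parameter values are purely illustrative and not those used in the paper , and the crude seeding slightly inside the allowed region only stands in for the boundary expansions derived below .

import numpy as np

# illustrative parameters (not the values used in the paper)
E0, L0, Q2 = 0.96, 3.0, 0.05     # particle energy, angular momentum, amplitude
M1 = 0.0                          # inner (schwarzschild) mass; empty interior

def rhs(r, y):
    """right-hand side for (m, b); particle terms are switched off
    wherever r^2 E0^2 - (L0^2 + r^2) b <= 0 (outside the allowed region)."""
    m, b = y
    brk = r**2 * E0**2 - (L0**2 + r**2) * b
    if brk <= 0.0:
        dm = 0.0
        db = 2.0 * m * b / (r * (r - 2.0 * m))
    else:
        root = np.sqrt(brk)
        dm = Q2 * (2.0 * (L0**2 + r**2) * b + r**2 * E0**2) * root / (r**3 * b)
        db = (2.0 * m * b / (r * (r - 2.0 * m))
              + 2.0 * Q2 * brk**1.5 / (r**3 * (r - 2.0 * m)))
    return np.array([dm, db])

def rk4_step(r, y, h):
    k1 = rhs(r, y)
    k2 = rhs(r + 0.5 * h, y + 0.5 * h * k1)
    k3 = rhs(r + 0.5 * h, y + 0.5 * h * k2)
    k4 = rhs(r + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# crude seeding: start a little inside the allowed region instead of using
# the boundary series expansions employed in the actual computations
r = 8.0
b_crit = r**2 * E0**2 / (L0**2 + r**2)      # value of b at which the bracket vanishes
y = np.array([M1, 0.999 * b_crit])          # push b slightly below the critical value
h = 1.0e-3
while r**2 * E0**2 - (L0**2 + r**2) * y[1] > 0.0 and r < 30.0:
    y = rk4_step(r, y, h)
    r += h
print("integration stopped at r =", r, " m(r) =", y[0], " b(r) =", y[1])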
for shell like solutions , either with an empty interior or with a central mass ( black hole ) , a further difficulty can be seen considering that there should exist an `` allowed region '' where , with taking values in the interval , where and are , respectively , the inner and outer radii of the shell .we must impose continuity in both and to avoid functions in .this implies that is continuous in and approaches continuously the value zero at the boundaries .therefore , both and are also continuous inside and at the boundaries of this interval , and actually we have .we also find that should be continuous , but must be singular , and this makes the construction of numerical solutions where we try to fix from the beginning the values of and rather difficult .nevertheless , the above analysis indicates that for , but , we should have , where , , and are constants and and are functions of that vanish respectively faster than and for .it is straightforward to extend this analysis to higher order by imposing consistency between the right and left hand sides of ( [ two4a],[two4b ] ) as .we find , where , , and are constants , and and stand for higher order terms .the constants appearing in ( [ two7 ] ) are not independent .they may be written , e.g. , in terms of , , , and .we notice that is the schwarzschild mass for the region inside the shell ( ) . moreover , the system ( [ two4a],[two4b ] ) is invariant under the the rescaling , , and .the condition that corresponds to the inner boundary of the shell implies , similarly , we find , the explicit expressions for and are also easily obtained but are rather long and will not be included here , although they were used in the numerical computations described below .an interesting question regarding the model of the previous section is related to the possible values that the thickness of the shells can attain .this may be analyzed by considering the limit of solutions of the system ( [ two4a],[two4b ] ) as , under the restrictions that , and .the first , according to ( [ two4a],[two4b ] ) and ( [ two5 ] ) , implies , and .we remark that .therefore , must approach monotonically from below .consider first the case .replacing in the first equation in ( [ two4a],[two4b ] ) , for large we find that approaches a constant value , and , therefore , grows linearly with .but then , replacing in the second equation in ( [ two4a],[two4b ] ) , we find that decreases as , leading to a logarithmic growth in , incompatible with the assumed conditions. therefore , any possible solution should have .then , for large we should have , with , as . replacing now in ( [ two4a],[two4b ] ) , to leading order we find , and this implies that as . then , using again ( [ two4a],[two4b ] ) , we should have , and this implies , which contradicts the assumption .thus we conclude that the equation , must always be satisfied for finite , and , therefore , _ all shells constructed in accordance with the prescription ( [ two3 ] ) have finite mass and finite thickness_. 
we nevertheless believe that this results is more general , and applies to all shells satisfying the ansatz ( [ model01 ] ) , although we do not have a complete proof of this statement .as indicated , we do not have closed solutions of the equations for and , even for the simple model of the previous section .nevertheless , since ( [ two4a],[two4b ] ) is a first order ode system , we can apply numerical methods to analyze it .we may use the expansions ( [ two7 ] ) , ( disregarding the terms in and ) , to obtain appropriate initial values for and , for close to , in the non trivial region .we may illustrate this point with a particular example .we take , , , and . using these values and ( [ two7 ] )( truncated as indicated above ) , we find , and ( actually , the computations were carried out to 30 digits , using a runge - kutta integrator ) .the numerical results are plotted in figure 1 and figure 2 .one of the motivations for studying the type of shells considered in this paper is the possible existence of a non trivial `` thin shell '' limit , where the thickness of the shell goes to zero , with the restriction to a single or a finite set of values of , and how this limit compares with the thin shells considered in and .we remark that the existence of thin shell limits of einstein - vlasov systems has already been analyzed in the literature . herewe are interested not only in the existence of this limit for our particular models , but especially in the limiting values of the parameters characterizing our shells .since this type of analysis is not immediately included in , e.g , , we consider it relevant to provide an explicit proof of the properties of our models in the thin shell limit .we first recall that for a static thin shell constructed according to evans prescriptions , we have the following relation between the radius , inner ( ) and outer ( ) mass , and angular momentum of the particles , we may now prove that the non trivial thin shell limits of the shells constructed according to the prescription ( [ two3 ] ) effectively coincide with the evans shells of reference as follows .we first take the derivative of ( [ two4b ] ) , and then use ( [ two4a ] ) and ( [ two4b ] , to obtain , }{r(e^2 r^2 + 2(l^2+r^2)b)(r-2m)}\frac{dm}{dr } \nonumber \\ & & -\frac{4 } { r^2(r-2m)}\left[r(r-2m)\frac{db}{dr}-m b\right]\end{aligned}\ ] ] next let and be , respectively , the inner and outer radii of the shell , and and the corresponding masses inside and outside the shell .then , for we have , and , the idea now is to use the fact that , on this account we rewrite ( [ thin01 ] ) in the form , }\frac{d^2b}{dr^2 } \nonumber \\ & & + \frac{4 ( e^2 r^2 + 2b(l^2+r^2))\left[r(r-2m)\frac{db}{dr}-m b\right ] } { r ( r-2m)\left [ r(2 e^2 + ( r^2+l^2)b)\dfrac{db}{dr}+2(4e^2r^2+(2l^2-r^2)b)\right]}\end{aligned}\ ] ] and integrate both sides from to .but now we notice that while both and are rapidly changing but bounded in , the change of , and in that interval is only of order . 
we may then choose a point , with , and set , and , except in the arguments of , , and , in ( [ thin04 ] ) , as this introduces errors at most of order , in the factors of , and , and in the last term in the right hand side of ( [ thin04 ] ) .similarly , we may set in ( [ thin04 ] ) , to obtain , up to terms of order , } { r ( r-2m)\left[r(r^2+l^2)\dfrac{db}{dr}+2(r^2 + 2 l^2)b_0\right]}\end{aligned}\ ] ] and , again , we notice that the last term on the right of ( [ thin05 ] ) gives a contribution of order .we then conclude that , the integration of the terms in , and , is now straightforward .we use next ( [ thin02 ] ) and the fact that in this limit and , to obtain , solving this equation for , we finally find , which is , precisely , the relation satisfied by the parameters of the shells of ( [ five1 ] ) . we can also check this result , and , in turn , the accuracy of numerical codes , by directly considering initial data for the numerical integration that effectively lead to shells where the thickness is a small fraction of the radius .a particular example is given in figure 3 , where the values of the initial data is also indicated .we can see that the mass increases by about percent , while the thickness of the shell is less than percent of the shell radius .we can check that these results are in agreement with ( [ five1 ] ) . solving this for , } { ( r-2m_1)(r^2 + 3 \tilde{l}_0 ^ 2)^2 } \ ] ] and replacing , , and ,we find in very good agreement with the numerical results quoted in figure 3 .consider a shell approaching the thin shell limit .we restrict to the case of vanishing inner mass .if and are , respectively the inner and outer boundaries of the shell , the matching conditions in the absence of singular shells at and imply that both and and their first derivatives should be continuous at both and . the form ( [ one1 ] ) of the metric with the choice ( [ two1 ] ) imply that for we have , where is a constant .then , at we have , for , the metric takes the form , where is another constant satisfying , consider now a test particle moving along a geodesic of the shell space time , with 4-velocity , with . without loss of generality we may choose .then we have the constants of the motion : and the normalization of implies , therefore , for we have , and the particle radial acceleration is given by , if we assume now that the shell is close to the thin shell limit , with angular momentum , radius , ( with ) , and mass , then we should have , then , for , and we find , similarly , in the region , for we find , and , therefore , all these shells are stable under `` single particle evaporation '' , in total agreement with the results of .a related problem is that of the dynamical stability of the shell as a whole , as was also analyzed in .there , the shells considered where `` thin '' , and therefore , the motion was described by ordinary differential equations for the shell radius as a function , e.g. , of proper time on the shell , which allowed for a significant simplification of the treatment of the small departures from the equilibrium configurations .unfortunately , the corresponding equations of motion for the shells considered here would be considerably more complicated and their analysis completely outside the scope of the present research . 
we , nevertheless , expect that such treatment , if appropriately carried out , would also agree with the results found in as the `` thin shell '' limit is approached .it is rather simple to extend the analysis of the previous sections to the case where the angular momentum of the particles takes on a discrete , finite set of values . instead of ( [ model01 ] ), we have , where the functions are arbitrary , with , and finite .we will restrict to the case of two separate values , ( n=2 ) , since , as will be clear from the treatment , the extension to a larger number of components is straightforward .we are actually interested in the behaviour of these shells as they approach a common thin shell limit .therefore , we will further simplify our ansatz to the form , where are constants . with this choice we may perform the integrals in ( [ tmunu2 ] ) explicitly and , after some simplifications , we get , \sqrt{r^2 e_1 ^ 2 - ( l_1 ^ 2+r^2)b}}{r^3 b } \nonumber \\ & & + \frac{c_2 \left[2(l_2 ^ 2+r^2)b+r^2e_2 ^ 2\right]\sqrt{r^2 e_2 ^ 2 - ( l_2 ^ 2+r^2)b}}{r^3 b } \nonumber \\\frac{d b}{dr } & = & \frac{2 m b}{r(r-2 m ) } \nonumber \\ & & + \frac{2c_1\left[r^2 e_1 ^ 2- ( l_1 ^ 2+r^2)b\right]^{3/2}}{r^3(r-2 m ) } + \frac{2c_2\left[r^2 e_2 ^ 2- ( l_2 ^ 2+r^2)b\right]^{3/2}}{r^3(r-2m)}\end{aligned}\ ] ] where , .we also find , \sqrt{r^2 e_1 ^ 2 - ( l_1 ^ 2+r^2)b}}{4 \pi r^5 b } \nonumber \\ & & + \frac{c_2 \left[2(l_2 ^ 2+r^2)b+r^2e_2 ^ 2\right]\sqrt{r^2 e_2 ^ 2 - ( l_2 ^ 2+r^2)b}}{4 \pi r^5 b } \nonumber \\p(r ) & = & \frac{c_1\left[r^2 e_1 ^ 2- ( l_1 ^ 2+r^2)b\right]^{3/2}}{4 \pi b r^5 } + \frac{c_2\left[r^2 e_2 ^ 2- ( l_2 ^ 2+r^2)b\right]^{3/2}}{4 \pi b r^5 } \nonumber \\p_t(r ) & = & \frac{3 c_1 l_1 ^ 2 \sqrt{r^2 e_1 ^ 2- ( l_1 ^ 2+r^2)b}}{8 \pi r^5 } + \frac{3 c_2 l_2 ^ 2 \sqrt{r^2 e_2 ^ 2- ( l_2 ^ 2+r^2)b}}{8 \pi r^5}\end{aligned}\ ] ] it is clear that we recover the results of the previous sections if we set either or equal to zero .+ it will be convenient to define separate contributions to the density , and , for the particles with and .\sqrt{r^2 e_1 ^ 2 - ( l_1 ^ 2+r^2)b}}{4 \pi r^5 b } \nonumber \\\rho_2(r ) & = & \frac{c_2 \left[2(l_2 ^ 2+r^2)b+r^2e_2 ^ 2\right]\sqrt{r^2 e_2 ^ 2 - ( l_2 ^ 2+r^2)b}}{4 \pi r^5 b}\end{aligned}\ ] ] then , provided the integrations cover the supports of both and , we have , where these expressions may be considered as the contributions to the mass from each class of particles .this will be used in the next section to compare numerical results with the thin shell limit .equations ( [ twodos4 ] ) may be numerically solved for appropriate values of the constants , , and , and initial values , i.e. , for some , of and .we remark that , as in the previous sections , it is understood in ( [ twodos4 ] ) , ( and also in ( [ twodos5 ] ) ) , that both and ^{3/2}$ ] must be set equal to zero for . in this general case , it is clear that the shells ( where by a `` shell '' we mean here the set of particles having the same angular momentum ) may be completely separated or they may overlap only partially .we are particularly interested in the limit of a common thin shell for the chosen values of . 
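numerically , the generalization only adds one term of the same form per angular momentum component to the right - hand side ; a minimal sketch , mirroring the single - component sketch given earlier , with editorially chosen symbols and purely illustrative numbers :

import numpy as np

def rhs_multi(r, y, components):
    """(m, b) right-hand side for a list of components (c_i, e_i, l_i);
    each term is switched off where r^2 e_i^2 - (l_i^2 + r^2) b <= 0."""
    m, b = y
    dm = 0.0
    db = 2.0 * m * b / (r * (r - 2.0 * m))
    for c_i, e_i, l_i in components:
        brk = r**2 * e_i**2 - (l_i**2 + r**2) * b
        if brk > 0.0:
            dm += c_i * (2.0 * (l_i**2 + r**2) * b + r**2 * e_i**2) * np.sqrt(brk) / (r**3 * b)
            db += 2.0 * c_i * brk**1.5 / (r**3 * (r - 2.0 * m))
    return np.array([dm, db])

# two components with different angular momenta (illustrative values only)
components = [(0.05, 0.96, 3.0), (0.03, 0.97, 4.0)]
print(rhs_multi(8.0, np.array([0.0, 0.78]), components))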
one way of ensuring that at least one of the shells completely overlaps the other is the following .we choose an inner mass , and an inner radius .this implies , while is arbitrary .if we choose now arbitrary values for and , the density will vanish at if we choose , actually we also need to impose , to make sure that is the _ inner _ and not the _ outer _ boundary of the shells .we shall assume from now on that . since , using the same arguments as in the single shell case , the shells have finite extension , it follows that one of the shells will be completely contained in the other . in the next section we display some numerical results , both for thick shells that overlap partially , and for shells approaching the thin shell limit .we again find that the limit is associated to large values of the , and that the parameters describing the shells approach the thin shell values found in .as a first example we take , , , and .we also set , for simplicity .then , from ( [ twodos6 ] ) , we set , . finally , we choose , , and carry out the numerical integration .the results obtained indicate that the particles with are contained in the region , while for the corresponding range is . the resulting value of the external mass in . in figure 4we display the total density as a function of ( solid curve ) , as well as the contributions and to the density from the particles with respectively ( dashed curve ) and with ( dotted curve ) . as an illustration of the approach to the thin shell configuration we considered again the previous values , , , , , but choose , , and carried out the numerical integration .figure 5 displays the functions ( solid curve ) , ( dashed curve ) , and ( dotted curve ) .we see that now the shell extends only to the region , i.e. , its thickness is less than one percent of its radius . the mass , on the other hand , increases by roughly a factor of two , since . we may compare these results with those of the thin shell limit of as follows .it can be seen from ( 29 ) and ( 31 ) in that for a thin shell of radius , inner mass and outer mass , with two components with angular momenta , and , the ratio of the contributions of each component to the total mass is given by , where is given by ( [ five1 ] ) .the numerical integration gives , . if now take , and solve ( [ ratio1 ] ) for we find , , while replacement in ( [ five1 ] ) gives , which we consider as a good agreement , with a discrepancy of the order of the ratio of thickness to radius .the general conclusion from this work is that one can effectively construct a wide variety of models satisfying the restriction that takes only a finite set of values , and that they do seem to contain the models used in as appropriate thin shell limits .we remark also that the starting point for our construction is a variant of the ansatz used in , where is factored in an ( the particle energy ) and an dependent terms .the possibility of multi - peaked structure in the case of more than one value of obtained here is also in correspondence with the general results obtained in .this work was supported in part by grants from conicet ( argentina ) and universidad nacional de crdoba . rjg and mar are supported by conicet .we are also grateful to h. andreasson for his helpful comments .99 r. j. gleiser , m. a. ramirez _ class .quant . grav _ * 26*,045006(2009 ) a. b. evans _ gen .grav . _ * 8*,155(1977 ) a. einstein , _ ann ._ * 40*,922(1939 ) h. andreasson and g. rein _ class .grav . 
_ * 24*,1809(2007 ) for a recent review of the einstein - vlasov system and references see , for example .h. andreasson , `` the einstein - vlasov system / kinetic theory '' , _ living rev .relativity _ , * 8*,2 ( 2005 ) .http://www.livingreviews.org/lrr-2005-2 , and references therein .h. andreasson , _ commun .phys . _ * 274 * , 409 - 425 ( 2007 ) . | in this paper we study static spherically symmetric einstein - vlasov shells , made up of equal mass particles , where the angular momentum of particles takes values only on a discrete finite set . we consider first the case where there is only one value of , and prove their existence by constructing explicit examples . shells with either hollow or black hole interiors have finite thickness . of particular interest is the thin shell limit of these systems and we study its properties using both numerical and analytic arguments to compare with known results . the general case of a set of values of is also considered and the particular case where takes only two values is analyzed , and compared with the corresponding thin shell limit already given in the literature , finding good agreement in all cases . |
sipms are excellent photosensors for low - intensity light detection . with increasing light intensities , however , their response becomes non - linear , requiring monitoring . in the calice ahcal prototype we installed an led / pin - diode - based monitoring system to measure the sipm gain , monitor the sipm response for fixed light intensities and record the full sipm response function when necessary . we also record the reverse bias voltage of each sipm and the temperature measured by five sensors in each layer with a slow - control system , since the sipm response is very sensitive to changes in these parameters . we studied the sipm response function in test beam calibration runs . if we find an analytic function that parameterizes the sipm response , the monitoring system might be simplified . the exact shape would be measured once on the test bench before installation . the raw energy in a cell , measured in units of adc bins , is converted to the mip scale via e_{cell}[\mathrm{mip } ] = \frac{q^{meas}_{cell}[adc]}{c^{mip}_{cell}[adc ] } \cdot f^{-1}_{sat}(q^{meas}_{cell}[pixel ] ) , where f^{-1}_{sat}(q^{meas}_{cell}[pixel ] ) is the non - linear response correction parameterized as a function of pixels . it includes an intercalibration factor matching the pixel and mip calibration scales . we measure the sipm gain with low - intensity led light , the position of the mip peak with muons and the non - linearity by varying the led light intensity . gain and light yield vary with temperature ( reverse bias voltage ) as and , respectively . the standard procedure consists of adjusting and for temperature changes . here , corrections are instantaneous but non - local , since is measured frequently at five positions per layer during a run . since the gain of each cell is measured several times a day , we could also correct for gain changes by . this procedure is local but not instantaneous . figure [ fig : energy ] shows energy and energy resolution measurements of 10 - 50 gev positrons in comparison to simulations . the data is corrected for saturation and temperature effects using cell - wise gain and mip calibration constants and layer - wise average temperature correction factors . the reconstructed energy is linear up to 30 gev . at 50 gev deviations from linearity increase to , suggesting the need for a refined analysis procedure . the energy resolution fits the standard form containing a stochastic term , a constant and a noise term . the bare simulations demonstrate that detector effects are important . simulations that include detector effects , however , are still too optimistic and require refinements . ( figure [ fig : energy ] caption : data ( dots ) , bare simulations ( open circles ) and simulations including detector effects ( triangles ) ; lines show fits , and shaded regions show systematic uncertainties . ) we extract sipm and pin diode raw data from lcio files , perform pedestal subtraction with vcalib runs , apply gain corrections , and use intercalibration constants . for each vcalib value we perform gaussian fits to the sipm and pin diode response to determine mean values and their errors . we plot the pin - diode response versus the sipm response after rescaling the pin - diode values to start at a common origin with a slope of one . so far we have analyzed four calibration runs from the cern test beam in 2006 and 2007 .
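as an aside on how the correction in the formula above enters the reconstruction chain , a minimal sketch is given below ; the analytic saturation curve n_fired = n_pix ( 1 - exp( -n_pe / n_pix ) ) is only a commonly used stand - in inserted for illustration ( the prototype uses the measured response curves discussed in the following section ) , and all calibration constants are placeholders .

import numpy as np

N_PIX = 1156.0          # pixels per sipm (the figure quoted below)
C_MIP_ADC = 300.0       # placeholder mip calibration constant [adc]
GAIN_ADC = 0.3          # placeholder gain: adc counts per fired pixel

def inverse_saturation(n_fired):
    """invert n_fired = N_PIX * (1 - exp(-n_pe / N_PIX)); this analytic form is
    an assumption for illustration only, valid for n_fired < N_PIX."""
    return -N_PIX * np.log(1.0 - n_fired / N_PIX)

def energy_mip(q_meas_adc):
    """raw cell signal in adc -> energy in mips, with a saturation correction
    applied as a multiplicative factor on the pixel scale."""
    n_fired = q_meas_adc / GAIN_ADC            # measured amplitude in pixels
    n_pe = inverse_saturation(n_fired)         # linearized amplitude in pixels
    correction = n_pe / n_fired if n_fired > 0 else 1.0
    return q_meas_adc / C_MIP_ADC * correction

print(energy_mip(50.0), energy_mip(250.0))     # the correction grows with amplitude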
at itep , the response of all 7608 sipms was measured prior to installation into the ahcal prototype using calibrated led light shone directly onto the sipm . figure [ fig : curves ] ( top ) shows the itep measurements after scaling raw data to start at a common origin with slope one . the curves fit to , where the saturation and are free parameters . in the test beam the sipm response is measured with high - gain and low - gain preamplifier settings . since the curves do not fit to , we constructed the function , where , and are free parameters in the fit . the latter is a scale factor that accounts for the mismatch between high - gain and low - gain regions . with simpler forms we obtained fewer successful fits . figures [ fig : curves ] ( middle , bottom ) show response curves of a typical sipm for itep and test beam data , respectively . fitting all available sipms in the four calibration runs , we get 5060 successful fits . in the 31 - 07 - 07 run of all fits were successful , failed due to a small confidence level and failed due to malfunctioning of the sipm or pin diode . for successful fits , parameter ( ) peaks at with fwhm of . apart from a few outliers at higher values , a spike near zero is visible . parameter peaks near with fwhm of , while parameter lies around with fwhm of . figure [ fig : saturation ] ( left ) shows saturation values for the 12 - 07 - 07 and 31 - 07 - 07 calibration runs . for the later run , the saturation peaks near pixels and has a width ( fwhm ) of pixels , while for the earlier run the peak is shifted upward by pixels . this is consistent with expected temperature variations that are not corrected for . in addition , we see a long tail up to pixels in both runs that is also visible in the itep data . this results from a particular batch of sipms with a lower internal resistor causing multiple pixel excitations during illumination . a comparison of saturation values of sipms in modules 315 for four runs taken in 2006 and 2007 shows no degradation , thus confirming stable operation in the test beam . figure [ fig : saturation ] ( right ) shows a comparison of saturation values measured in the 31 - 07 - 07 run and at itep . the itep data peak around pixels with a fwhm of pixels , indicating that nearly all 1156 pixels in the sipms are triggered . here , the sipms were illuminated directly by leds , while the light transport in the test beam is rather complex . the sipm records the light from a 1 mm thick wavelength - shifting ( wls ) fiber coupled via an air gap . though a 0.2 mm air gap is sufficient for a full illumination of all pixels , losses may occur due to imperfect alignment of the fiber and sipm . because of the large discrepancy between itep and test beam data , the non - linearity corrections are presently based on the itep saturation measurements with an additional scaling by the ratio of saturation values measured for each sipm in the test beam and at itep .
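the fit of such response curves is a routine non - linear least - squares problem ; the sketch below fits a simple one - parameter saturating curve to synthetic ( pin - diode , sipm ) pairs with scipy , purely for illustration . the parameterization actually used in the analysis is the richer form described above , with separate high - gain / low - gain behaviour and a relative scale factor , which is not reproduced here .

import numpy as np
from scipy.optimize import curve_fit

def saturated(x, n_sat):
    """saturated sipm amplitude as a function of the (linear) pin-diode amplitude,
    for the simple one-parameter model assumed here."""
    return n_sat * (1.0 - np.exp(-x / n_sat))

# synthetic "measured" points: a sipm response saturating near 1000 pixels
rng = np.random.default_rng(1)
pin_true = np.linspace(50.0, 2500.0, 25)                # rescaled pin-diode amplitude
sipm = 1000.0 * (1.0 - np.exp(-pin_true / 1000.0))
sipm += rng.normal(0.0, 5.0, sipm.size)                  # add a little noise

popt, pcov = curve_fit(saturated, pin_true, sipm, p0=[800.0])
print("fitted saturation level:", popt[0], "+/-", np.sqrt(pcov[0, 0]))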
in order to improve this close to we need to investigate causes for poor fits and test other analytical functions . the present studies show that the sipm operation in the cern test beam is stable . we observe a discrepancy of saturation values measured at itep and in the test beam of about , which may be caused by a misalignment of the wls fiber and the sipm . we need to verify that we can model temperature and voltage changes in the sipm response curves , and we need to extend our studies to include 2008 test beam data at fermilab . at the present level of understanding , a full monitoring system is necessary that allows us to measure the full sipm response at any time . the system , however , may be simplified . two options are under discussion . the first option is based on the present monitoring system but foresees long clear fibers , each illuminating one row of tiles rather than individual tiles . this would reduce the number of leds but it may not achieve sufficient light intensities in all tiles ( see j. zalesak s talk ) . the second option consists of embedding one led per tile , eliminating fibers but requiring a huge number of leds . system tests have started to optimize led positions , check the homogeneity of the response and test different led types . the light calibration is compared to the response of a radioactive source . the system will be temperature controlled . first tests show no cross - talk , but optimization for dynamic range and led uniformity is needed . v. andreev et al . , nucl.instrum.meth.a540 , 368 ( 2005 ) . n. feege , diplom thesis , 78 pp , universität hamburg ( 2008 ) . n. meyer et al . ( calice collaboration ) , can-014 , 5pp ( 2008 ) . t. buanes , phd thesis , 129 pp , university of bergen ( 2008 ) . j. zalesak , proceedings of lcws08 , 4pp ( 2009 ) . | we present herein our experience with the calibration system in the calice ahcal prototype in the test beam and discuss characterizations of the sipm response curves . |
during experimental data processing there is often a task to fit an arbitrary distribution with a single peak to some function with a set of free parameters . this can be necessary for a more precise determination of the peak position or the full width at half maximum ( device resolution ) , or for the signal form approximation using monte carlo events for subsequent analysis of the data distribution , and so on . the available set of fitting functions does not always match the experimental requirements , especially for high statistics ( it is difficult to achieve a suitable confidence level ) . in practice , most often the sum of three gaussian distributions is used to fit such histograms . in most cases this fit provides a suitable confidence level . at least one can add more and more gaussians until the confidence level is admissible . the disadvantage of this method is that every gaussian used is symmetric , so in principle one cannot use a sum of gaussians with a common center to fit asymmetric distributions , and if all gaussians have different centers , then it is difficult to guarantee a single maximum of the fitting function and monotonicity to the right and left of the maximum . another often used function is a spline of two halves of different gaussians . this function is good in the case that the function `` tails '' can be approximated with a gaussian . however , for high statistics most experimental distributions far from the peak have an exponential or power - of - x dependence , which does not match a gaussian . a logarithmic gaussian distribution is also often used ( at slac it is called the `` novosibirsk '' function ) : , \;\ ; \int\limits_{\frac{x_m - x}{x_m - x_p}>0}\!\!\!\!\!f_n{\,\mathrm{d}}x=1 , where is the peak location ( function maximum ) , is the full width at half maximum ( fwhm ) , is the asymmetry parameter , and is the normalization factor . as is seen from ( [ eq : xm - expression ] ) , for the boundary coordinate and all . if , then and . using these notations one can rewrite this function , excluding the inconvenient variable : . the function is equal to 0 for all for which the logarithm argument is negative . possible values of the parameters : , is arbitrary , is arbitrary .
for formulae have an ambiguity of the type , so some parameters should be expanded to the tailor series .{c}\sim \\[-2mm]\scriptstyle\lambda\sim 0\end{array } 1+\frac{h\left|\lambda\right|}{2}+\frac{h^2\lambda^2}{8},\ ] ] therefore {c}\sim \\[-2mm]\scriptstyle\lambda\sim 0\end{array } \lambda\cdot\left[1-\frac{h^2\lambda^2}{8 } + \frac{7h^4\lambda^4}{128}\right],\ ] ] {c}\sim \\[-2mm]\scriptstyle\lambda\sim 0\end{array } \frac{h^2\lambda^2}{8\ln 2}\cdot\left [ 1-\frac{h^2\lambda^2}{3 } \right],\;\;\ ; \frac{\sigma}{\left|\lambda\right| } \begin{array}[t]{c}\sim \\[-2mm]\scriptstyle\lambda\sim 0\end{array } \frac{h}{2\sqrt{2\ln 2}}\cdot\left[1-\frac{h^2\lambda^2}{6 } \right],\ ] ] {c}\sim \\[-2mm]\scriptstyle\lambda\sim 0\end{array } \frac{4\ln 2 \left(x - x_p\right)^2}{h^2}\cdot\left [ 1-\left(x - x_p\right)\lambda \right].\ ] ] for the function converts to ,\ ] ] that is gaussian distribution with root mean square this function is convenient for fitting the distributions with abrupt spectrum end .however many experimental distributions are more smooth , and suggested in this paper function can be more successful .it is suggested to build the fitting function on the base of convolution of gaussian and exponential distributions , which can be easily derived : \times \\[3mm]\rule{35mm}{0mm}\times \exp\left [ -\frac{x - x_g}{\lambda } + \frac{\sigma_g^2}{2\lambda^2 } \right ] .\end{array}\ ] ] integral of this function over all equals 1 .such a function was first used by the author in 2004 at slac ( babar note # 582 ) for fitting the deposited energy distributions in calorimeter with the aim of peak position and resolution determination for the algorithms of absolute photon energy calibration , and despite some technical difficulties , this function proved to be enough convenient for fitting such distributions , especially for high statistics .technical difficulties appear when the argument of erf function is big if the argument of erf function in formula ( [ eq : fitcurve1explicit ] ) is denoted as then the formula ( [ eq : fitcurve1explicit ] ) looks like : \exp\left[\frac{z\sigma_g\sqrt{2}}{\left|\lambda\right|}- \frac{\sigma_g^2}{2\lambda^2 } \right].\ ] ] if , then exponential index also goes to infinity , and so the ambiguity of the type arises . because of finite accuracy of computer calculations , this difficulty appears rather fast , just for moderate values of . to avoid this problem one can use the asymptotic expansion , \;\;\ ; k_0\leq z^2 .\end{array}\ ] ] substituting this expansion to ( [ eq : fitcurve1reduced ] ) , we obtain : \cdot \left[1+\sum\limits_{k=1}^{k_0}\frac{(-1)^k(2k-1)!!}{2^kz^{2k } } \right ] .\end{array}\ ] ] here for big values no ambiguities appear , function goes to 0 .this very expansion allows to find the limit for . indeed , and \ ] ] let us consider the limit . and substituting this to ( [ eq : fitcurve1reduced ] ) , we get \frac{1}{\left|\lambda\right|}\exp\left[-\frac{(x - x_g)}{\lambda}\right ] , \;\ ; \frac{(x - x_g)}{\lambda}>0 . 
\end{array}\right.\ ] ] function plots for several sets of parameters are presented in fig.[sampleoffb1 ] .= 0.49 = 0.49 sometimes the integral distribution can be useful : -\\[1 mm ] \rule{40mm}{0 mm } -\frac{\lambda}{2\left|\lambda\right| } e^{\frac{\sigma_g^2}{2\lambda^2}-\frac{x - x_g}{\lambda}}\left [ 1-\mathrm{erf}\left(\frac{\sigma_g}{\sqrt{2}\left|\lambda\right|}- \frac{\left(x - x_g\right)\left|\lambda\right|}{\sqrt{2}\sigma_g\lambda } \right)\right ] .\end{array}\ ] ] usage of to fit the distributions would be more convenient , if the free parameter is location of function maximum , instead of .equation for the search for looks rather complicated : \end{array}\ ] ] or in other notations where . for : } \longrightarrow z_m\approx -\sqrt{\ln\frac{\rho}{2-\frac{2}{\rho\sqrt{\left|\ln\frac{\rho}{2}\right| } \sqrt{\pi}}\cdot\left [ 1-\frac{1}{2\ln\frac{\rho}{2}}\right ] } } \longrightarrow \\[7mm]\rule{5mm}{0mm}\longrightarrow -\frac{\left(x_m - x_g\right)\lambda}{\sigma_g\left|\lambda\right|\sqrt{2 } } + \frac{\sigma_g}{\left|\lambda\right|\sqrt{2}}= -\sqrt{\ln\frac{\left|\lambda\right|}{\sigma_g\sqrt{2\pi } } } \end{array}\ ] ] or {c } \approx \\[-2 mm ] \scriptstyle \sigma_g\ll |\lambda| \end{array } \frac{\sigma_g^2}{\lambda}+\frac{\sigma_g\left|\lambda\right|\sqrt{2}}{\lambda}\cdot \sqrt{\ln\frac{\left|\lambda\right|}{\sigma_g\sqrt{2\pi } } } \to 0.\ ] ] for , and here we also can derive approximate solution : = \rho\exp\left(-z_m^2\right)\ ] ] or from ( [ eq : peakcondition ] ) one can derive more terms of taylor series : {c } \approx \\[-2 mm ]\scriptstyle |\lambda| \ll \sigma_g \end{array } \lambda\cdot\left[1-\left(\frac{\lambda}{\sigma_g}\right)^2 + \left(\frac{\lambda}{\sigma_g}\right)^4+\ldots \right].\ ] ] let us return to the equation ( [ eq : maxfb1 ] ) , which should be solved in order to find maximum location .let us transform the interval of variable to the interval : at the ends of the interval we know the solution : \mu \sim 1\longrightarrow z_m\approx \frac{\sqrt{\pi}\ln\mu}{2}-\frac{1}{\sqrt{\pi}\ln\mu } \end{array}\ ] ] let us look for approximating function in the form : function plot and the approximating cubic spline with 5 knots are presented in fig.[fpapproximation ] .= 0.7 root mean square deviation equals , maximum error of interpolation is achieved at .spline coefficients are cited in table[tab : fpspline ] ..[tab : fpspline]spline of deficiency 2 coefficients for approximation of function . [ cols="<,^,^,^,^,^",options="header " , ] now we can calculate the shift of peak position vs gaussian center : the function can be easily implemented in any programming language , using the above formulae .in some cases the function itself can be suitable for fitting .however more often this function is not enough flexible to provide satisfactory confidence level with experimental distribution .one could try to use the function which is the convolution of the three distributions : gaussian and two different exponential .such a function could be useful if the distribution `` tails '' both to the right and left from the peak do not match gaussian distribution .however in this case and in other difficult ones the distributions are fitted more successfully to the sum of different functions with the common parameter peak position . 
for the sum of two functions one can use the following expression : \rule{51mm}{0mm}+ \sin^2\xi\cdot f_{b1}\left(x;x_m-\delta x_{mg}(\sigma_2,\lambda_2),\sigma_2,\lambda_2\right ) \end{array}\ ] ] sample of using such a function is presented in fig.[fitetram0acfb1 ] (one more parameter is added common factor ) .= 0.49 = 0.49 in principle for complicated cases one can use the sum of more functions .however for so many free parameters the likelihood function minimization can be unstable , and one need to help minuit program .the simplest and enough effective trick is optimization of parameters in turn , initially fixed at some reasonable values .for fitting the smooth distributions with one peak a function is suggested , which is the convolution of gaussian and exponential distributions . | in the paper a new fitting function is suggested , which can essentially increase the existing instrumentation for fitting of asymmetric peaks with the only maximum . |
in this paper , we study the following hypothesis testing problem introduced by .one observes an -dimensional vector .the null hypothesis is that the components of are independent and identically distributed ( i.i.d . )standard normal random variables .we denote the probability measure and expectation under by and , respectively . to describe the alternative hypothesis , consider a class of sets of indices such that for all .under , there exists an such that where is a positive parameter .the components of are independent under as well .the probability measure of defined this way by an is denoted by .similarly , we write for the expectation with respect to . throughout , we will assume that every has the same cardinality .a test is a binary - valued function .if then we say that the test accepts the null hypothesis , otherwise is rejected .one would like to design tests such that is accepted with a large probability when is distributed according to and it is rejected when the distribution of is for some . following , we consider the risk of a test measured by this measure of risk corresponds to the view that , under the alternative hypothesis , a set is selected uniformly at random and the components of belonging to have mean . in the sequel , we refer to the first and second terms on the right - hand side of ( [ eq : bayes_risk ] ) as the type i and type ii errors , respectively .we are interested in determining , or at least estimating the value of under which the risk can be made small .our aim is to understand the order of magnitude , when is large , as a function of , , and the structure of , of the value of the smallest for which risk can be made small .the value of for which the risk of the best possible test equals is called _ critical_. typically , the components of represent weights over the edges of a given graph and each is a subgraph of .when then the edge is `` contaminated '' and we wish to test whether there is a subgraph in that is entirely contaminated . in , two examples were studied in detail . in one case, contains all paths between two given vertices in a two - dimensional grid and in the other is the set of paths from root to a leaf in a complete binary tree . in both cases ,the order of magnitude of the critical value of was determined . investigate another class of examples in which elements of correspond to clusters in a regular grid .both and describe numerous practical applications of problems of this type .some other interesting examples are when is : * the set of all subsets of size ; * the set of all cliques of a given size in a complete graph ; * the set of all bicliques ( i.e. , complete bipartite subgraphs ) of a given size in a complete bipartite graph ; * the set of all spanning trees of a complete graph ; * the set of all perfect matchings in a complete bipartite graph ; * the set of all sub - cubes of a given size of a binary hypercube .the first of these examples , which lacks any combinatorial structure , has been studied in the rich literature on multiple testing ; see , for example , , , and the references therein . as pointed out in , regardless of what is, one may determine explicitly the test minimizing the risk .it follows from basic results of binary classification that for a given vector , , if and only if the ratio of the likelihoods of under and exceeds . 
writing and for the probability densities of and , respectively , the likelihood ratio at is where .thus , the optimal test is given by the risk of ( often called the bayes risk ) may then be written as we are interested in the behavior of as a function of and .clearly , is a monotone decreasing function of .( this fact is intuitively clear and can be proved easily by differentiating with respect to . ) for sufficiently large , is close to zero while for very small values of , is near its maximum value , indicating that testing is virtually impossible .our aim is to understand for what values of the transition occurs .this depends on the combinatorial and geometric structure of the class .we describe various general conditions in both directions and illustrate them on examples . also consider the risk measure clearly , and when there is sufficient symmetry in and , we have equality .however , there are significant differences between the two measures of risk .the alternative measure obviously satisfies the following monotonicity property : for a class and parameter , let denote the smallest achievable risk . if are two classes then for any , .in contrast to this , the `` bayesian '' risk measure does not satisfy such a monotonicity property as is shown in section [ nonmonotone ] . in this paper, we focus on the risk measure . throughout the paper we assume , for simplicity , that each set has the same cardinality .we do this partly in order to avoid technicalities that are not difficult but make the arguments less transparent . at the same time , in many natural examples this condition is satisfied . if may contain sets of different size such that all sets have approximately the same number of elements , then all arguments go through without essential changes . however , if contains sets of very different size then the picture may change because large sets become much easier to detect and small sets can basically be ignored .another approach to handle sets of different size , adopted by , is to change the model of the alternative hypothesis such that the level of contamination is appropriately scaled depending on the size of the set .the paper is organized as follows . in section [ simpletests ] , we briefly discuss two suboptimal but simple and general testing rules ( the _ maximum test _ and the _ averaging test _ ) that imply sufficient conditions for testability that turn out to be useful in many examples . in section [ lowerbounds ] ,a few general sufficient conditions are derived for the impossibility of testing under symmetry assumptions for the class . in section[ examples ] , we work out several concrete examples , including the class of all -sets , the class of all cliques of a certain size in a complete graph , the class of all perfect matchings in the complete bipartite graph and the class of all spanning trees in a complete graph . in section [ nonmonotone ] ,we show that , perhaps surprisingly , the optimal risk is not monotone in the sense that larger classes may be significantly easier to test than small ones , though monotonicity holds under certain symmetry conditions . in the last two sections of the paper , we use techniques developed in the theory of gaussian processes to establish upper and lower bounds related to geometrical properties of the class . 
in section[ hellinger ] , general lower bounds are derived in terms of random subclasses and metric entropies of the class .finally , in section [ typeone ] we take a closer look at the type i error of the optimal test and prove an upper bound that , in certain situations , is significantly tighter than the natural bound obtained for a general - purpose maximum test .as mentioned in the , the test minimizing the risk is explicitly determined .however , the performance of this test is not always easy to analyze . moreover ,efficient computation of the optimal test is often a nontrivial problem though efficient algorithms are available in many interesting cases .( we discuss computational issues for the examples of section [ examples ] . ) because of these reasons , it is often useful to consider simpler , though suboptimal , tests . in this section ,we briefly discuss two simplistic tests , a test based on averaging and a test based on maxima .these are often easier to analyze and help understand the behavior of the optimal test as well . in many cases ,one of these tests turn out to have a near - optimal performance .perhaps the simplest possible test is based on the fact that the sum of the components of is zero - mean normal under and has mean under the alternative hypothesis .thus , it is natural to consider the _ averaging test _ [ average ]let .the risk of the averaging test satisfies whenever observe that under , the statistic has normal distribution while for each , under , it is distributed as .thus , .another natural test is based on the fact that under the alternative hypothesis for some , is normal .consider the _ maximum test _ the test statistic is often referred to as a _ scan statistic _ and has been thoroughly studied for a wide range of applications ; see . here , we only need the following simple observation .[ maxtest ] the risk of the maximum test satisfies whenever in the analysis , it is convenient to use the following simple gaussian concentration inequality ; see .[ tsirelson ] let be an vector of independent standard normal random variables .let denote a lipschitz function with lipschitz constant ( with respect to the euclidean distance ) .then for all , proof of proposition [ maxtest ] simply note that under the null hypothesis , for each , is a zero - mean normally distributed random variable with variance . since is a lipschitz function of with lipschitz constant , by tsirelson s inequality , for all , on the other hand , under for a fixed , and therefore which completes the proof .the maximum test is often easier to compute than the optimal test , though maximization is not always possible in polynomial time .if the value of is not exactly known , one may replace it in the definition of by any upper bound and then the same upper bound will appear in the performance bound .proposition [ maxtest ] shows that the maximum test is guaranteed to work whenever is at least .thus , in order to better understand the behavior of the maximum test ( and thus obtain sufficient conditions for the optimal test to have a low risk ) , one needs to understand the expected value of ( under ) .as the maximum of gaussian processes have been studied extensively , there are plenty of directly applicable results available for expected maxima .the textbook of is dedicated to this topic . 
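before recalling the general bounds , it is instructive to probe the simplest case numerically : for the class of all k - element subsets the scan statistic is just the sum of the k largest coordinates , so its null expectation is easy to estimate by monte carlo and to compare with the elementary union - bound estimate \sqrt{2k\log|\mathcal{c}| } . the sketch below uses illustrative sizes only and untuned constants .

import math
import numpy as np

n, k = 200, 10                     # illustrative sizes
rng = np.random.default_rng(0)

def max_ksets_statistic(x, k):
    """max over all k-sets S of sum_{i in S} x_i = sum of the k largest coordinates."""
    return np.sort(x)[-k:].sum()

reps = 2000
vals = np.array([max_ksets_statistic(rng.standard_normal(n), k) for _ in range(reps)])
log_card = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)  # log C(n, k)
print("monte carlo estimate of E_0 max_S X_S :", vals.mean())
print("union-bound estimate sqrt(2 k log|C|) :", math.sqrt(2.0 * k * log_card))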
here, we only recall some of the basic facts .first , note that one always has but sharper bounds can be derived by chaining arguments ; see for an elegant and advanced treatment .the classical chaining bound of works as follows . introduce a metric on by where denotes the hamming distance . for ,let denote the -covering number of with respect to the metric , that is , the smallest number of open balls of radius that cover .by dudley s theorem , there exists a numerical constant such that where denotes the diameter of the metric space .note that since for all , .dudley s theorem is not optimal but it is relatively easy to use .dudley s theorem has been refined , based on `` majorizing measures , '' or `` generic chaining '' which gives sharp bounds ; see , for example , .[ rmk : vc ] in certain cases , it is convenient to further bound dudley s inequality in terms of the vc dimension ; see .recall that the vc dimension of is the largest positive integer such that there exists an -element set such that for all subsets there exists an such that . proved that the covering numbers of may be bounded as so by dudley s bound , an interesting alternative to the maximum test , proposed and investigated by and , is based on the idea that under the null hypothesis the distribution of the vector does not change if the sign of each component is changed randomly , while under the alternative hypothesis the distribution changes . in and ,methods based on symmetrization and bootstrap are suggested and analyzed .such tests are meaningful and interesting in the setup of the present paper as well and it would be interesting to analyze their behavior .in this section , we investigate conditions under which the risk of any test is large .we start with a simple universal bound that implies that regardless of what the class is , small risk can not be achieved unless is substantially large compared to .an often convenient way of bounding the bayes risk is in terms of the bhattacharyya measure of affinity [ ] it is well known [ see , e.g. , , theorem 3.1 ] that thus , essentially behaves as the bayes error in the sense that is near when is near , and is small when is small . observe that , by jensen s inequality , straightforward calculation shows that for any , and therefore we have the following .[ universal ] for all classes , whenever .this shows that no matter what the class is , detection is hopeless if is of the order of .this classical fact goes back to .the next lemma is due to . for completeness, we recall their proof .[ pairs ] let and be drawn independently , uniformly , at random from and let . then as noted above , by the cauchy schwarz inequality , since , -1.\ ] ] however , by definition , so we have = \frac{1}{n^2 } \sum_{s , s'\in{\mathcal{c } } } e^{-k \mu^2 } \mathbb{e}_0 e^{\mu(x_s+x_{s'})}.\ ] ] but \\ & = & ( \mathbb{e}_0 e^{\mu x } ) ^{2(k-|s\cap s'| ) } ( \mathbb{e}_0 e^{2\mu x } ) ^{|s\cap s'| } \\ & = & e^{\mu^2 ( k-|s\cap s'|)+2\mu^2|s\cap s'|},\end{aligned}\ ] ] and the statement follows .the beauty of this proposition is that it reduces the problem to studying a purely combinatorial quantity . 
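as a concrete illustration of how this is used , for the class of all k - element subsets the overlap of two independently drawn sets is hypergeometric , so the moment generating function above can be computed directly ; combined with the second - moment reading of the lemma ( taken here in the standard form 1 - \tfrac12\sqrt{\mathbb{e}\,e^{\mu^2 z}-1 } , an assumption , since the displayed constant was lost above ) , one obtains for instance :

import numpy as np
from scipy.stats import hypergeom

n, k, mu = 200, 10, 0.9            # illustrative sizes and signal strength
# overlap Z of two independent uniform k-sets of {1..n} is hypergeometric(n, k, k)
z = np.arange(0, k + 1)
pmf = hypergeom.pmf(z, n, k, k)
mgf = float(np.sum(pmf * np.exp(mu**2 * z)))          # E exp(mu^2 Z), computed exactly
# assumed second-moment form of the lower bound (not quoted from the text)
risk_lb = max(0.0, 1.0 - 0.5 * np.sqrt(mgf - 1.0))
print("E exp(mu^2 Z)                      :", mgf)
print("second-moment lower bound on risk  :", risk_lb)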
by deriving upper bounds for the moment generating function of the overlap between two elements of drawn independently and uniformly at random ,one obtains lower bounds for the critical value of .this simple proposition turns out to be surprisingly powerful as it will be illustrated in various applications below .we begin by deriving some simple consequences of proposition [ pairs ] under some general symmetry conditions on the class .the following proposition shows that the universal bound of proposition [ universal ] can be improved by a factor of for all sufficiently symmetric classes .[ symmetric ] let .assume that satisfies the following conditions of symmetry .let be drawn independently and uniformly at random from .assume that : the conditional distribution of given is identical for all values of ; for any fixed and , .then for all with we apply proposition [ pairs ] . by the first symmetry assumption , it suffices to derive a suitable upper bound for =\mathbb{e } [ e^{\mu^2 z}| s'] ] for the indicated values of .because of symmetry , = \mathbb{e } [ \exp(\mu^2z ) \vert s ' ] $ ] for all and therefore we might as well fix an arbitrary clique . if denotes the number of vertices in the clique , then .moreover , the distribution of is hypergeometrical with parameters and . if is a binomial random variable with parameters and ,then since is a convex function of , an inequality of implies that = \mathbb{e } [ e^{\mu^2y^2/2 } ] \le\mathbb{e } [ e^{\mu^2b^2/2 } ] .\ ] ] thus , it remains to derive an appropriate upper bound for the moment generating function of the squared binomial . to this end , let be a parameter whose value will be specified later . using and the cauchy schwarz inequality , it suffices to show that \cdot\mathbb{e}\bigl[\exp\bigl(\mu^2kb\mathbh{1}_{\ { b > c{k^2}/{m } \ } } \bigr ) \bigr ] \le4.\ ] ] we show that , if satisfies the condition of ( ii ) , for an appropriate choice of , both terms on the left - hand side are at most .the first term on the left - hand side of ( [ product ] ) is = \biggl ( 1 + \frac{k}{m } \biggl(\exp\biggl(\mu^2c\frac{k^2}{m } \biggr ) -1 \biggr ) \biggr)^k,\ ] ] which is at most if and only if since , this is implied by to bound the second term on the left - hand side of ( [ product ] ) , note that & \le & 1 + \mathbb{e}\bigl[\mathbh{1}_{\ { b > c{k^2/m } \}}\exp(\mu^2 kb ) \bigr ] \\ & \le & 1 + \biggl({\mathbb{p}}\biggl\{b > c\frac{k^2}{m } \biggr\ } \biggr)^{1/2 } ( \mathbb{e}[\exp(\mu^2 kb ) ] ) ^{1/2},\end{aligned}\ ] ] by the cauchy schwarz inequality , so it suffices to show that \le1.\ ] ] denoting , chernoff s bound implies on the other hand , = \biggl ( 1 + \frac{k}{m}\exp(\mu^2k ) \biggr)^k,\ ] ] and therefore the second term on the left - hand side of ( [ product ] ) is at most whenever using , we obtain the sufficient condition summarizing , we have shown that for all satisfying choosing [ which is greater than for , the second term on the right - hand side is at most .now observe that since is convex , for any , .choosing , the first term is at least where we used the condition that and that for all . a closely related problem arising in the exploratory analysis of microarray data [ see ]is when each member of represents the edges of a biclique of the complete bipartite graph where .( a biclique is a complete bipartite subgraph of . 
)the analysis and the bounds are completely analogous to the one worked out above , the details are omitted .intuitively , one would expect that the testing problem becomes harder as the class gets larger .more precisely , one may expect that if are two classes of subsets of , then holds for all .the purpose of this section is to show that this intuition is wrong in quite a strong sense as not only such general monotonicity property does not hold for the risk , but there are classes for which is arbitrary close to and is arbitrary close to for the same value of .however , monotonicity does hold if the class is sufficiently symmetric .call a class _ symmetric _ if for the optimal test the value of is the same for all .note that several of the examples discussed in section [ examples ] satisfy the symmetry assumption , such as the classes of -sets , stars , perfect matchings , and cliques .however , the class of spanning trees is not symmetric in the required sense .[ symmetricclass ] let be a symmetric class of subsets of .if is an arbitrary subclass of , then for all , . in this proof , we fix the value of and suppress it in the notation .recall the definition of the alternative risk measure which is to be contrasted with our main risk measure the risk is obviously monotone in the sense that if then for every , .let and denote the optimal tests with respect to both measures of risk .first , observe that if is symmetric , then .but since for every , we have this means that all inequalities are equalities and , in particular , .now if is an arbitrary subclass of , then which completes the proof. for every there exist , , and classes such that and .we work with distances . for any class , denote . recall that given , we fix an integer large enough that and that and let .we let consist of disjoint subsets of , each of size .we let consist of all sets of the form , where ranges from to , and assume has been chosen so that .we then let .we take so that , as seen in section [ canonical ] , we have . we will require an upper bound on , which we obtain by considering the averaging test on variables , just as in proposition [ average ] , we have whenever , which is indeed the case by our choices of and .it follows that .we remark that we let ; then , and note thus , . observe that nonmonotonicity of the bhattacharyya affinity also follows from the same argument . to this end, we may express in function of the hellinger distance as .recalling [ see , e.g. , , page 225 ] that we see that the same example as in the proof above , for large enough , shows the nonmonotonicity of the bhattacharyya affinity as well .in this section , we derive lower bounds for the bayes risk .the bounds are in terms of some geometric features of the class .again , we treat as a metric space equipped with the canonical distance [ i.e. , the square root of the hamming distance . 
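for reference, the identity invoked a few lines above, relating the bhattacharyya affinity of the null and of the uniform mixture over the alternatives to the hellinger distance, is the standard one (our reconstruction of the stripped display; the notation for the mixture is ours):

```latex
% hellinger distance vs. bhattacharyya affinity (standard identity):
\[
  H^2\bigl(\mathbb{P}_0,\bar{\mathbb{P}}\bigr)
  \;=\; \int \Bigl(\sqrt{d\mathbb{P}_0}-\sqrt{d\bar{\mathbb{P}}}\Bigr)^{2}
  \;=\; 2\Bigl(1-\rho\bigl(\mathbb{P}_0,\bar{\mathbb{P}}\bigr)\Bigr),
  \qquad\text{i.e.}\qquad
  \rho \;=\; 1-\tfrac12 H^{2}.
\]
```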
for an integer , we define a real - valued parameter of the class as follows .let be obtained by choosing elements of at random , without replacement .let the random variable denote the smallest distance between elements of and let be a median of .[ randomclassbound ] let be an integer .then for any class , whenever to interpret the statement of the theorem , note that latexmath:[\[k-\tau^2/2 = { \mathop{\max_{s , t\in{\mathcal{a}}}}_{s\neq t } } .thus , just like in proposition [ pairs ] , the distribution of the overlap between random elements of plays a key role in establishing lower bounds for the optimal risk .however , while in proposition [ pairs ] the moment generating function of the overlap between two random elements determines an upper bound for the critical value of , here it is the median of the largest overlap between many random elements that counts .the latter seems to carry more information about the fine geometry of the class .in fact , invoking a simple union bound , upper bounds for may be used together with theorem [ randomclassbound ] . in applications ,often it suffices to consider the following special case .[ medianzero ] let be the largest integer for which zero is a median of where is a random subset of of size [ i.e. , .then for all .to illustrate the corollary , consider the following example which is the simplest in a family of problems investigated by : assume that and are both perfect squares and that the indices are arranged in a grid .the class contains all sub - squares .now if and are randomly chosen elements of ( with or without replacement ) then , if , and therefore which is at least if in which case .thus , by corollary [ medianzero ] , for all .this bound is of the optimal order of magnitude as it is easily seen by an application of proposition [ maxtest ] . in some other applications ,a better bound is obtained if some overlap is allowed . a case in pointis the example of stars from section [ stars ] . in that case , any two elements of overlap but by taking , we have , so theorem [ randomclassbound ] still implies whenever . the main tool of the proof of theorem [ randomclassbound ] is slepian s lemma which we recall here [ ] .[ for this version , see , theorem 3.11 . 
] [ slepian ] let be zero - mean gaussian vectors such that for each , let be such that for all and , then .proof of theorem [ randomclassbound ] let be fixed and choose sets from uniformly at random ( without replacement ) .let denote the random subclass of obtained this way .denote the likelihood ratio associated to this class by where .then the optimal risk of the class may be lower bounded by denoting by expectation with respect to the random choice of , we have } \nonumber\\ \eqntext{\mbox{(since the variance of a sample without replacement } } \\ \eqntext{\mbox{is less than that with replacement ) } } \\ & = & \widehat{\mathbb{e } } r^*_{\mathcal{a}}(\mu ) - \frac{1}{2\sqrt{m } } \sqrt { \frac{1}{n } \sum_{t\in{\mathcal{c } } } \mathbb{e}_0 \biggl ( v_t - \frac{1}{n } \sum_{s\in{\mathcal{c } } } v_s \biggr)^2}.\nonumber\end{aligned}\ ] ] an easy way to bound the right - hand side is by writing summarizing , we have where we used the assumption that .thus , it suffices to prove that .we bound the optimal risk associated with in terms of the bhattacharyya affinity recalling from section [ lowerbounds ] that and using that is concave , we have therefore , it suffices to show that the expected bhattacharyya affinity corresponding to the random class satisfies in the argument below , we fix the random class , relabel the elements so that , and bound from below .denote the minimum distance between any two elements of by . to bound , we apply slepian s lemma with the function where .simple calculation shows that the mixed second partial derivatives of are negative , so slepian s lemma is indeed applicable .next , we introduce the random vectors and .let the components of be indexed by elements and define .thus , under , each is normal and is just the bhattacharyya affinity . to define the random vector ,introduce independent standard normal random variables : one variable for each and an extra variable . recall that the definition of guarantees that the minimal distance between any two elements of as at least .now let then clearly for each , and ( ) . on the other hand , and therefore , by slepian s lemma , .however , to finish the proof , it suffices to observe that the last expression is the bhattacharyya affinity corresponding to a class of disjoint sets , all of size , of cardinality .this case has been handled in the first example of section [ examples ] where we showed that where again we used the condition and the fact that . therefore , under this condition on , we have that for any fixed , and therefore where is the median of .this concludes the proof . at the risk of losing a constant factor in the statement of theorem [ randomclassbound ], one may replace the parameter by a larger quantity .the idea is that by thinning the random subclass one may consider a subset of that has better separation properties .more precisely , for an even integer we may define a real - valued parameter of the class as follows .let be obtained by choosing elements of at random , without replacement .order the elements of such that and define the subset by . 
let the random variable denote the smallest distance between elements of and let be the median of .it is easy to see that the proof of theorem [ randomclassbound ] goes through , and one may replace by ( by adjusting the constants appropriately ) .one simply needs to observe that since each is nonnegative , if is significantly larger than , the gain may be substantial .if the class is symmetric then thanks to theorem [ symmetricclass ] , the theorem above can be improved and simplified . if the class is symmetric , instead of having to work with randomly chosen subclasses , one may optimally choose a separated subset .then the bounds can be expressed in terms of the metric entropy of , more precisely , by its _ packing numbers _ with respect to the canonical distance .we say that is a -separated set ( or -packing ) if for any , . for , define the _ packing number _ as the size of a maximal -separated subset of .it is a simple well - known fact that packing numbers are closely related to the covering numbers introduced in section [ simpletests ] by the inequalities .let be symmetric in the sense of theorem [ symmetricclass ] and let .then whenever let be a maximal -separated subclass . since is symmetric , by theorem [ symmetricclass ] , so it suffices to show that for the indicated values of .the rest of the proof is identical to that of theorem [ randomclassbound ] . to interpret this result ,take for some .then , by the theorem , as an example , suppose that the class is such that there exists a constant such that .( recall that all classes with vc dimension have an upper bound of this form for the packing numbers , see remark on page . ) in this case , one may choose and obtain that whenever ( for some constant ) .this closely matches the bound obtained for the maximum test by dudley s chaining bound .in all examples considered above , upper bounds for the optimal risk are derived by analyzing either the maximum test or the averaging test .as the examples show , very often these simple tests have a near - optimal performance .the optimal test is generally more difficult to study . in this section ,we analyze directly the performance of the optimal test .more precisely , we derive general upper bounds for the type i error ( i.e. , the probability that the null hypothesis is rejected under ) of .the upper bound involves the expected value of the maximum of a gaussian process indexed by a sparse subset of and can be significantly smaller than the maximum over the whole class that appears in the performance bound of the maximum test in proposition [ maxtest ] .unfortunately , we do not have an analogous bound for the type ii error .we consider the type i error of the optimal test an easy bound is so thus , whenever . of course, we already know this from proposition [ maxtest ] where this bound was derived for the ( suboptimal ) test based on maxima . in order to understand the difference between the performance of the optimal test and the maximum test, one needs to compare the random variables and .[ typeonebound ] for any , the type error of the optimal test satisfies whenever where is any -cover of . if is a minimal -cover of , then by `` sudakov s minoration '' [ see , theorem 3.18 ] this upper bound is sharp up to a constant factor .it is instructive to compare this bound with that of proposition [ maxtest ] for the performance of the maximum test . 
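the separation parameters introduced above, and in particular corollary [ medianzero ], are easy to probe numerically. the snippet below returns to the sub-square example: it draws a number of random sqrt(k) x sqrt(k) sub-squares of the sqrt(n) x sqrt(n) grid and estimates the probability that they are pairwise disjoint, i.e. that zero is a median of the largest pairwise overlap; the largest number of draws for which this probability stays above 1/2 is (our reading of) the N of the corollary.

```python
# corollary [medianzero] for the sub-square class: estimate, by monte carlo,
# the probability that N uniformly random axis-aligned side x side sub-squares
# of a grid x grid board are pairwise disjoint.
import numpy as np

rng = np.random.default_rng(1)

def all_disjoint_prob(grid, side, N, trials=2000):
    hi = grid - side + 1                     # admissible top-left corners
    hits = 0
    for _ in range(trials):
        tl = rng.integers(0, hi, size=(N, 2))
        ok = True
        for i in range(N):
            for j in range(i + 1, N):
                # two squares overlap iff both corner offsets are < side
                if (abs(tl[i, 0] - tl[j, 0]) < side and
                        abs(tl[i, 1] - tl[j, 1]) < side):
                    ok = False
                    break
            if not ok:
                break
        hits += ok
    return hits / trials

grid, side = 64, 8                           # n = 4096, k = 64
for N in (2, 4, 8, 16, 32):
    print(N, all_disjoint_prob(grid, side, N))
```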
in proposition[ typeonebound ] , we were able to replace the expected maximum by where now the maximum is taken over a potentially much smaller subset .it is not difficult to construct examples when there is a substantial difference , even in the order of magnitude , between the two expected maxima so we have a genuine gain over the simple upper bound of proposition [ maxtest ] .unfortunately , we do not know if an analog upper bound holds for the type ii error of the optimal test . in cases when , we suspect that the maximum test is far from optimal .however , to verify this conjecture , one would need a similar analysis for the type ii error as well .proof of proposition [ typeonebound ] introduce the notation then we use tsirelson s inequality ( lemma [ tsirelson ] ) to bound this probability . to this end, we need to show that the function defined by is lipschitz [ where . observing that we have andtherefore is indeed lipschitz . by tsirelson s inequality, we have thus , the type i error is bounded by if it remains to bound .let be a positive integer and consider a minimal -cover of the set , that is , a set with cardinality such that , if denotes an element in whose distance to is minimal then for all .then clearly , to bound the first term on the right - hand side , note that , by jensen s inequality , since for each , and therefore is a centered normal random variable with variance . for the second term, we have choosing , we obtain the proposition .we thank ery arias - castro and emmanuel cands for discussions on the topic of the paper .we also thank the referees for their valuable remarks .tsirelson , b. s. , ibragimov , i. a. and sudakov , v. n. ( 1976 ) .norm of gaussian sample function . in _ proceedings of the 3rd japan symposium on probability theory_. _ lecture notes in math . _ * 550 * 2041 .springer , berlin . | we study a class of hypothesis testing problems in which , upon observing the realization of an -dimensional gaussian vector , one has to decide whether the vector was drawn from a standard normal distribution or , alternatively , whether there is a subset of the components belonging to a certain given class of sets whose elements have been `` contaminated , '' that is , have a mean different from zero . we establish some general conditions under which testing is possible and others under which testing is hopeless with a small risk . the combinatorial and geometric structure of the class of sets is shown to play a crucial role . the bounds are illustrated on various examples . , , + and . . |
loop corrections modify the coulomb potential : electron loop insertion into the photon propagator leads to the uehling serber correction to the electric potential of point - like nuclei . though it leads to an important phenomenon contributing to the lamb shift of the energies of atomic electrons numerically the shift being of the order of is small ( we are using gauss system of units , where and in all formulas is implied ) . analogous correction in the case of external magnetic fieldqualitatively change the behavior of atomic energies : in particular the energy of the ground level remains finite in the limit ; also spontaneous production of positrons becomes possible only for nuclei with . without taking radiative corrections into account in the limit of infinite magnetic field energy of ground atomic level tends to minus infinity and point - like nucleus with any becomes critical at large enough . at magnetic fields the characteristic size of the electron wave function in the transverse to the magnetic field direction ( the so - called landau radius ) becomes smaller than bohr radius making the coulomb problem essentially one - dimensional .singularity of the coulomb potential in is stronger than in . in energy of the ground level is unbounded from below : the `` fall to the center '' phenomenon occurs . in the case of external magnetic fieldthe singularity is cured by the finite value of : at the coulomb problem remains three dimensional .this is the reason why ground level goes down when grows . at superstrong magnetic fields radiative corrections screen the coulomb potential at short distances and the freezing of a ground state energy occurs : it remains finite at ( , , ) .for the value of freezing energy is below , so the ground level enters lower continuum when increases and spontaneous production of pair from vacuum becomes energetically possible and thus takes place . in this process electron occupies ground level while positron is emitted to infinity . for freezing energyis above and spontaneous positron production does not occur .there exists the direct correspondence between radiative corrections to the coulomb potential in case in strong magnetic field and radiative corrections to the coulomb potential in qed .that is why we start our presentation ( in section 2 ) from the analysis of the coulomb potential in qed of massive fermions . when these fermions are light , , the exponential screening of the coulomb potential at short distances occurs . in the limit ( massless qed , the so - called schwinger model )this exponential screening occurs at all distances because photon gets mass , . in section 3we analyze radiative corrections to the coulomb potential in qed in external magnetic field .the role of the coupling constant here plays the product , and for the screening of the coulomb potential occurs as well . in section 4 the structure of atomic levels on which the lowest landau level ( lll ) in the presence of atomic nucleus splits is determined . in section 5the dirac equation for hydrogenlike ion at superstrong magnetic field will be derived and effect of screening will be studied for . in section 6the influence of the screening of the coulomb potential on the values of critical nuclei charges is discussed . 
in section 7the obtained results are summarized .let us finish the introduction discussing the numerical values of magnetic fields we are dealing with in these lectures .the magnetic field at which the bohr radius of a hydrogen atom becomes equal to landau radius is gauss , which is much larger than a magnetic field ever made artificially on earth : gauss .an interest to the atomic spectrum in the magnetic fields was triggered by the experiments with semiconductors , where electron - hole bound system called exciton is formed .both effective charge and mass of electrons in semiconductors are much lower than in vacuum making in kilogauss scale reachable .the so - called schwinger magnetic field gauss and magnetic field at which the screening of the coulomb potential occurs gauss should be compared with the magnetic fields at pulsars gauss and magnetars gauss .although the application of the results obtained in the condensed matter physics ( say graphen , where the mass of charge carrier can be arbitrary low while the value of charge approach one ) can not be excluded , our main interest in the problem considered is purely theoretical .summing up diagrams shown in fig .1 we get the following formula for the potential of point - like charge : where is the one - loop expression for the photon polarization operator .it can be obtained from the textbook expression for the polarization operator calculated with the help of dimensional regularization in the limit : \equiv -4g^2 p(t ) \;\ ; , \label{2}\ ] ] where = $ ] mass .fig . 1 . _ modification of the coulomb potential due to the dressing of the photon propagator ._ let us note that is finite though corresponding integral is divergent in ultraviolet .the point is that the trace of gamma matrices which multiplies divergent integral is zero . in dimensionalregularization the trace is proportional to while ultraviolet divergency of integral over virtual momentum produces the factor and the product of these two factors is finite . in order to obtain an expression for the coulomb potential in the coordinate representation we take andmake the fourier transformation : the potential energy for the charges and is the integral in ( [ 3 ] ) can not be expressed through elementary functions. however it appears possible to find an interpolating formula for which has good accuracy and is simple enough for the fourier transformation to be performed analytically .the asymptotics of are : let us take as an interpolating formula for the following expression : the accuracy of this approximation is not worse than 10% for the whole interval of variation , .substituting an interpolating formula in ( [ 3 ] ) we get : e^{ik_\parallel z } \frac{dk_\parallel}{2\pi } = \\ & = & \frac{4\pi g}{1 + 2g^2/3m^2}\left[-\frac{1}{2}|z| + \frac{g^2/3m^2}{\sqrt{6m^2 + 4g^2 } } { \rm exp}(-\sqrt{6m^2 + 4g^2}|z|)\right ] \;\ ; .\nonumber \label{7}\end{aligned}\ ] ] in the case of heavy fermions ( ) the potential is given by the tree level expression ; the corrections are suppressed as . in the case of light fermions ( ) : corresponds to schwinger model ; photon gets mass .light fermions make transition from to continuous . in the case of light fermions the coulomb potential in qedis screened at distances . in fig .2 the potential energy for is shown .it is normalized to . fig.2 ._ potential energy of the charges and in .the solid curve corresponds to ; the dashed curve corresponds to . 
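the interpolating-formula result for the dressed potential quoted just above is explicit enough to evaluate directly, and a few lines of numpy reproduce the comparison of fig. 2 between the heavy-fermion (g << m) and light-fermion (g >> m) regimes; nothing is assumed beyond the formula as printed.

```python
# direct evaluation of the D=2 dressed coulomb potential quoted above,
#   phi(z) = 4*pi*g/(1 + 2g^2/(3m^2)) * [ -|z|/2
#            + (g^2/(3m^2))/sqrt(6m^2+4g^2) * exp(-sqrt(6m^2+4g^2)|z|) ],
# compared with the tree-level linear potential -2*pi*g*|z|.
import numpy as np

def phi(z, g, m):
    z = np.abs(z)
    mass = np.sqrt(6.0 * m**2 + 4.0 * g**2)
    pref = 4.0 * np.pi * g / (1.0 + 2.0 * g**2 / (3.0 * m**2))
    return pref * (-0.5 * z + (g**2 / (3.0 * m**2)) / mass * np.exp(-mass * z))

z = np.linspace(0.01, 5.0, 6)
for g_over_m in (0.1, 10.0):
    g, m = g_over_m, 1.0
    print(g_over_m, phi(z, g, m), -2.0 * np.pi * g * np.abs(z))
```

for g >> m the prefactor suppresses the linear term by a factor of order m^2/g^2 and the exponential term decays with "mass" sqrt(6m^2+4g^2) -> 2g, which is the screening at all distances described in the text; for g << m the output reduces to the tree-level linear potential with small corrections.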
_in order to find the potential of a point - like charge we need an expression for photon polarization operator in the external magnetic field . long ago an expression for the electron propagator in constant and homogeneous external magnetic field was found by schwinger as a parametric integral . for integrationcan be easily performed and compact expression for follows . using it one obtains an analytic expression for a photon polarization operator( see for example ) . to understand the reason for great simplification of the expression for the electron propagator in the limit one should start from the spectral representation of the propagator .the solutions of dirac equation in the homogeneous constant in time are known , so one can write the spectral representation of the electron green function .the denominators contain , and for and in sum over levels the lowest landau level ( lll , ) dominates . in the coordinate representationthe transverse part of lll wave function is : which in the momentum representation gives ( we suppose that is directed along the axis ) . substituting the electron green functions we get the expression for the polarization operator in superstrong . for , the following expression is valid : with the help of it the following result was obtained in : \;\ ; . \label{11}\end{aligned}\ ] ] for the magnetic fields the potential is coulomb up to small power suppressed terms : \label{12}\ ] ] in full accordance with the case with the substitution . in the opposite case of the superstrong magnetic fields we get : the coulomb potential is screened at short distances . in fig .3 the plot of a modified by the superstrong magnetic field coulomb potential as well as its short- and long - distance asymptotics are presented .3 . _ the modified coulomb potential at g ( blue , dark solid ) and its long distance ( green , pale solid ) and short distance ( red , dashed ) asymptotics ._ let us find the 3-dimensional shape of the screened coulomb potential .the behavior of the potential in the transverse plane ( ) can be found analytically in the limit from the general expression : neglecting exponent in the denominator , which is valid for : and the coulomb potential is screened at large in complete analogy with the case . for in the integral ( [ 141 ] )the values dominate and we get : fig .4 . _ the equipotential lines at . the dashed line corresponds to _ in fig .4 the equipotential lines are shown .the behavior of the screened coulomb potential in the transverse plane was found numerically in . finally for expanding ( [ 143 ] )we get : which coincides with the result obtained in where the expression for the photon polarization operator at was obtained as well .the expression for the screened coulomb potential was obtained from the one - loop contribution to the photon polarization operator in the external magnetic field . in momentum spaceit looks like : if higher loops contain the terms they will drastically change the shape of the potential in the coordinate space . 
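the numerical values of the characteristic fields mentioned in the introduction did not survive extraction, so it may be useful to restate the scales. the snippet below reconstructs them from the definitions used in the text (bohr radius equal to landau radius for the "atomic" field, e^3 B ~ m_e^2 for the onset of screening); only the parametric screening scale is given, without the order-one prefactor of the text.

```python
# characteristic magnetic-field scales in cgs-gauss units, reconstructed from
# their definitions; alpha = e^2/(hbar c).
alpha = 1.0 / 137.036
B_schwinger = 4.414e13                     # m_e^2 c^3 / (e hbar), gauss
B_atomic = alpha**2 * B_schwinger          # landau radius = bohr radius
B_screening = B_schwinger / alpha          # e^3 B ~ m_e^2; parametric scale only,
                                           # the O(1) prefactor of the text is omitted
print(B_atomic, B_schwinger, B_screening)  # ~2.4e9, 4.4e13, ~6e15 gauss
```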
to calculate the radiative corrections one should use the electron propagator in an external homogeneous magnetic field .the spectral representation of the electron propagator is a sum over landau levels and for the contribution of the lowest level dominates : where the projector selects the virtual electron state with its spin oriented opposite to the direction of the magnetic field .the contributions of the excited landau levels to yield a term in the denominator proportional to and they produce a correction of order in the denominator of ( [ 188 ] ) .two kinds of terms contribute to the polarization operator at the two - loop level .first , there are terms in the electron propagators which represent the contributions of higher landau levels . just likein the one - loop case they produce corrections suppressed as in the denominator of ( [ 188 ] ) , i.e. terms of the order which can be safely neglected in comparison with the leading term .second , there is the contribution from the leading term in the electron propagator , given by ( [ 189 ] ) .let us consider the simplest diagram : the photon dressing of the electron propagator . neglecting the electron mass we get : = -2\hat k_{0,3}(1+i\gamma_1 \gamma_2 ) \;\ ; , \label{190}\ ] ] which gives zero when multiplying external propagator of electron , since .this result is a manifestation of the following well - known fact : in massless qed in ( schwinger model ) all loop diagrams are zero except the one - loop term in the photon polarization operator ( see for example ) .that is why the contributions of the second kind are of the order of and they are not important .the generalization of the above arguments to higher loops is straightforward .let us note that absence of higher loop corrections to polarization operator in schwinger model is related to the absence of renormalization of axial anomaly by higher loops . in anomalyis given by correlator of two currents and axial current is proportional to vector current ( see for example ) .we are interested in the spectrum of a hydrogen - like ion in a very strong magnetic field .we will write all formulas for hydrogen since their generalization for is straightforward . in the absence of magnetic field the spatial size of the wave function of the ground state atomic electronis characterized by the bohr radius , its energy equals , where is the rydberg constant .the transverse ( with respect to ) size of the ground state of the electron wave function in an external magnetic field is characterized by the landau radius .the larmour frequency of the electron precession is . for a magnetic field gauss called `` atomic magnetic field '' , these sizes and energies are close to each other : , .we wish to study the spectrum of the hydrogen atom in magnetic fields much larger than . in this casethe motion of the electron is mainly controlled by the magnetic field : it makes many oscillations in this field before it makes one in the coulomb field of the nucleus .this is the condition for applicability of the adiabatic approximation , used for this problem for the first time in .the spectrum of a dirac electron in a pure magnetic field is well known ; it admits a continuum of energy levels due to the free motion along the field : where ; is the spin projection of the electron on axis multiplied by two . for magnetic fields larger than ,the electrons are relativistic with only one exception : electrons belonging to the lowest landau level ( lll , , ) can be non - relativistic . 
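the dirac spectrum in a constant magnetic field referred to above is standard; presumably the stripped display contained the usual relativistic landau levels, which we restate here for completeness (our notation: n = 0, 1, 2, ... is the level number, sigma_z = +/-1 is twice the spin projection on the field axis, p_z the momentum along b):

```latex
\[
  \varepsilon_{n,\sigma_z}^{2}(p_z)\;=\;m_e^{2}+p_z^{2}+(2n+1+\sigma_z)\,eB ,
  \qquad n=0,1,2,\dots,\quad \sigma_z=\pm 1 ,
\]
% so the lowest landau level (n = 0, sigma_z = -1) has
% epsilon^2 = m_e^2 + p_z^2 and remains non-relativistic for p_z << m_e,
% while all excited levels sit at energies of order sqrt(eB) when B >> m_e^2/e.
```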
inwhat follows we will study the spectrum of electrons from lll in the coulomb field of the proton modified by the superstrong .the solution can be found in of the schrdinger equation for an electron in a constant in time homogeneous magnetic field in the gauge in which in cylindrical coordinates ( ) .the electron energies are : where is the number of nodal surfaces , is the electron orbital momentum projection on the axis ( direction of the magnetic field ) and . according to ,the lll wave functions are : ^{-1/2 } \rho^{|m|}e^{\left(im\varphi - \rho^2/(4a_h^2)\right ) } \;\ ; , \label{17}\ ] ] we should now take into account the electric potential of the atomic nucleus located at . for the adiabatic approximation can be used and the wave function should be looked for in the following form : where is the solution of a schrdinger equation for an electron motion along the direction of the magnetic field : \chi_n(z ) = e_n \chi_n(z ) \;\ ; . \label{19}\ ] ] without screening the effective potential is given by the following formula : which becomes the coulomb potential for and is regular at to take screening into account we must use ( [ 11 ] ) to modify ( [ 20 ] ) ( see below ) . since ,the wave functions are odd or even under reflection ; the ground states ( for ) are described by even wave functions . in fig .5 the different scales important in the consideration of the hydrogen atom in strong magnetic field are shown ._ landau radius versus magnetic field ._ to calculate the ground state of hydrogen atom in the shallow - well approximation is used : ^ 2 = -(m_e e^4/2)ln^2(b/(m_e^2e^3 ) ) \label{27}\ ] ] let us derive this formula .the starting point is the one - dimensional schrdinger equation : neglecting in comparison with and integrating ( [ 28 ] ) we get : where we assume , that is why is even .the next assumptions are : 1 .the finite range of the potential energy : for ; 2 . undergoes very small variations inside the well . since outside the well , we readily obtain : ^ 2 \;\ ; . \label{30}\ ] ] for ( condition for the potential to form a shallow well which means that the absolute value of the energy of ground level is much smaller than the absolute value of the potential in the well ) we get that , indeed , and that the variation of inside the well is small , .concerning the one - dimensional coulomb potential , it satisfies this condition only for .this explains why the accuracy of formula ( [ 27 ] ) is very poor .much more accurate equation for atomic energies in strong magnetic field was derived by b.m.karnakov and v.s.popov .it provides a several percent accuracy for the energies of even states for ( ) .main idea is to integrate shrdinger equation with effective potential from till , where and to equate obtained expression for to the logarithmic derivative of whittaker function - the solution of shrdinger equation with coulomb potential , which exponentially decreases at . in this way in following equation was obtained : where is the logarithmic derivative of the gamma function .the energies of the odd states are : so , for superstrong magnetic fields the deviations of odd states energies from the balmer series are negligible . from ( [ 32 ] ) we get an equation for : where has simple poles at .so to reproduce large number at left hand side should be large ( ground level ) or follow balmer series ( excited levels ) . 
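the logarithmic structure of the ground level is easy to check numerically. in the sketch below the exact averaged potential of eq. (20) is replaced by the commonly used regularisation v(z) = -1/(|z| + a_h) in atomic units (hbar = m_e = e = 1; this replacement is our simplification, not the potential of the text), the lowest eigenvalue of the one-dimensional hamiltonian is obtained by finite-difference diagonalisation, and the result is compared with the shallow-well estimate e ~ -(1/2) ln^2(b/b_a).

```python
# ground level of the 1d motion along B in the adiabatic approximation, with
# the regularised coulomb potential -1/(|z| + a_H) as a stand-in for eq.(20).
# atomic units; a_H = (B/B_a)^(-1/2) is the landau radius in bohr units.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def ground_energy(b_over_ba, L=20.0, npts=20001):
    a_h = b_over_ba ** -0.5
    z = np.linspace(-L, L, npts)
    h = z[1] - z[0]
    v = -1.0 / (np.abs(z) + a_h)             # regularised 1d coulomb (assumption)
    diag = 1.0 / h**2 + v                    # -(1/2) chi'' + v chi = E chi
    off = np.full(npts - 1, -0.5 / h**2)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

for b in (10.0, 100.0, 1000.0):
    print(b, ground_energy(b), -0.5 * np.log(b) ** 2)
```

the numerical eigenvalue comes out substantially shallower than the shallow-well estimate, consistent with the remark above that the accuracy of formula (27) is very poor; the karnakov-popov equation is the quantitatively reliable refinement.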
when screening is taken into account an expression for effective potential transforms into \;\ ; \label{35}\ ] ] screening modifies the coulomb potential at the distances and since at these distances , the approach leading to ( [ 32 ] ) still works . the modified karnakov - popov equation , which takes screening into account looks like : we see that at freezing of the energies occur : left hand side of ( [ 37 ] ) approach constant when further grows . in particular , for a ground state at we obtain : , kev . energy levels on which lll is splitted in the hydrogen atom at are shown on fig ._ spectrum of hydrogen levels in the limit .energies are given in rydberg units , .in the previous section the spectrum of energies on which the lowest landau level ( lll ) splits in the proton electric field was found by solving the corresponding schrdinger equation . since the ground state energy of hydrogen in the limit of infinite equals kev ,the use of the nonrelativistic schrdinger equation is at least selfconsistent .however , the size of the electron wave function for in the direction transverse to the magnetic field is much smaller than the electron compton wavelength , , which makes the nonrelativistic approach a bit suspicious ( for ) . that is why in this section we will study the ground state energy of the electron in a hydrogen - like ion in the presence of an external magnetic field by analyzing the dirac equation . without taking screening into account this problemwas considered in paper ( see also , were results obtained in were reproduced ) , soon after it was found that a hydrogen - like ion becomes critical at : the electron ground level sinks into the lower continuum ( ) and the vacuum becomes unstable by spontaneous pairs production .these results were obtained by solving the dirac equation for an electron moving in the field of a nucleus of finite radius . that the phenomenon of criticality can be studied only in the framework of the dirac equationis an additional motivation for us to go from schrdinger to dirac . from the numerical solution of the dirac equation for the ground electron level of a hydrogen atom in the coulomb potential we will find that the corrections to the nonrelativistic results are small and that the estimate works well .let us parametrize bispinor which describes electron wave function in the following way : substituting in the dirac equation for the electron in an external electromagnetic field we obtain : taking vector potential which describes constant magnetic field directed along axis in the form , we get : where , .analogously we obtain : substituting two last expressions in the dirac equation we get : axial symmetry of electromagnetic field allows to determine dependence of the functions and : where , ... is the projection of electron angular momentum on axis . substituting ( [ 106 ] ) in ( [ 105 ] ) we get four linear equations for four unknown functions and ( here and below , ... 
): where , , ...ground energy state has , .taking we should look for solution of ( [ 107 ] ) with : the dependence on is determined by ( [ 108 ] ) : substituting the last expressions in ( [ 109 ] ) and averaging over fast motion in transverse to the magnetic field plane we obtain two first order differential equations which describes electron motion along magnetic field in an effective potential : at large distances the effective potential equals coulomb , and the solutions of the equations ( [ 111 ] ) exponentially decreasing at are linear combinations of the whittaker functions . at short distances the equations ( [ 111 ] ) can be easily integrated for ( as far as condition for will be for sure valid for , which is equivalent to the following inequality : ) , where they looks like : the result of the integration is : where and , are normalization constants .the functions and have opposite parities ; for the ground state should be even , so , and matching logarithmic derivatives at the point we obtain : substituting proper combination of the whittaker functions for we obtain an algebraic equation for the ground state energy ( it coincides with eq .( 22 ) in in the limit , where is the nucleus radius ) : where is the euler constant , and the argument of the gamma function is given by for the ground level at one should take , while for it should be changed to .according to ( [ 114 ] ) when the magnetic field increases the ground state energy goes down and reaches the lower continuum .a matching point exists only if ( see ( [ 113 ] ) ) and ( [ 114 ] ) is valid only for these values of the magnetic field .thus , without taking screening into account , from ( [ 114 ] ) we can obtain the dependence of the ground state energy of a hydrogen atom on the magnetic field for .screening modifies the coulomb potential at distances smaller than the electron compton wave length , and from the condition we get .it means that at the phenomena of screening does not allow to find analytically the ground state energy . in order to find the ground state energy at and to take screening into account the equations ( [ 111 ] ) were solved numerically .this system can be transformed into one second order differential equation for . by substituting a schrdinger - like equation for the function obtained in : where is the energy eigenvalue of the dirac equation and is given in ( [ 112 ] ) . an equation ( [ 116 ] ) was integrated numerically .let us note that , while for the last three terms in the expression for are much smaller than the first one ( the only one remaining in the nonrelativistic approximation ) , at the relativistic terms dominate and are very big for at which makes numerical calculations very complicated ..[tab:1]values of for without screening obtained from the schrdinger and dirac equations .they start to differ substantially at enormous values of the magnetic field . [ cols="^,^,^,^,^ " , ] textbooks contain detailed consideration of the phenomenon of critical charge .an analytical formula for the coulomb potential in a superstrong magnetic field has been derived .it reproduces the results of the numerical calculations made in with good accuracy . 
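the freezing of the ground level discussed in the previous section can be illustrated with a deliberately crude cartoon. in the shallow-well language the binding is controlled by the logarithm of the short-distance cutoff of the effective one-dimensional potential; without screening that cutoff is the landau radius and keeps shrinking with b, while with screening it effectively stops shrinking. the snippet below simply floors the regularisation length at an arbitrary illustrative value z_scr; this is not the modified karnakov-popov equation (37), only a qualitative caricature of fig. 6.

```python
# cartoon of the "freezing" of the ground level.  shallow-well estimate
# E = -(1/2) (integral |V| dz)^2 in atomic units, with V = -1/(|z|+a) cut off
# at |z| ~ a_B = 1.  without screening a = a_H shrinks with B and |E| grows
# like ln^2 B; flooring a at an arbitrary z_scr makes the energy saturate.
import numpy as np

def shallow_well_energy(b_over_ba, z_scr=None):
    a = b_over_ba ** -0.5
    if z_scr is not None:
        a = max(a, z_scr)                     # crude proxy for screening
    integral = 2.0 * np.log(1.0 / a)          # int_{-1}^{1} dz/(|z|+a), a << 1
    return -0.5 * integral ** 2

for b in (1e2, 1e4, 1e6, 1e8):
    print(b, shallow_well_energy(b), shallow_well_energy(b, z_scr=1e-3))
```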
using it , an algebraic formula for the energy spectrum of the levels of a hydrogen atom originating from the lowest landau level in a superstrong has been obtained .the energies start to deviate from those obtained without taking the screening of the coulomb potential into account at gauss and the energy of ground state in the limit remains finite .a magnetic field plays a double role in the critical charge phenomenon . by squeezing the electron wave function and putting it in the domain of a stronger coulomb potential it diminishes the value of the critical charge substantially .however , for nuclei with to become critical such a strong is needed that the screening of the coulomb potential occurs and acts in the opposite direction : the electron ground state energy freezes and the nucleus remains subcritical in spite of growing . 99 v.b .berestetskii , e.m .lifshitz , and l.p .pitaevsky , quantum electrodynamics , theoretical physics , v. iv , m. : fizmatlit , 2001 , p. 562 .shabad , and v.v .usov , phys .* 98 * ( 2007 ) 180403 ; phys . rev .* d77 * ( 2008 ) 025001 .vysotsky , pisma v zhetf * 92 * ( 2010 ) 22 [ jetp lett .92 ( 2010 ) 16 ] .b. machet and m.i .vysotsky , phys .d * 83 * , 025022 ( 2011 ) .godunov , b. machet , and m.i .vysotsky , phys .d * 85 * , 044058 ( 2012 ) .j. schwinger , phys . rev . * 128 * ( 1962 ) 2425 .akhiezer , and v.b .berestetskii , quantum electrodynamics m. : nauka , 1981 , appendix * a2 * , p. 422 .j. schwinger , phys .* 82 * ( 1951 ) 664 .g. calucci and r. ragazzon , j. phys .a * 27 * ( 1994 ) 2161 .skobelev , izvestiya vysshikh uchebnykh zavedenii , fizika * 10 * ( 1975 ) 142 [ russian physics journal , v.18 , n.10 ( 1975 ) 1481 ] ; + yu.m .loskutov , and v.v .skobelev , phys . lett .* a56 * ( 1976 ) 151 .a. chodos , k. everding , and d.a .owen , phys .d * 42 * ( 1990 ) 2881 ; + v.p .gusynin , and a.v .smilga , phys .b * 450 * ( 1999 ) 267 . v.b .berestetskii , proceedings of liyaf winter school * 9 , part 3 * , 95 ( 1974 ) .peskin , and d.v .schroeder , an introduction to quantum field theory , addison - wesley , 1995 .shiff , and h. snyder , phys* 55 * ( 1939 ) 59 .berestetskii , e.m .lifshitz , and l.p .pitaevsky , quantum electrodynamics , theoretical physics , v. iv , m. : fizmatlit , 2001 , p. 147 .landau , and e.m .lifshitz , quantum mechanics , theoretical physics , v. iii , m : fizmatlit , 2001 , p. 556, problem 1 .landau , and e.m .lifshitz , quantum mechanics , theoretical physics , v. iii , m : fizmatlit , 2001 , p. 558, problem 3 .karnakov , and v.s .popov , zhetf * 124 * ( 2003 ) 996 [ j. exp .physics * 97 * ( 2003 ) 890 ] ; zhetf * 141 * ( 2012 ) 5 .oraevskii , a.i .rez , and v.b .semikoz , zh .* 72 * ( 1977 ) 820 [ sov .jetp * 45 * ( 1977 ) 428 ] .p. schlter , g. soff , k .- h .wietschorke , and w. greiner , j. phys .b : at . mol .phys . * 18 * ( 1985 ) 1685 .popov , pisma zh .* 11 * ( 1970 ) 254 ; + v.s .popov , zh .* 59 * ( 1970 ) 965 ; + ya.b .zeldovich , and v.s .popov , ufn * 105 * ( 1971 ) 403 ; + v.s .popov , yad .* 14 * ( 1971 ) 458 .w. greiner , and j. reinhardt , `` quantum electrodynamics '' springer - verlag ( 1992 ) berlin , heidelberg ; + w. greiner , b. mller , and j. rafelski , `` quantum electrodynamics of strong fields '' springer - verlag ( 1985 ) berlin , heidelberg . | the expanded variant of the lectures delivered at the 39th itep winter school in 2011 . |
in the public goods game , each player in the population chooses whether to contribute ( ) or not to contribute ( ) to the common pool . given a fixed _ rate of return _ , the resulting payoff of player is then .we shall call the game s _ marginal per - capita rate of return _ and denote it as . note that for simplicity , but without loss of generality , we have assumed that the group is the whole population . in the absence of restrictions on the interaction range of players , i.e. , in well - mixed populations , the size of the groups and their formation can be shown to be of no relevance in our case , as long as rather than is considered as the effective rate of return .the directional learning dynamics is implemented as follows .suppose the above game is infinitely repeated at time steps , and suppose further that , at time , plays with probability ] restriction .we begin with a formal definition of the _ equilibrium_. in particular , a pure strategy imputation is a -strong equilibrium of our ( symmetric ) public goods game if , for all with , for all for any alternative pure strategy set for . as noted in the previous section ,this definition bridges , one the one hand , the concept of the nash equilibrium in pure strategies in the sense that any equilibrium with is also a nash equilibrium , and , on the other hand , that of the ( aumann-)strong equilibrium in the sense that any equilibrium with is aumann strong .equilibria in between ( for ) are `` more stable '' than a nash equilibrium , but `` less stable '' than an aumann - strong equilibrium .the maximal -strengths of the equilibria in our public goods game as a function of are depicted in fig .[ ana ] for .the cyan - shaded region indicates the `` public bad game '' region for ( ) , where the individual and the public motives in terms of the nash equilibrium of the game are aligned towards defection . here for all is the unique aumann - strong equilibrium , or in terms of the definition of the equilibrium , for all is for all ] .if we add perturbations of order to the unperturbed public goods game with directional learning that we have introduced in section 2 , there exist stationary distributions of and the following proposition can be proven . in the following ,we denote by `` '' the maximal of an equilibrium .proposition : : : as , starting at any , the expectation with respect to the stationary distribution is > 1/2 ] if ./\partial \epsilon<0 ] if . moreover , /\partial \delta>0 ] if .finally , / \partial k<0 ] if , and \rightarrow 0 $ ] if .lastly , we simulate the perturbed public goods game with directional learning and determine the actual average contribution levels in the stationary state .color encoded results in dependence on the normalized rate of return and the responsiveness of players to the success of their past actions ( alternatively , the sensitivity of the individual learning process ) are presented in fig .[ simul ] for .small values of lead to a close convergence to the respective nash equilibrium of the game , regardless of the value of . as the value of increases , the pure nash equilibria erode and give way to a mixed outcome .it is important to emphasize that this is in agreement , or rather , this is in fact a consequence of the low of the non - contribution pure equilibria ( see fig [ ana ] ) . within intermediate tolarge values the nash equilibria are implemented in a zonal rather than pinpoint way . 
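a minimal, self-contained implementation of the dynamics may be useful as a reference. the payoff is taken to be the standard linear public-goods form pi_i = m * sum_j c_j - c_i (our reading of the stripped display, with the contribution cost normalised to one), and the order-epsilon perturbation is realised as a random +/- delta step taken with probability epsilon, one simple choice among several that the text leaves unspecified.

```python
# public goods game with directional learning: each player carries a mixed
# strategy p_i in [0,1]; after each round p_i moves by +/- delta in the
# direction suggested by the player's own (action change, payoff change)
# pair, and stays put if either did not change (the "neutral" rule).  with
# probability eps the step is instead taken in a random direction.
import numpy as np

rng = np.random.default_rng(3)

def simulate(N=20, m=0.4, delta=0.05, eps=0.01, T=100000):
    p = np.zeros(N)                       # start at the all-defect nash equilibrium
    c_prev = np.zeros(N)
    pi_prev = np.zeros(N)
    contrib = 0.0
    for t in range(T):
        c = (rng.random(N) < p).astype(float)
        pi = m * c.sum() - c              # assumed linear public-goods payoff
        dc, dpi = c - c_prev, pi - pi_prev
        step = delta * np.sign(dc * dpi)  # reinforce successful own changes
        noise = rng.random(N) < eps
        step[noise] = delta * rng.choice([-1.0, 1.0], size=noise.sum())
        p = np.clip(p + step, 0.0, 1.0)
        c_prev, pi_prev = c, pi
        if t >= T // 2:
            contrib += c.mean()
    return contrib / (T - T // 2)

for m in (0.2, 0.4, 0.8, 1.5):
    print(m, simulate(m=m))
```

with these (illustrative) parameters the long-run average contribution should increase with m and approach full contribution for m > 1, mirroring the dependence shown in fig. [ simul ].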
when the nash equilibrium is such that all players contribute ( ) , then small values of lead to more efficient aggregate play ( recall any such equilibriumis ) .conversely , by the same logic , when the nash equilibrium is characterized by universal free - riding , then larger values of lead to more efficient aggregate play .moreover , the precision of implementation also depends on the rate of return in the sense that uncoordinated deviations of groups of players lead to more efficient outcomes the higher the rate of return .in other words , the free - riding problem is mitigated if group deviations lead to higher payoffs for every member of an uncoordinated deviation group , the minimum size of which ( that in turn is related to the maximal strength of equilibrium ) is decreasing with the rate of return .simulations also confirm that the evolutionary outcome is qualitatively invariant to : i ) the value of as long as the latter is bounded away from zero , although longer convergence times are an inevitable consequence of very small values ( see fig . [ eps ] ) ; ii ) the replication of the population ( i.e. , making the whole population a group ) and the random remixing between groups ; and iii ) the population size , although here again the convergence times are the shorter the smaller the population size .while both ii and iii are a direct consequence of the fact that we have considered the public goods game in a well - mixed rather than a structured population ( where players would have a limited interaction range and where thus pattern formation could play a decisive role ) , the qualitative invariance to the value of is elucidated further in fig .we would like to note that by `` qualitative invariance '' it is meant that , regardless of the value of , the population always diverges away from the nash equilibrium towards a stable mixed stationary state . 
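the maximal strength referred to here is also straightforward to compute by brute force for the linear payoff above. the snippet treats a pure profile as robust against groups of size k if no joint deviation of k players makes every member of the deviating group strictly better off (whether the stripped definition uses strict or weak improvement we cannot tell, so strictness is our assumed convention); for the two symmetric profiles only coalitions in which every member actually switches need to be checked.

```python
# brute-force maximal "strength" of the two symmetric pure profiles of the
# linear public goods game with payoff pi_i = m*sum_j c_j - c_i.
def min_profitable_group_from_all_defect(N, m):
    # j players jointly start contributing; each gets m*j - 1 instead of 0
    for j in range(1, N + 1):
        if m * j - 1 > 0:
            return j
    return None

def min_profitable_group_from_all_contribute(N, m):
    # j players jointly stop contributing; each gets m*(N-j) instead of m*N - 1
    for j in range(1, N + 1):
        if m * (N - j) > m * N - 1:
            return j
    return None

N = 20
for m in (0.1, 0.3, 0.6, 0.9, 1.2):
    jd = min_profitable_group_from_all_defect(N, m)
    jc = min_profitable_group_from_all_contribute(N, m)
    strength_defect = N if jd is None else jd - 1
    strength_contribute = N if jc is None else jc - 1
    print(m, strength_defect, strength_contribute)
```

for the all-defect profile the smallest profitable deviating group has size min{ j : j*m > 1 }, so its maximal strength grows as m decreases; this is the minimum deviation-group size mentioned above, which is decreasing in the rate of return.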
but as can be observed in fig .[ eps ] , the average contribution level and its variance both increase slightly as increases .this is reasonable if one considers as an exploration or mutation rate .more precisely , it can be observed that , the lower the value of , the longer it takes for the population to move away from the nash equilibrium where everybody contributes zero in the case that ( which was also the initial condition for clarity ) .however , as soon as initial deviations ( from in this case ) emerge ( with probability proportional to , the neutral rule in the original learning dynamics takes over , and this drives the population towards a stable mixed stationary state .importantly , even if the value of is extremely small , the random drift sooner or later gains momentum and eventually yields similar contribution levels as those attainable with larger values of .most importantly , note that there is a discontinuous jump towards staying in the nash equilibrium , which occurs only if is exactly zero .if is bounded away from zero , then the free - riding nash equilibrium erodes unless it is ( for very low values of ) .we have introduced a public goods game with directional learning , and we have studied how the level of contributions to the common pool depends on the rate of return and the responsiveness of individuals to the successes and failures of their own past actions .we have shown that directional learning alone suffices to explain deviations from the nash equilibrium in the stationary state of the public goods game .even though players have no strategically relevant information about the game and/ or about each others actions , the population could still end up in a mixed stationary state where some players contributed at least part of the time although the nash equilibrium would be full free - riding .vice versa , defectors emerged where cooperation was clearly the best strategy to play .we have explained these evolutionary outcomes by introducing the concept of equilibria , which bridge the gap between nash and aumann - strong equilibria .we have demonstrated that the lower the maximal and the higher the responsiveness of individuals to the consequences of their own past strategy choices , the more likely it is for the population to ( mis)learn what is the objectively optimal unilateral ( nash ) play .these results have some rather exciting implications .foremost , the fact that the provisioning of public goods even under adverse conditions can be explained without any sophisticated and often lengthy arguments involving selflessness or social preference holds promise of significant simplifications of the rationale behind seemingly irrational individual behavior in sizable groups .it is simply enough for a critical number ( depending on the size of the group and the rate of return ) of individuals to make a `` wrong choice '' at the same time once , and if only the learning process is sufficiently fast or naive , the whole subpopulation is likely to adopt this wrong choice as their own at least part of the time . 
in many real - world situations , where the rationality of decision making is often compromised due to stress , propaganda or peer pressure , such `` wrong choices ''are likely to proliferate .as we have shown in the context of public goods games , sometimes this means more prosocial behavior , but it can also mean more free - riding , depending only on the rate of return .the power of directional ( mis)learning to stabilize unilaterally suboptimal game play of course takes nothing away from the more traditional and established explanations , but it does bring to the table an interesting option that might be appealing in many real - life situations , also those that extend beyond the provisioning of public goods . fashion trends or viral tweets and videos might all share a component of directional learning before acquiring mainstream success and recognition .we hope that our study will be inspirational for further research in this direction .the consideration of directional learning in structured populations , for example , appears to be a particularly exciting future venture .for the characterization of the stationary states , we introduce the concept of equilibria , which nests both the nash equilibrium and the aumann - strong equilibrium as two special cases .while the nash equilibrium describes the robustness of an outcome against unilateral ( -person ) deviations , the aumann - strong equilibrium describes the robustness of an outcome against the deviations of any subgroup of the population .an equilibrium is said to be ( aumann-)strong if it is robust against deviations of the whole population or indeed of any conceivable subgroup of the population , which is indeed rare .our definition of the equilibrium bridges the two extreme cases , measuring the size of the group ( at or above nash ) and hence the degree to which an equilibrium is stable .we note that our concept is related to coalition - proof equilibrium . in the public goods game , the free - riding nash equilibrium is typically also more than but never .as we will show , the maximal strength of an equilibrium translates directly to the level of contributions in the stationary distribution of our process , which is additionally determined by the normalized rate of return and the responsiveness of players to the success of their past actions , i.e. , the sensitivity of the individual learning process .this research was supported by the european commission through the erc advanced investigator grant ` momentum ' ( grant 324247 ) , by the slovenian research agency ( grant p5 - 0027 ) , and by the deanship of scientific research , king abdulaziz university ( grant 76 - 130 - 35-hici ) .ledyard , j. o. public goods : a survey of experimental research . in the handbook of experimental economics , kagel , j. h. and roth , a. e. , editors , 111194 .princeton university press , princeton , nj ( 1997 ) . | we consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not . however , players lack strategically relevant information about the game and about the other players in the population . the resulting behavior of players is completely uncoupled from such information , and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player s own past . we show that the resulting `` directional learning '' is sufficient to explain cooperative deviations away from the nash equilibrium . 
we introduce the concept of equilibria , which nest both the nash equilibrium and the aumann - strong equilibrium as two special cases , and we show that , together with the parameters of the learning model , the maximal of equilibrium determines the stationary distribution . the provisioning of public goods can be secured even under adverse conditions , as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly . substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences , solely on the basis of uncoordinated directional ( mis)learning . cooperation in sizable groups has been identified as one of the pillars of our remarkable evolutionary success . while between - group conflicts and the necessity for alloparental care are often cited as the likely sources of the other - regarding abilities of the genus _ homo _ , it is still debated what made us the `` supercooperators '' that we are today . research in the realm of evolutionary game theory has identified a number of different mechanisms by means of which cooperation might be promoted , ranging from different types of reciprocity and group selection to positive interactions , risk of collective failure , and static network structure . the public goods game , in particular , is established as an archetypical context that succinctly captures the social dilemma that may result from a conflict between group interest and individual interests . in its simplest form , the game requires that players decide whether to contribute to a common pool or not . regardless of the chosen strategy by the player himself , he receives an equal share of the public good which results from total contributions being multiplied by a fixed rate of return . for typical rates of return it is the case that , while the individual temptation is to free - ride on the contributions of the other players , it is in the interest of the collective for everyone to contribute . without additional mechanisms such as punishment , contribution decisions in such situations approach the free - riding nash equilibrium over time and thus lead to a `` tragedy of the commons '' . nevertheless , there is rich experimental evidence that the contributions are sensitive to the rate of return and positive interactions , and there is evidence in favor of the fact that social preferences and beliefs about other players decisions are at the heart of individual decisions in public goods environments . in this paper , however , we shall consider an environment where players have no strategically relevant information about the game and/ or about other players , and hence explanations in terms of social preferences and beliefs are not germane . instead , we shall propose a simple learning model , where players may mutually reinforce learning off the equilibrium path . as we will show , this phenomenon provides an alternative and simple explanation for why contributions rise with the rate of return , as well as why , even under adverse conditions , public cooperation may still prevail . previous explanations of this experimental regularity are based on individual - level costs of ` error ' . suppose each player knows neither who the other players are , nor what they earn , nor how many there are , nor what they do , nor what they did , nor what the rate of return of the underlying public goods game is . 
players do not even know whether the underlying rate of return stays constant over time ( even though in reality it does ) because their own payoffs are changing due to the strategy adjustments of other players , about which they have no information . without any such knowledge , players are unable to determine ex ante whether contributing or not contributing is the better strategy in any given period , i.e. , players have no strategically relevant information about how to respond best . as a result , the behavior of players has to be _ completely uncoupled _ , and their strategy adjustment dynamics are likely to follow a form of _ reinforcement _ feedback or , as we shall call it , _ directional learning _ . we note that , in our model , due to the one - dimensionality of the strategy space , reinforcement and directional learning are both adequate terminologies for our learning model . since reinforcement applies also to general strategy spaces and is therefore more general we will prefer the terminology of directional learning . indeed , such directional learning behavior has been observed in recent public goods experiments . the important question is how _ well _ will the population learn to play the public goods game despite the lack of strategically relevant information . note that _ well _ here has two meanings due to the conflict between private and collective interests : on the one hand , how close will the population get to playing the nash equilibrium , and , on the other hand , how close will the population get to playing the socially desirable outcome . the learning model considered in this paper is based on a particularly simple `` directional learning '' algorithm which we shall now explain . suppose each player plays both cooperation ( contributing to the common pool ) and defection ( not contributing ) with a mixed strategy and updates the weights for the two strategies based on their relative performances in previous rounds of the game . in particular , a player will increase its weight on contributing if a previous - round switch from not contributing to contributing led to a higher realized payoff or if a previous - round switch from contributing to not contributing led to a lower realized payoff . similarly , a player will decrease its weight on contributing if a previous - round switch from contributing to not contributing led to a higher realized payoff or if a previous - round switch from not contributing to contributing led to a lower realized payoff . for simplicity , we assume that players make these adjustments at a fixed incremental step size , even though this could easily be generalized . in essence , each player adjusts its mixed strategy directionally depending on a markovian performance assessment of whether a previous - round contribution increase / decrease led to a higher / lower payoff . since the mixed strategy weights represent a well - ordered strategy set , the resulting model is related to the directional learning/ aspiration adjustment models , and similar models have previously been proposed for bid adjustments in assignment games , as well as in two - player games . in the dynamic leads to stable cooperative outcomes that maximize total payoffs , while nash equilibria are reached in . 
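as a minimal illustration, the directional update rule described above can be written in a few lines: the probability of contributing moves up by a fixed step whenever the last change of action raised the realized payoff, and down whenever it lowered it. the step size, the initial weight and the treatment of ties below are our own illustrative choices, not the authors' exact specification.

```python
import random

# A minimal sketch of the directional-learning update described above.
# The step size delta and the initial weight are illustrative assumptions;
# the paper only states that adjustments happen at a fixed incremental step.

def update_weight(p, prev_action, action, prev_payoff, payoff, delta=0.1):
    """p is the current probability of contributing (the mixed-strategy weight).
    Reinforce the direction of the last change of action if it raised the realized
    payoff, reverse it if it lowered the payoff; do nothing if the action repeated.
    Ties are treated here as 'no improvement'."""
    if action == prev_action:
        return p
    switched_to_contributing = (prev_action == 0 and action == 1)
    improved = payoff > prev_payoff
    if switched_to_contributing == improved:
        p = p + delta   # switching toward contributing paid off, or switching away hurt
    else:
        p = p - delta   # switching toward contributing hurt, or switching away paid off
    return min(max(p, 0.0), 1.0)

# one player drawing an action from the mixed strategy
p = 0.5
prev_action, prev_payoff = 0, 0.0
action = 1 if random.random() < p else 0
payoff = 0.3            # the realized payoff would come from the whole group's play
p = update_weight(p, prev_action, action, prev_payoff, payoff)
```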
the crucial difference between these previous studies and our present study is that our model involves more than two players in a voluntary contributions setting , and , as a result , that there can be interdependent directional adjustments of groups of players including more than one but not all the players . this can lead to uncoordinated ( mis)learning of subpopulations in the game . consider the following example . suppose all players in a large standard public goods game do not contribute to start with . then suppose that a player in a subpopulation uncoordinatedly but by chance simultaneously decide to contribute . if this group is sufficiently large ( the size of which depends on the rate of return ) , then this will result in higher payoffs for all players including the contributors , despite the fact that not contributing is the dominant strategy in terms of unilateral replies . in our model , if indeed this generates higher payoffs for all players including the freshly - turned contributors , then the freshly - turned contributors would continue to increase their probability to contribute and thus increase the probability to trigger a form of stampede or herding effect , which may thus lead away from the nash equilibrium and towards a socially more beneficial outcome . our model of uncoordinated but mutually reinforcing deviations away from nash provides an alternative explanation for the following regularity that has been noted in experiments on public goods provision . namely , aggregate contribution levels are higher the higher the rate of return , despite the fact that the nash equilibrium remains unchanged ( at no - contribution ) . this regularity has previously been explained only at an individual level , namely that ` errors ' are less costly and therefore more likely the higher the rate of return , following quantal - response equilibrium arguments . by contrast , we provide a group - dynamic argument . note that the alternative explanation in terms of individual costs is not germane in our setting , because we have assumed that players have no information to make such assessments . it is in this sense that our explanation perfectly complements the explanation in terms of costs . in what follows , we present the results , where we first set up the model and then deliver our main conclusions . we discuss the implications of our results in section 3 . further details about the applied methodology are provided in the methods section . |
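the group-deviation mechanism in the example above can also be quantified with the linear public goods payoff: when k of n players jointly switch from free-riding to contributing one unit, each deviator's payoff changes by k·r/n − 1, so the stampede can only start once the uncoordinated group exceeds n/r members. the values of n and r below are illustrative.

```python
import math

# Minimal sketch: the smallest group of simultaneous deviators whose joint switch
# from free-riding to contributing raises their own realized payoffs.
# n and r are illustrative values; the unit contribution cost is normalized to 1.

def deviator_gain(k, n, r):
    """Payoff change for each of k players who jointly start contributing
    while the remaining n - k players keep free-riding."""
    return k * r / n - 1.0

def smallest_profitable_group(n, r):
    return math.floor(n / r) + 1    # smallest k with k * r / n > 1

n = 20
for r in (1.2, 2.0, 4.0):
    k = smallest_profitable_group(n, r)
    print(r, k, round(deviator_gain(k, n, r), 3))
# higher rates of return make smaller uncoordinated groups profitable, which is
# consistent with aggregate contributions rising in the rate of return.
```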
modern wireless communication systems , e.g. , cellular networks and wireless sensor networks ( wsns ) , are featured by larger bandwidth , higher data rate and lower communication delays . the improvement on communication quality and the increased data processing complexity have imposed higher requirement on the quality of power supply to wireless devices ( wds ) .conventionally , wds are powered by batteries , which have to be replaced / recharged manually once the energy is depleted .alternatively , the recent advance of radio frequency ( rf ) enabled wireless power transfer ( wpt ) provides an attractive solution to power wds over the air .by leveraging the far - field radiative properties of microwave , wds can harvest energy remotely from the rf signals radiated by the dedicated energy nodes ( ens ) .compared to the conventional battery - powered methods , wpt can save the cost due to manual battery replacement / recharging in many applications , and also improve the network performance by reducing energy outages of wds .currently , tens of microwatts ( ) rf power can be effectively transferred to a distance of more than meters .meters is about . ]the energy is sufficient to power the activities of many low - power communication devices , such as sensors and rf identification ( rfid ) tags . in the future, we expect more practical applications of rf - enabled wpt to wireless communications thanks to the rapid developments of many performance enhancing technologies , such as energy beamforming with multiple antennas and more efficient energy harvesting circuit designs . in a wireless powered communication network ( wpcn ) , the operations of wds , including data transmissions , are fully / partially powered by means of rf - enabled wpt . a tdma ( time division multiple access ) based protocol for wpcn is first proposed in , where the wds harvest rf energy broadcasted from a hybrid access point ( hap ) in the first time slot , and then use the harvested energy to transmit data back to the hap in the second time slot .later , extends the single - antenna hap in to a multi - antenna hap that enables more efficient energy transmission via energy beamforming as well as more spectrally efficient sdma ( space division multiple access ) based information transmission as compared to tdma . to further improve the spectral efficiency, considers using full - duplex hap in wpcns , where a hap can transmit energy and receive user data simultaneously via advanced self - interference cancelation techniques . intuitively , using a hap ( or co - located en and information ap ) , instead of two separated en and information access point ( ap ) , to provide information andenergy access is an economic way to save deployment cost , and the energy and information transmissions in the network can also be more efficiently coordinated by the hap .however , using hap has an inherent drawback that it may lead to a severe doubly - near - far " problem due to distance - dependent power loss .that is , the far - away users quickly deplete their batteries because they harvest less energy in the downlink ( dl ) but consume more energy in the uplink ( ul ) for information transmission . to tackle this problem , separately located ens and apsare considered to more flexibly balance the energy and information transmissions in wpcns . 
in this paper , we consider the method using either co - located or separate en and information ap to build a wpcn .most of the existing studies on wpcns focus on optimizing real - time resource allocation , e.g. , transmit signal power , waveforms and time slot lengths , based on instantaneous channel state information ( csi , e.g. , ) . in this paper, we are interested in the long - term network performance optimization based mainly on the average channel gains .it is worth mentioning that network optimizations in the two different time scales are complementary to each other in practice .that is , we use long - term performance optimization methods for the initial stage of network planning and deployment , while using short - term optimization methods for real - time network operations after the deployment .many current works on wpcns use stochastic models to study the long - term performance because of the analytical tractability , especially when the wds are mobile in location .for instance , applies a stochastic geometry model in a cellular network to derive the expression of transmission outage probability of wds as a function of the densities of ens and information aps .similar stochastic geometry technique is also applied to wpt - enabled cognitive radio network in to optimize the transmit power and node density for maximum primary / secondary network throughput . however , in many application scenarios , the locations of the wds are fixed , e.g. , a sensor network with sensor ( wd ) locations predetermined by the sensed objects , or an iot ( internet - of - things ) network with static wds . in this case , a practical problem that directly relates to the long - term performance of wpcns , e.g. , sensor s operating lifetime , is to determine the optimal locations of the ens and aps .nonetheless , this important node placement problem in wpcns is still lacking of concrete studies . in conventional battery - powered wireless communication networks ,node placement problem concerns the optimal locations of information aps only , which has been well investigated especially for wireless sensor networks using various geometric , graphical and optimization algorithms ( see e.g. , ) .however , there exist major differences between the node placement problems in battery - powered and wpt - enabled wireless communication networks . on one hand ,a common design objective in battery - powered wireless networks is to minimize the highest transmit power consumption among the wds to satisfy their individual transmission requirements .however , such energy - conservation oriented design is not necessarily optimal for wpcns , because high power consumption of any wd can now be replenished by means of wpt via deploying an en close to the wd . on the other hand , unlike information transmission , wpt will not induce harmful co - channel interference to unintended receivers , but instead can boost their energy harvesting performance .these evident differences indicate that the node placement problem in battery - powered wireless communication networks should be revisited for wpcns , to fully capture the advantages of wpt .in this paper , we study the node placement optimization problem in wpcns , which aims to minimize the deployment cost on ens and aps given that the energy harvesting and communication performances of all the wds are satisfied .our contributions are detailed below . 1 .we formulate the optimal node placement problem in wpcns using either separated or co - located en and ap . 
to simplify the analysis, we then transform the minimum - cost deployment problem into its equivalent form that optimizes the locations of fixed number of ens and aps ; 2 .the node placement optimization using separated en and ap is highly non - convex and hard to solve . to tackle the non - convexity of the problem, we first propose an efficient cluster - based greedy algorithm to optimize the locations of ens given fixed ap locations .then , a trial - and - error based algorithm is proposed to optimize the locations of aps given fixed ens locations .based on the obtained results , we further propose an effective alternating method that jointly optimizes the en and ap placements ; 3 .for the node placement optimization using co - located en and ap ( or hap ) , we extend the greedy en placement method under fixed aps to solving the hap placement optimization , which is achieved by incorporating additional considerations of dynamic wd - hap associations during hap placement .specifically , a trial - and - error method is used to solve the wd - hap association problem , which eventually leads to an efficient greedy hap placement algorithm . due to the non - convexity of the node placement problems in wpcns ,all the proposed algorithms are driven by the consideration of their applicabilities to large - size wpcns , e.g. , consisting of hundreds of wds and en / ap nodes .specifically , we show that the proposed algorithms for either separated or co - located en and ap placement are convergent and of low computational complexity .besides , simulations validate our analysis and show that the proposed methods can effectively reduce the network deployment cost to guarantee the given performance requirements .the proposed algorithms may find their wide application in the future deployment of wpcns , such as wireless sensor networks and iot networks .the rest of the paper is organized as follows . in sectionii , we first introduce the system models of wpcn where the ens and aps are either separated or co - located .then , we formulate the optimal node placement problems for the two cases in section iii , and propose efficient algorithms to solve the problems in sections iv and v , respectively . in section vi , simulations are performed to evaluate the performance of the proposed node placement methods .finally , we conclude the paper and discuss future working directions in section vii .for the case of separated ens and aps , we consider in fig . [61 ] a wpcn in consisting of ens , aps and wds , whose locations are denoted by coordinate vectors , , and , respectively .we assume that the energy and information transmissions are performed on orthogonal frequency bands without interfering with each other .specifically , the ens are connected to stable power source and broadcast rf energy in the dl for the wds to harvest the energy and store in their rechargeable batteries . at the same time , the wds use the battery power to transmit information to the aps in the ul .the circuit structure of a wd to perform the above operations is also shown in fig .[ 61 ] . 
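the harvest-store-transmit cycle just described can be made concrete with a toy battery simulation: in every block the wd adds what it harvested from the ens and subtracts what it spends on its circuit and uplink transmission, so the battery level drifts at the net rate of harvesting minus consumption. all numbers below are illustrative placeholders, not parameters taken from the paper.

```python
# Toy simulation of the harvest-store-transmit cycle described above: in each
# transmission block the WD's battery gains the harvested energy and loses the
# energy spent on its circuit and uplink transmission. All values are
# illustrative placeholders rather than values used in the paper.

def battery_trajectory(b0, harvest_rate, consume_rate, block_length, n_blocks):
    levels, b = [b0], b0
    for _ in range(n_blocks):
        b = max(b + block_length * (harvest_rate - consume_rate), 0.0)
        levels.append(b)
        if b == 0.0:
            break        # battery depleted: the device's operating lifetime ends here
    return levels

near_wd = battery_trajectory(b0=1.0, harvest_rate=2e-4, consume_rate=1e-4,
                             block_length=1.0, n_blocks=10_000)
far_wd = battery_trajectory(b0=1.0, harvest_rate=2e-5, consume_rate=3e-4,
                            block_length=1.0, n_blocks=10_000)
print(len(near_wd) - 1, len(far_wd) - 1)
# the near device never depletes (positive net rate); the far device runs out
# after roughly b0 / (consume_rate - harvest_rate) blocks, which is why the net
# energy harvesting rate is the quantity the placement problem constrains.
```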
in a transmission block of length ,the ens transmit simultaneously on the same bandwidth in the dl , where each en transmits ,\ i=1,\cdots , m.\ ] ] here , denotes the transmit power , denotes the pseudo - random energy signal used by the -th en , which is assumed to be of unit power ( =1 ] if ) .the reason to use random signal instead of a single sinusoid tone is to avoid peak in transmit power spectrum density , for satisfying the equivalent isotropically radiated power ( eirp ) requirement enforced by spectrum regulating authorities .notice that the energy beamforming technique proposed in is not used in our setup , as it requires accurate csi and dl symbol - level synchronization , which may be costly to implement in a highly distributed wpcn network considered in this work .accordingly , the received energy signal by the -th wd is where denotes the equivalent baseband channel coefficient from the -th en to the -th wd , which is assumed to be constant within a transmission block but may vary independently across different blocks .let denote the channel power gain , which follows a general distribution with its mean determined by the distance between the en and wd , i.e. , = \beta ||\mathbf{u}_i -\mathbf{w}_k||^{-d_d } , \ \i=1,\cdots , m , \ k = 1,\cdots k,\ ] ] where denotes the path loss exponent in dl , denotes the -norm operator , and with and denoting the downlink antenna power gain and carrier frequency , respectively . then , each wd can harvest an average amount of energy from the energy transmission within each block given by = \eta t p_0 \left ( \mathsmaller\sum_{i=1}^m h_{i , k}\right ) , \ k= 1,\cdots , k , \end{aligned}\ ] ] where ] denote the average energy harvesting rate over the variation of wireless channels ( s ) in different transmission blocks , we have in the ul information transmissions , we assume that each wd transmits data to only one of the aps . to make the placement problem tractable ,the wd - ap associations are assumed to be fixed , where each wd transmits to its nearest ap regardless of the instantaneous csi , i.e. , here , we assume no co - channel interference for the received user signals from different wds , e.g. , the wds transmit on orthogonal channels .besides , for the simplicity of analysis , we assume no limit on the maximum number of wds that an ap could receive data from . then , the average power consumption rate for wd is modeled as where denotes the constant circuit power of wd , denotes the average transmit power as a function of the distance between wd and its associated ap , and denotes the ul channel path loss exponent . besides , denotes a parameter related to the transmission strategy used in the ul communication . to achieve a target data rate and maximum allowed outage portability for wd , we have when truncated channel inverse transmission is used under rayleigh fading channel , where and denote the uplink antenna gain and carrier frequency , respectively , and denotes the exponential integral function . ] in general , the model in ( [ 11 ] ) indicates that the transmit power increases as a polynomial function of the distance between the transmitter and receiver to satisfy certain communication quality requirement , e.g. , minimum data rate or maximum allowed outage probability , which is widely used for wireless network performance analysis . a special case of the wpcn that we consider in fig . 
[ 61 ] is when the ens and aps are grouped into pairs and each pair of en and ap are co - located and integrated as a hybrid access point ( hap ) , which corresponds to setting and for . with the network model and hap s circuit structure shown in fig . [ 62 ] , a hap transfers rf power in the dl and receives information in the ul simultaneously on different frequency bands .although the use of haps is less flexible in placing the ens and aps than with separated ens and aps , the overall deployment cost is reduced , because the production and operation cost of a hap is in general less than the sum - cost of two separate en and ap . for brevity, we reuse the notation , , to denote the location coordinates of the haps . given other parameters unchanged ,the expression of the average energy harvesting rate of the -th wd is the same as that in ( [ 10 ] ) .meanwhile , the average power consumption rate can be obtained from ( [ 11 ] ) by replacing with as follows . where is the index of the hap that wd associates with , i.e. , with the above definitions , the _ net _ energy harvesting rates of the wds in both cases of separate and co - located en and ap are given by in practice , the net energy harvesting rate can directly translate to the performance of device operating lifetime ( see e.g. , ) .specifically , given an initial battery level , the average time before the -th wd s battery depletes is when , and when .in other words , given a minimum device operating lifetime requirement , it must satisfy if , and if .in this paper , we assume that the locations of the wds are known and study the optimal placement of ens and information aps , which are either separated or co - located in their locations .this may correspond to a sensor network with sensor ( wd ) locations predetermined by the sensed objects , or an iot network with static wds .in particular , we are interested in minimizing the deployment cost given that the net energy harvesting rates of all the wds are larger than a common prescribed value , i.e. , , where is set to achieve a desired device operating lifetime . when the ens and aps are separated , the total deployment cost is if ens and aps are used , where and are the monetary costs of deploying an en and an ap , respectively . to solve the minimum - cost deployment problem ,let us first consider the following feasibility problem : [ 8 ] ,\ \mathbf{v}^n = \left[\mathbf{v}_1,\cdots,\mathbf{v}_n\right]\\ & \text{s . t. } & & \lambda_k \left(\mathbf{u}^m\right ) -\mu_k\left(\mathbf{v}^n\right ) \geq \gamma , \ \k=1,\cdots , k , \label{56}\\ & & & \mathbf{b}^l\leq \mathbf{u}_i\leq \mathbf{b}^h,\ \ i=1,\cdots , m , \label{54}\\ & & & \mathbf{b}^l\leq \mathbf{v}_j\leq \mathbf{b}^h,\ \ j=1,\cdots , n , \label{55 } \end{aligned}\ ] ] where and are functions of and given in ( [ 10 ] ) and ( [ 11 ] ) , respectively . the inequalities in ( [ 54 ] ) and ( [ 55 ] ) denote element - wise relations . besides , specifies a feasible deployment area for both the ens and aps in , which is large enough to contain all the wds , i.e. , .evidently , if ( [ 8 ] ) can be efficiently solved for any and , then the optimal node placement solution to the considered minimum - cost deployment problem can be easily obtained through a simple two - dimension search over the values of and , i.e. , finding a pair of feasible that produces the lowest deployment cost . 
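the feasibility question behind problem ( [ 8 ] ) -- do candidate en and ap locations give every wd a net energy harvesting rate of at least the threshold? -- only needs the average-rate expressions ( [ 10 ] ) and ( [ 11 ] ) together with the nearest-ap rule ( [ 22 ] ). a minimal sketch is below; the constants standing in for the antenna gains, harvesting efficiency, transmit power, path-loss exponents, circuit power and uplink power coefficient are placeholders, not the values derived in the paper, and a deployment search would wrap this test in the two-dimension search over the numbers of ens and aps mentioned above.

```python
import numpy as np

# Sketch of the feasibility test behind problem (8): given candidate EN locations
# U (M x 2), AP locations V (N x 2) and WD locations W (K x 2), check whether every
# WD achieves a net energy harvesting rate of at least gamma. beta, eta, p0, dd,
# du, p_circuit and c_ul are placeholder constants; the paper derives them from
# antenna gains, carrier frequency and the rate/outage target.

def harvest_rate(U, W, beta=1e-3, eta=0.5, p0=1.0, dd=2.2):
    d = np.linalg.norm(W[:, None, :] - U[None, :, :], axis=2)    # K x M distances
    return eta * p0 * beta * np.sum(d ** (-dd), axis=1)          # lambda_k, as in (10)

def consumption_rate(V, W, p_circuit=1e-5, c_ul=1e-6, du=2.2):
    d = np.linalg.norm(W[:, None, :] - V[None, :, :], axis=2)    # K x N distances
    d_nearest = d.min(axis=1)                                    # nearest-AP rule (22)
    return p_circuit + c_ul * d_nearest ** du                    # mu_k, as in (11)

def feasible(U, V, W, gamma):
    return bool(np.all(harvest_rate(U, W) - consumption_rate(V, W) >= gamma))

W = np.random.default_rng(0).uniform(0.0, 10.0, size=(30, 2))    # 30 WDs in a box
U = np.array([[2.5, 2.5], [7.5, 7.5]])                           # two candidate ENs
V = np.array([[5.0, 5.0]])                                       # one candidate AP
print(feasible(U, V, W, gamma=0.0))
# the min-cost deployment then amounts to testing (M, N) pairs in order of
# increasing cost and returning the cheapest pair that admits a feasible placement.
```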
for a pair of fixed and ( and ), we can see that ( [ 8 ] ) is feasible if and only if the optimal objective of the following problem is no smaller than , i.e. , [ 57 ] we can express ( [ 57 ] ) as its equivalent epigraphic form , i.e. , given fixed and , ( [ 8 ] ) is feasible if and only if the optimal objective of ( [ 9 ] ) satisfies .then , the key difficulty of solving the optimal deployment problem is to find efficient solution for problem ( [ 9 ] ) .when the ens and aps are integrated as haps , the total deployment cost is if haps are used . here, denotes the cost of deploying a hap , where in general .similar to the case of separated ens and aps , the minimum - cost placement problem can be equivalently formulated as the following feasibility problem for any fixed number of haps , \\ \text{s . t. } & & & \lambda_k \left(\mathbf{u}^m\right ) -\mu_k\left(\mathbf{u}^m\right)\geq \gamma , \ \k=1,\cdots , k,\\ & & & \mathbf{b}^l\leq \mathbf{u}_i\leq \mathbf{b}^h,\ \ i=1,\cdots , m , \end{aligned}\ ] ] where and are given in ( [ 10 ] ) and ( [ 17 ] ) , respectively .notice that the study on co - located ens and aps is not a special case of that of separated ens and aps .in fact , it adds extra constraints ( ) to ( [ 8 ] ) , which leave less flexibility to the nodes placement design and make the problem more challenging to solve .equivalently , the feasibility of ( [ 18 ] ) can be determined by solving the following optimization problem and then comparing the optimal objective with , to see whether holds . in the following sections iv and v, we propose efficient algorithms to solve problems ( [ 9 ] ) and ( [ 14 ] ) , respectively .it is worth mentioning that the placement solution to ( [ 9 ] ) and ( [ 14 ] ) can be at arbitrary locations .when an en ( or a hap ) is placed at a location very close to an wd , the far - field channel model in ( [ 3 ] ) may be inaccurate .however , we learn from ( [ 9 ] ) and ( [ 14 ] ) that the optimal value is determined by the performance - bottleneck wd that is far away from the ens and aps ( i.e. , the channel model in ( 3 ) applies practically ) , thus having very low energy harvesting rate and high transmit power consumption . therefore , the potential inaccuracy of ( [ 3 ] ) will not affect the objective values of ( [ 9 ] ) and ( [ 14 ] ) , and the proposed algorithms in this paper are valid in practice .in this section , we study the node placement optimization for separately located en and ap in problem ( [ 9 ] ) .specifically , we first study in section iv.a the method to optimize en placement assuming that the locations of aps are fixed in a wpcn . in section iv.b , we further study the method to optimize the placement of aps given fixed en locations . based on the obtained results ,we then propose in section iv.c an alternating method to jointly optimize the placements of ens and aps .in addition , an alternative local searching method is considered in section iv.d for performance comparison .we first consider the optimal en placement problem when the locations of the aps are fixed , i.e. , s are known . in this case , the wd - ap association is known for each wd from ( [ 22 ] ) , and s can be calculated accordingly from ( [ 11 ] ) .it is worth mentioning that the proposed algorithms under the fixed ap setup can be directly extended to solve en placement problem in other wireless powered networks not necessarily for communication purpose , e.g. 
, a sensor network whose energy is mainly consumed on sensing and processing data , as long as the energy consumption rates s are known parameters . with s and s being fixed , we can rewrite ( [ 9 ] ) as where .we can see that ( [ 13 ] ) is a non - convex optimization problem , because is neither a convex nor concave function in .as it currently lacks of effective method to convert ( [ 13 ] ) into a convex optimization problem , the optimal solution is in general hard to obtain .however , for a special case with , i.e. , placing only one en , the optimal solution is obtained in the following . by setting , ( [ 13 ] ) can be rewritten as [ 15 ] although ( [ 15 ] ) is still a non - convex optimization problem ( as is not a concave function in ) , it is indeed a convex feasibility problem over when is fixed , which can be efficiently solved using the interior point method .therefore , the optimal solution of ( [ 15 ] ) can be obtained using a bi - section search method over , whose pseudo - code is given in algorithm [ 46 ] .notice that the right hand side ( rhs ) of ( [ 58 ] ) is always positive during the bisection search over .besides , we can infer that algorithm [ 46 ] converges to the optimal solution , because problem ( [ 15 ] ) is feasible for and infeasible otherwise .the total number of feasibility tests performed is ] , where is a parameter corresponding to a solution precision requirement .besides , the time complexity of solving each convex feasibility test using the interior point method is .therefore , the overall time complexity of algorithm [ 41 ] is , which is moderate even for a large - size network consisting of , e.g. , tens of ens and hundreds of wds .* initialization * : clustering the wds into with s , calculate s using ( [ 22 ] ) and ( [ 11 ] ) * return * we then study in this subsection the method to optimize the placement of aps given fixed en locations , i.e. , s are known . in this case , s are fixed and can be calculated using ( [ 10 ] ) . with s being fixed parameters , we can substitute ( [ 11 ] ) into ( [ 9 ] ) and formulate the optimal ap placement problem under fixed ens as follows where is the index of ap that wd associates with given in ( [ 22 ] ) .the above problem is non - convex because of the combinatorial nature of wd - ap associations , i.e. , s are discrete indicators .however , notice that if s are known , ( [ 27 ] ) is a convex problem that is easily solvable .in practice , however , s are revealed only after ( [ 27 ] ) is solved and the placement of aps is obtained . to resolve this conflict, we propose in the following a _ trial - and - error _ method to find feasible s and accordingly a feasible ap placement solution to ( [ 27 ] ) .the pseudo - code of the method to solve ( [ 27 ] ) is presented in algorithm [ 42 ] and explained as follows .as its name suggests , we first convert ( [ 27 ] ) into a convex problem by assuming a set of wd - ap associations , denoted by , , and then solve ( [ 27 ] ) for the optimal ap placement based on the assumed s .next , we compare s with the actual wd - ap associations after the optimal ap placement is obtained using ( [ 22 ] ) , denoted by , .specifically , we check if .if yes , we have obtained a feasible solution to ( [ 27 ] ) ; otherwise , we update , and repeat the above process until , .the convergence of algorithm [ 42 ] is proved in the appendix and the convergence rate is evaluated numerically in fig . 
[ 64 ] of section vi .intuitively , the trial - and - error method is convergent because the optimal value of ( [ 27 ] ) is bounded , while by updating , we can always improve the optimal objective value of ( [ 27 ] ) in the next round of solving it . as we will show later in fig .[ 64 ] of section vi , the number of iterations used until convergence is of constant order , i.e. , , regardless of the value of or . there , the time complexity of algorithm [ 42 ] is , as it takes this time complexity for solving ( [ 27 ] ) in each iteration .* initialization * : + separate the wds into clusters , and place each ap at a cluster center .use s to denote the initial ap locations with s , calculate s using ( [ 22 ] ) with s , calculate s using ( [ 10 ] ) .let in this subsection , we further study the problem of joint en and ap placement optimization . in this case , we consider both the locations of ens and aps as variables , such that the joint en - ap placement problem in ( [ 9 ] ) can be expressed as evidently , the optimization problem is highly non - convex because of the non - convex function and the discrete variables s . based on the results in section iv.a and iv.b , we propose an alternating method in algorithm [ 43 ] to solve ( [ 16 ] ) for joint en and ap placement solution . specifically , starting with a feasible ap placement , we alternately apply algorithms [ 41 ] and [ 42 ] to iteratively update the locations of ens and aps , respectively .a point to notice is that algorithm [ 41 ] ( and algorithm [ 42 ] ) only produces a sub - optimal solution to ( [ 13 ] ) ( and ( [ 27 ] ) ) , thus the objective value of ( [ 16 ] ) may decrease during the alternating iterations . to cope with this problem, we record the deployment solutions obtained in iterations and select the one with the best performance .the impact of the parameter to the algorithm performance is evaluated in fig . [ 68 ] of section vi .given the complexities of algorithms [ 41 ] and [ 42 ] , we can easily infer that the time complexity of algorithm [ 43 ] is .* initialize * : separate the wds into clusters , and place each ap at a cluster center .use s to denote the initial ap locations * return : * s and s .besides the proposed alternating method for solving ( [ 16 ] ) , we also consider an alternative local searching method used as benchmark algorithm for performance comparison .the local searching algorithm starts with a random deployment of the ens and aps , i.e. , s and s , and checks if the minimum net energy harvesting rate among the wds , i.e. , can be increased by making a random movement to s and s that satisfy where is a fixed positive parameter . if yes , it makes the move and repeats the random movement process . 
otherwise ,if can not be increased , the algorithm has reached a local maximum and returns the current placement solution .several off - the - shelf local searching algorithms are available , where simulated annealing is used in this paper .in particular , simulated annealing can improve the searching result by allowing the nodes to be moved to locations with decreased value of to reduce the chance of being trapped at local maximums .besides , we can improve the quality of deployment solution using different initial node placements , which are obtained either randomly or empirically , and select the resulted solution with the best performance .in this section , we proceed to study the node placement optimization problem ( [ 14 ] ) for the case of co - located ens and aps .the problem is still non - convex due to which the optimal solution is hard to be obtained .inspired by both algorithms [ 41 ] and [ 42 ] , we propose in this section an efficient greedy algorithm for hap placement optimization .the node placement optimization problem ( [ 14 ] ) is highly non - convex , because the expression of problem ( [ 14 ] ) involves non - convex function in and minimum operator over convex functions in . sinceits optimal solution is hard to obtain , a promising alternative is the greedy algorithm , which iteratively places a single hap to the network at one time , similar to algorithm [ 41 ] for solving ( [ 13 ] ) which optimizes the en locations given fixed aps .however , by comparing problems ( [ 14 ] ) and ( [ 13 ] ) , we can see that the algorithm design for solving ( [ 14 ] ) is more complicated , because each is now a function of s , instead of constant parameter in ( [ 13 ] ) .similar to the greedy algorithm in section iv.a , we first separate the wds into non - overlapping clusters , denoted by , and add to the network a hap in each iteration .specifically , in the -th iteration , given that the previous haps are fixed , we obtain the optimal location of the -th hap , denoted by , by maximizing the net energy harvesting rates of the wds in the first clusters . to simplify the notations , we also use as in section iv.a to denote the accumulative rf harvesting power of the wd from the previously placed haps , which can be calculated using ( [ 53 ] ) .besides , let denote the energy consumption rate of the -th wd after the first haps have been placed , where notice that the only difference between placing the -th hap and the -th en in section iv.a is that is now a function of instead of a given constant . by substituting ( [ 52 ] ) into ( [ 35 ] ) , the optimal location of the -th hap is obtained by solving the following problem [ 49 ] where from ( [ 32 ] ) , we can see that a wd may change its association to the -th hap , if the newly placed hap is closer to the wd than all the other haps that have been previously deployed .this combinatorial nature of wd - ap associations makes problem ( [ 49 ] ) non - convex even if is fixed . in the following ,we apply the similar trial - and - error technique as that in section iv.b to obtain a feasible solution to problem ( [ 49 ] ) . 
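before the details, the trial-and-error technique of section iv.b that is reused here can be summarized as a fixed-point loop: assume a wd-ap association, place the aps under that assumption, recompute the association that the new placement actually induces through the nearest-ap rule, and stop once assumption and outcome agree. in the sketch below the inner placement step is a deliberately crude grid search standing in for the convex program solved in the paper, and the box size and grid resolution are arbitrary.

```python
import numpy as np

# Sketch of the trial-and-error association loop of Algorithm [42]: assume a WD-AP
# association, optimize the AP locations under that assumption, then check whether
# the nearest-AP rule would actually reproduce the assumed association.
# The inner "solver" is a coarse grid search used only for illustration; the paper
# instead solves a convex program (via the epigraph form of (27)) at this step.

def induced_association(V, W):
    d = np.linalg.norm(W[:, None, :] - V[None, :, :], axis=2)
    return d.argmin(axis=1)                       # nearest-AP rule (22)

def place_aps_given_association(assoc, W, n_aps, box=10.0, grid=41):
    """For each AP, pick the grid point minimizing the largest distance to the WDs
    assumed to associate with it (a stand-in for minimizing the worst mu_k)."""
    xs = np.linspace(0.0, box, grid)
    candidates = np.array([[x, y] for x in xs for y in xs])
    V = np.empty((n_aps, 2))
    for j in range(n_aps):
        members = W[assoc == j]
        if len(members) == 0:
            V[j] = candidates[len(candidates) // 2]
            continue
        worst = np.linalg.norm(members[None, :, :] - candidates[:, None, :],
                               axis=2).max(axis=1)
        V[j] = candidates[worst.argmin()]
    return V

def trial_and_error_ap_placement(W, n_aps, max_iters=50):
    assoc = np.random.default_rng(1).integers(0, n_aps, size=len(W))  # initial guess
    for _ in range(max_iters):
        V = place_aps_given_association(assoc, W, n_aps)
        new_assoc = induced_association(V, W)
        if np.array_equal(new_assoc, assoc):      # assumptions confirmed: feasible point
            return V, assoc
        assoc = new_assoc                         # otherwise update and try again
    return V, assoc

W = np.random.default_rng(0).uniform(0.0, 10.0, size=(40, 2))
V, assoc = trial_and_error_ap_placement(W, n_aps=3)
```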
the basic idea to obtain a feasible solution of ( [ 49 ] ) is to convert it into a convex problem given , and then use simple bi - section search over .the convexification of ( [ 49 ] ) is achieved by a trial - and - error method similar to that used for finding feasible wd - ap associations proposed in algorithm [ 42 ] .that is , we iteratively make assumptions on wd - ap associations and update the optimal placement of the -th hap obtained from solving ( [ 49 ] ) based on the assumptions in the current iteration . with a bit abuse of notations , herewe reuse in each iteration as the optimal location of the -th hap given the current wd - ap association assumptions .specifically , we assume whether the wds change their associations after the -th hap is added , i.e. , assuming either or for each .then , given a fixed , each constraint on in ( [ 51 ] ) belongs to one of the following four cases : if we assume that wd does not change its wd - hap association after the -th hap is placed into the wpcn , or equivalently , we can replace the corresponding constraint in ( [ 51 ] ) with with a fixed , ( [ 47 ] ) is a convex constraint if .if we still assume , while holds , we can safely drop the constraint in ( [ 51 ] ) without changing the feasible region of . on the other occasion ,if we assume that wd changes its wd - hap association , or , the corresponding constraint in ( [ 51 ] ) becomes which can be further expressed as latexmath:[\[\label{33 } notice that , given a fixed , ( [ 33 ] ) is a convex constraint if .otherwise , if we assume and holds , ( [ 33 ] ) is a non - convex constraint , as the left - hand - side ( lhs ) of ( [ 33 ] ) is the difference of two convex functions .nonetheless , we show that ( [ 33 ] ) can still be converted into a convex constraint in this case .let us first consider a function where and .we calculate the first order derivative of and find that increases monotonically when ^{1/d_u } \triangleq \tau_{i , k},\ ] ] and decreases monotonically if .notice that and always hold .therefore , although is not a convex function , can still be equivalently expressed as , with being some positive number satisfying .the value of can be efficiently obtained using many off - the - shelf numerical methods , such as the classic newton s method or bi - section search method .a close comparison between the lhs of ( [ 33 ] ) and in ( [ 36 ] ) shows that , by letting , we can equivalently express ( [ 33 ] ) as a convex constraint latexmath:[\[\label{37 } holds . to sum up , given a fixed , we tackle the -th constraint in ( [ 51 ] ) using one of the following methods : 1 .replace by ( [ 47 ] ) if assuming and ; 2 .drop the constraint if assuming and ; 3 .replace by ( [ 33 ] ) if assuming and ; 4 . replace by ( [ 37 ] ) if assuming and .after processing all the constraints in ( [ 51 ] ) , we can convert ( [ 49 ] ) into a convex feasibility problem given a set of wd - hap association assumptions and a fixed .accordingly , the optimal placement of the -th hap ( ) under the assumptions , can be efficiently obtained from solving ( [ 49 ] ) using a bi - section search method over .similar to the trial - and - error technique used in algorithm [ 42 ] , we check if the obtained satisfies all the assumptions made .if yes , we have obtained a feasible solution of ( [ 49 ] ) . 
otherwise , we switch the violating assumptions , then follow the above constraint processing method to resolve ( [ 49 ] ) for a new , and repeat the iterations until all the assumptions are satisfied .the above trial - and - error method converges .the proof follows the similar argument as given in the appendix , which proves the convergence of the trial - and - error method used for solving problem ( [ 27 ] ) .thus , this proof is omitted here . since the placement of a single hap can be obtained via solving ( [ 49 ] ) , we can iteratively place the haps into the wpcn .the pseudo - code of the revised greedy algorithm is presented in algorithm [ 44 ] .for example , fig .[ 66 ] illustrates the detailed steps taken to place the hap , or hap ( total haps while the first hap , or hap is already placed ) .specifically , we first assume in fig . [ 66](a ) that all the wds in the cluster associate with hap , and the wds in the cluster associate with hap , after hap is added into the network .then , we obtain in fig . [66](b ) the optimal placement of the hap based on the association assumptions made .however , the obtained location of hap results in a contradiction with the association assumption made on wd ( assumed to be associated with hap ) .therefore , we change the association assumption of wd to hap , and recalculate the optimal placement solution for hap ( fig .[ 66](c ) ) . in fig .[ 66](d ) , the newly obtained location of hap satisfies all the association assumptions , thus the placement of hap is feasible .following the similar argument in the appendix , the association assumption update procedure converges , because the optimal objective value of ( [ 49 ] ) is non - decreasing upon each association assumptions update ( lines of algorithm [ 44 ] ) . after obtaining the location of hap , a feasible location of the hap , hap , can also be obtained using the similar procedures as above . besides, we can infer that the time complexity of algorithm [ 44 ] is , because it places the haps iteratively , while the trial - and - error method used to place each hap needs complexity .cluster the wds into * return * the hap locations .in this section , we use simulations to evaluate the performance of the proposed node placement methods .all the computations are executed by matlab on a computer with an intel core i5 -ghz cpu and gb of memory .the carrier frequency is mhz for both dl and ul transmissions operating on different bandwidths . in the dl energy transmission, we consider using powercast tx91501 - 1w power transmitter with ( watt ) transmit power , and p2110 receiver with db antenna gain and energy harvesting efficiency . besides , we assume the path loss exponent , thus . in the ul information transmission , we assume that , and for , where is obtained assuming rayleigh fading and the use of truncated channel inversion transmission with receiver signal power ( equivalent to snr target with - noise power ) and outage probability of .all the wds , ens and aps are placed within a box region specified by and . unless otherwise stated , each point in the following figures is an average performance of random wd placements , each with wds uniformly placed within the box region .we first evaluate the performance of the proposed alternating optimization method ( algorithm [ 43 ] ) for placing separated ens and aps . without loss of generality , we consider aps and show in fig . 
[ 68 ] the minimum net energy harvesting rate in ( [ 14 ] ) achieved by algorithm [ 43 ] when the locations of aps are jointly optimized with those of different number of ens ( ) .evidently , a larger indicates better system performance . for the proposed alternating optimization algorithm ( altopt ), we show both the performance with and . besides , we also consider the following benchmark placement methods * cluster center method ( cc ) : separate the wds into clusters and place an en at each of the cluster centers .similarly , separate the wds into clusters and place the aps at the cluster centers ; * optimize only en locations : the aps are placed at the cluster centers ; while the en placement is optimized based on the ap locations using algorithm [ 41 ] . *local searching algorithm ( ls ) method introduced in section iv , where the initial en and ap locations are set according to the cc method and the best - performing deployment solution obtained during the searching iterations is used . ).,scaledwidth=60.0% ] evidently , we can see that the proposed alternating optimization has the best performance among the methods considered .specifically , significant performance gain is observed for altopt over optimizing en placement only .the ls method has relatively good performance compared to altopt , especially when is small , but the performance gap increases with due to the increasing probability of being trapped at local maximums with a larger .the cc scheme has the worst performance as it neglects the disparity of energy harvesting / consumption rates among the wds and precludes the case where multiple ap / ens can be placed in a cluster . in practice ,[ 68 ] can be used to evaluate the deployment cost of each algorithm .for instance , when is required , we see that the altopt ( ) on average needs ens , the ls method needs ens , optimizing en placement only requires ens , and the cc method needs ens , with the same number of information aps deployed ( i.e. , ) .the above results show that , when the ens and aps are separated , significant performance gain can be obtained by jointly optimizing the placements of ens and aps , especially for large - size wpcns that need a large number of ens and aps to be deployed .in addition , as optimizing only en locations corresponds to a special case of algorithm [ 43 ] with , we can see that the performance gain is significant when increases from to .however , the performance improvement becomes marginal as we further increase from to . in practice ,good system performance can be obtained with relatively small number of alternating optimizations , e.g. , in our case ., as a function of ( a ) under fixed ; and ( b ) as a function of under fixed .,scaledwidth=60.0% ] we then show in fig . [64 ] the convergence rate of algorithm [ 42 ] , for which the convergence is proved in an asymptotic sense in the appendix . in particular, we plot the average number of iterations ( wd - ap association assumptions ) used until the algorithm converges . here, we investigate the convergence rate when either the number of aps ( ) or wds ( ) varies . withfixed in fig .[ 64](a ) , we see that the number of iterations used till convergence does not vary significantly as increases . 
similarly in fig .[ 64](b ) , with a fixed , we do not observe significant increase of iterations when increases from to .besides , all the simulations performed in fig .[ 64 ] use at most iterations to converge .therefore , we can safely estimate that the number of iterations used till convergence is of constant order , i.e. , , which leads to the complexity analysis of algorithm [ 43 ] in section iv.c and algorithm [ 44 ] in section v.c .next , we evaluate in fig . [ 69 ] the performance of the proposed algorithm [ 44 ] for co - located ens and aps , where the value of achieved by algorithm [ 44 ] is plotted against the number of haps used ( ) . in particular ,we compare its performance with that of ls ( with cluster centers as the initial searching points ) and the cc placement method , i.e. , the haps are placed at the cluster centers .we can see that the proposed greedy algorithm in algorithm [ 44 ] has the best performance among the methods considered . nonetheless , the performance gaps over the ls and cc methods are relatively small compared to that in fig .an intuitive explanation is that the doubly - near - far phenomenon for co - located en and ap renders the optimal haps placement to be around the cluster centers . by comparing fig . [68 ] and fig .[ 69 ] , we can see the evident performance advantage of using separated ens and aps over haps .for instance , the achieved by ens and aps in fig . [ 68 ] is , while that achieved by haps ( equivalent to ens and aps being co - located ) is only around .although the greedy algorithm and the ls method perform closely , they differ significantly in the computational complexity . to better visualize the growth rate of complexity , we plot in fig . [ 70 ] the normalized cpu time of the two methods , where each point on the figure is the normalized against the cpu time of the respective method when .clearly , we can see that the complexity of the greedy algorithm increases almost linearly with , where the cpu time increases approximately times when increases from to .this matches our complexity analysis for algorithm [ 44 ] in section v.c that the complexity increases almost linearly in when is much larger .the ls method , however , has a much faster increase in complexity with , where the cpu time increases by around times when increases from to .therefore , even in a large - size wpcn with large , the computation time of the proposed greedy algorithm is still moderate , while this may be extremely high for the ls method , e.g. , couple of minutes versus several hours for . finally , we present a case study to compare the cost of node placement achieved by using either separated or co - located ens and aps . here, we consider a wpcn with wds uniformly placed within the box region , where the detailed placement is omitted due to the page limit .for the case of separated ens and aps , we use algorithm [ 43 ] to enumerate pairs that can satisfy a given net energy harvesting performance requirement , and select the one with the minimum cost as the solution . for the case of haps , we use algorithm [ 44 ] to find the minimum that can satisfy the performance requirement .a point to notice is that , the obtained deployment solutions are sub - optimal to the min - cost deployment problems with either separated or co - located ens and aps , i.e. , the cost of the optimal deployment solution can be lower , because algorithms [ 43 ] and [ 44 ] are sub - optimal to solve problems ( [ 9 ] ) and ( [ 14 ] ) , respectively . 
more effort to further improve the solution performance is needed for future investigations . and in ( b ) , scaledwidth=60.0% ] in fig .[ 72](a ) , we show the minimum deployment costs achieved by the two methods under different performance requirement .the number of nodes used by both methods are also marked in the figure . with , we can see that using separated ens and aps can achieve much lower deployment cost than co - located haps . in another occasion in fig .[ 72](b ) , the two scheme achieve similar deployment cost when the cost of a hap is decreased from to .we also observe that , by allowing the ens and aps to be separately located , we need much less energy / information access points than they are co - located to achieve the same performance , thanks to the extra freedom in choosing both the numbers and locations of energy / information access points .for instance , when , we need separated energy / information access points ( and ) , while co - located energy / information access points by the haps. however , we do not intend to claim that using separated ens and aps is better than the co - located case .rather , we show that a cost - effective deployment plan can be efficiently obtained using the proposed methods . in practice , the choice of using either separated or co - located ens and aps depends on a joint consideration of the node deployment costs , the network size and the performance requirement .in this paper , we studied the node placement optimization problem in wpcn , which aims to minimize the node deployment cost while satisfying the energy harvesting and communication performance of wds .efficient algorithms were proposed to optimize the node placement of wpcns with the ens and information aps being either separated and co - located . in particular , when ens and aps are separately located , simulation results showed that significant deployment cost can be saved by jointly optimizing the locations of ens and aps , especially for large - size wpcns with a large number of ens and aps to be deployed . in the case of co - located ens and aps ( haps ) , however , the performance advantage of node placement optimization is not that significant , where we observed relatively small performance gap between optimized node placement solutions and that achieved by some simple heuristics , i.e. , placing the haps at the cluster centers formed by the wds . in practice , separated ens and aps are more suitable for deploying wpcns than co - located haps , because of the flexibility in choosing both the numbers and locations of ens and aps . nonetheless , because the optimal solution to the node placement problem has not been obtained in this paper , we may expect further improvement upon our proposed methods in the future , especially for the case of hap placement optimization . finally , we conclude with some interesting future work directions for the node placement problem in wpcn .first , the models considered in this paper can be extended to more general setups . using the uplink information transmission as example, we assumed in this paper that each wd has fixed association with a single ap . 
in practice, dynamic frequency allocation can be applied to enhance the spectral efficiency , where a wd can transmit to different aps , even multiple aps simultaneously , in different transmission blocks .besides , instead of assuming each ap can serve infinite number of wds , we can allow each ap to serve finite number of wds .in addition , we may also consider the presence of uplink co - channel interference due to frequency reuse in wpcn .the extensions require adding corresponding constraints or changing the expression of energy consumption model in the problem formulation of this paper .second , it is interesting to consider the hybrid node deployment problem that uses both co - located and separated ens / aps .third , it is practically important to consider the node placement problem with location constraints , e.g. , some areas that may forbid the ens / aps to be placed .in addition , the density of ens may be constrained to satisfy certain safety consideration on rf power radiation . [ 59 ] let denote the optimal solutions of ( [ 27 ] ) calculated from the -th ( ) set of assumptions made on the wd - ap associations , denoted by , .let denote the set of wds to which the optimal solution s contradict with the wd - ap assumptions ( we consider only , because otherwise the algorithm has reached its optimum ) , i.e. , according to the proposed trial - and - error method , is set for each as let denote the minimum net energy harvesting rate among all the wds given the updated wd - ap association s and the current ap locations s , i.e. , we can see that because the update of wd - ap associations in ( [ 29 ] ) does not increase the energy consumption rate of any wd achived by assuming , .besides , is a feasible solution of ( [ 27 ] ) under the association assumption s .therefore , the optimal solution calculated from the association assumption s will lead to .in other words , the optimal objective of ( [ 27 ] ) is non - decreasing in each trial - and - error update of wd - ap associations .this , together with the fact that the optimal value of ( [ 27 ] ) is bounded , leads to the conclusion that the proposed trial - and - error method is convergent .a. georgiadis , g. andia , and a. collado , rectenna design and optimization using reciprocity theory and harmonic balance analysis for electromagnetic ( em ) energy harvesting , " _ ieee antennas wireless propag ._ , vol . 9 , pp . 444 - 446 , may 2010a. a. nasir , x. zhou , s. durrani , and r. a. kennedy , wireless - powered relays in cooperative communications : time - switching relaying protocols and throughput analysis , " _ ieee trans ._ , vol .63 , no . 5 , pp .1607 - 1622 , may 2015 .h. chen , y. li , j. l. rebelatto , b. f. uchoa - filho , and b. vucetic , harvest - then - cooperate : wireless - powered cooperative communications , " _ ieee trans . signal process ._ , vol . 63 , no . 7 , pp . 1700 - 1711 , apr . 2015 .y. t. hou , y. shi , h. d. sherali , and s. f. midkiff , on energy provisioning and relay node placement for wireless sensor networks , " _ ieee trans .wireless commun ._ , vol . 4 , no . 5 , pp .2579 - 2590 , sep .2005 .n. michelusi , l. badia , r. carli , l. corradini , and m. zorzi , energy management policies for harvesting - based wireless sensor devices with battery degradation , " _ ieee trans ._ , vol .61 , no . 12 , pp . 4934 - 4947 , dec . 
| the applications of wireless power transfer technology to wireless communications can help build a wireless powered communication network ( wpcn ) with more reliable and sustainable power supply compared to the conventional battery - powered network . however , due to the fundamental differences in wireless information and power transmissions , many important aspects of conventional battery - powered wireless communication networks need to be redesigned for efficient operations of wpcns . in this paper , we study the placement optimization of energy and information access points in wpcns , where the wireless devices ( wds ) harvest the radio frequency energy transferred by dedicated energy nodes ( ens ) in the downlink , and use the harvested energy to transmit data to information access points ( aps ) in the uplink . in particular , we are interested in minimizing the network deployment cost with minimum number of ens and aps by optimizing their locations , while satisfying the energy harvesting and communication performance requirements of the wds . specifically , we first study the minimum - cost placement problem when the ens and aps are separately located , where an alternating optimization method is proposed to jointly optimize the locations of ens and aps . then , we study the placement optimization when each pair of en and ap are co - located and integrated as a hybrid access point , and propose an efficient algorithm to solve this problem . simulation results show that the proposed methods can effectively reduce the network deployment cost and yet guarantee the given performance requirements , which is a key consideration in the future applications of wpcns . wireless power transfer , wireless powered communication networks , energy harvesting , network planning , node placement optimization . |
this study springs from two inspiring sources: progress in genetic reconstructions of our species' history, and experience in accumulating and analysing rich dental data on living and fossil human populations. the innovation lies in combining both advantages to provide new knowledge on eurasian ancestry. principles of anthropophenetics determine the basic approach of the research and allow methods of population genetics to be used. phenes (discrete irreducible morphological traits) yield to genes in number and in marking precision, but win in the extent of genome coverage. dental traits provide the best possibility of examining time records in populations directly. mapping applications can detect many different patterns hidden in numerous tabled data; each pattern seems to have a certain historical content. computer maps provide both analysis and visualization of the enormous volume of data accumulated in dental anthropology. the study is the first attempt of this sort. the study involves data from 498 samples, 50257 individuals in total, drawn from living populations in eurasia and africa. the material was taken from a great number of publications; the major part of it is presented in two generalizing books ( , ), and the other part is our own recent data on populations of the caucasus (86 samples), the far east (3 samples) and the russians (27 samples). to process this rich information, the authors have developed _ eurasia _, a universal system for the analysis, visualization and mapping of dental data. all dental data on living and fossil eurasian populations available at this moment are managed in a mysql relational database. the data refer to 830 populations, 32 dental traits, no less than 120 dental phenes (i.e. grades or discrete variations of a trait), and 12 historical periods from the palaeolithic to the present. all data operations, including statistics and visualization, are implemented as routines written in python; in several cases c modules are used to improve performance. the basic statistics are executed using the principal component analysis (pca) method with the help of the pca module algorithms (http://folk.uio.no/henninri/pca_module/). mapping of the frequencies of separate dental markers and of pc scores is accomplished via the matplotlib basemap toolkit (http://matplotlib.sourceforge.net/basemap/doc/html/). the whole system is managed through a web interface built with the django framework (http://www.djangoproject.com/), which allows handling the database, generating dynamic graphics and saving them in vector or bitmap formats. the study programme is the one common in the russian federation and includes 32 non-metric dental traits ( , ; ). in practice, however, only a few markers are usually presented in published frequency tables, so we had to find a reasonable balance between the number of populations and the number of markers involved in the pca; the resulting numbers are 498 and 8, respectively. in total, 143 phenogeographical maps have been created, but in the present short paper only 4 of them, i.e. the maps of the scores of the four pcs, are overviewed and discussed. the maps were constructed by interpolating the pc score distribution with a gaussian as the weight function. we have adopted the following parameters for constructing maps in the case of living populations: the averaging window, the weight function range, and a total number of grid knots of 50,400. small black points indicate the locations of the populations under investigation.
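the map construction described above can be reproduced with a few lines of numpy. the sketch below interpolates point values (e.g. pc scores) onto a regular grid with a gaussian weight function; it uses simple planar distances and leaves the averaging window and weight-function range as parameters, since the exact projection and parameter handling of the _ eurasia _ system are not spelled out here, and the grid extent in the example is ours.

```python
import numpy as np

def gaussian_weighted_map(lons, lats, values, grid_lons, grid_lats, sigma):
    """Interpolate point values (e.g. PC scores) onto a lon/lat grid using a
    Gaussian weight function of range `sigma`.  Planar distances are used for
    simplicity; a real map would project the coordinates first."""
    lons, lats, values = map(np.asarray, (lons, lats, values))
    grid = np.empty((len(grid_lats), len(grid_lons)))
    for iy, glat in enumerate(grid_lats):
        for ix, glon in enumerate(grid_lons):
            d2 = (lons - glon) ** 2 + (lats - glat) ** 2
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            grid[iy, ix] = np.sum(w * values) / np.sum(w)
    return grid

# example: a 280 x 180 grid (50,400 knots, as in the text) over an
# illustrative lon/lat window covering Eurasia
grid_lons = np.linspace(-20, 160, 280)
grid_lats = np.linspace(0, 80, 180)
```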
only aboriginal groups are investigated. conclusions derived from map interpretation are often ultimately shaped by how authors envisage their data fitting established genetic, archaeological or linguistic theories. we try to draw such conclusions in the most independent way possible, on the basis of experience in dental anthropology and anthropophenetics; only this approach can provide genuinely new knowledge. the pca results are presented in table [ table - pca ], which gives the loadings of the 8 dental traits in the first 4 pcs. the weight of a given trait is defined as its loading, for the corresponding normalized (arcsine-transformed) frequency, in the linear combination specifying the component. pca was applied to the among-group dental variability. the longitudinal variability of the phene pool in eurasian populations seems to be the most important regularity revealed by mapping and pca (fig. [ 1pc ]). the geographical factor provides the main contribution to the revealed diversity. the 1 pc explains 53% of the total phenetic variation. all populations under investigation are divided into two main provinces: the western area with high pc1 scores and the eastern one with low scores. several scenarios, on different time scales, could have determined this pattern. the larger the space embracing populations that share similar frequencies, the deeper the time of their divergence. so we can suggest that the most ancient scenario in the history of eurasian populations was development from two distinct ancestral groups. africa, represented by populations from ethiopia and the republic of mali, joins the western province. the map shows clines of evident phene flow from the near east north-eastward to siberia. this could reflect intensive post-neolithic expansion or earlier events; it is a matter for further research. another flow can easily be traced from east to west along the steppe belt of the continent. it is explained by the latest (early medieval) expansion from inner asia, which evoked oscillatory migratory waves in the populations settled along the steppe belt and thus formed a complicated system of population interaction. the kalmyks in the south of east europe are the westernmost end point of this expansion. the contact zone between the provinces occupies the urals, west siberia, middle asia and india. regarding the split into two main provinces, it should be noted that this phenomenon can be traced in eurasia since homo erectus. indeed, archaic western forms show a low grade of shoveling and poor differentiation in the odontoglyphical patterns on molars, versus extremely developed shoveling and richness in odontoglyphics in the eastern province ( , pp. 196-197).
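the component scores and loadings discussed in this section can be reproduced along the following lines. the sketch applies the arcsine transform to the trait frequencies and performs a plain covariance-based pca; the pca_module package mentioned earlier is not required, and the mean-centring used here is our own assumption about the normalization step.

```python
import numpy as np

def arcsine_transform(freqs):
    """Angular (arcsine) transform for trait frequencies in [0, 1]."""
    return np.arcsin(np.sqrt(np.asarray(freqs, dtype=float)))

def pca(freq_table, n_components=4):
    """PCA of among-group variability: rows are populations, columns are the
    8 dental traits.  Returns scores, loadings and explained variance ratios."""
    x = arcsine_transform(freq_table)
    x -= x.mean(axis=0)                      # assumed normalization: mean-centring
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1][:n_components]
    loadings = eigvec[:, order]
    scores = x @ loadings
    return scores, loadings, eigval[order] / eigval.sum()
```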
chronology and dynamics in different morphological systems evolutionseem to be rather independent , while dental characteristics demonstrate much antiquity in phylogeny and provide a direct bridge from the present to the far past inaccessible for cranial traits , as our recent research on ancient and living caucasian populations showed .it is worth to mention that both western and eastern forms of homo erectus had five - cusped and six - cusped lower molars , their gracilization is a peculiar characteristic of homo sapiens , still eastern living populations keep higher frequencies in five - six - cusped lower molars .it is difficult to ignore these most important data provoking an assumption of the replacement in hominines in the west of the continent and of the hetero - level assimilation in the eastern province .the 2 pc explains 13% of the total phenetic variation ( fig.[2pc ] ) .the map shows more the latitudinal variability of phene pool in eurasian populations .the scores of the pc are high in dravidian and munda groups of india , in other indian and some far east populations as well as in many populations in the south of west asia and in the north of europe and siberia .in fact , the 2 pc presents the paradox combination of eastern ( the distal trigonid crest on the first lower molar ) and western markers ( four - cusped lower molars , precisely the first one ) . in our previous study on the caucasian populations we suggested both southern and northern gracile subsets in west eurasiahad developed from one ancestral eastern group .the pattern on the map supports this assumption . for the first time we find the traces of the initial group in the east province .we can envisage the movement of this ancient group from south asia to the east and to the west , subsequent splitting the west flow into northern and southern subsets , probably as a result of populating postglacial continental space .the 3 pc explains 9% of the total phenetic variation ( fig.[3pc ] ) .high scores of the 3 pc are determined by loadings mainly in western traits ( carabelli cusp and four - cusped lower first molars ) .we ve revealed this combination for west living and fossil populations of the continent in our previous investigation of the caucasus in the anthropohistorical space of eurasia .the wide spread of this component through the whole continent seen in the map [ 3pc ] was an absolutely unexpected and amazing phenomenon .it provides evidence for wide scale ancient populating the continent by a group or close groups mainly from west to east .maybe this event could happen at different times and maybe repeatedly .the 4 pc describes 7% of the total phenetic variation .this pc presents again a combination of eastern and western traits ( distal trigonid crest and carabelli cusp ) , and it covers south regions of eurasia .the revealed landscape more obviously divides the continent into southern and northern halves .the northern zone is occupied by another mixed combination ( deflecting wrinkle and four - cusped first lower molars ) common to finn - ugric populations ( ; ) .the map ( fig.[4pc ] ) again detects hidden patterns and more wide traces of this combination in southern , central and eastern areas of the continent .the highest scores of the last combination are found in some marginal coastline and in central mountain populations of eurasia .the map traces the dispersal of a branch of this combination to the west via the middle urals and its subsequent irradiation in the european territory .it must be 
emphasized that two of the four pcs are composed by combinations of western and eastern markers .diminishing eastern traits frequencies at different grades in different groups of west continental populations and worldwide in shoveling is a discovered phenomenon .so , we can suggest that ancestral polymorphic or assimilated populations should have even more expressed eastern component .evolutionary factors , including genetic drift , selection and gene flows , may have altered the patterns of phenetic frequency and distribution in existing populations . however , the time depths of the revealed landscapes are still not known exactly , so associating them with particular historic and demographic events seems to be speculative at the moment . to provide clear dating additional studies in integrating with established genetic , archaeological and linguistic evidence should be launched . since dental markers provide the best possibility to examine directly time records in populations ,an alternative perspective is in phenetic investigations of fossil eurasian groups .east europe and adjacent areas rich in fossil data seem to be the region to start with .in spite of the enormous territory and the revealed divergence the populations of the continent have undergone wide scale and intensive time - space interaction .the maximal phenetic diversity was detected in india , respectively lesser in north europe , west siberia and near east .many details in the revealed landscape could be backgrounded to different historical events .the maps visualize the most important results in analysis : the wide spread of the western combination through the whole continent till the pacific coastline and the envision of the dispersal of the paradox combinations of eastern and western markers from south or central asia to the east and to the west . taking into account that no additional eastern combinations in the total variation in asian groups have been found but mixed or western markers sets and that eastern dental characteristics are traced in asia since homo erectus , the choice between the ancestral polymorphism and the hetero - level assimilation in the eastern province is made in favour of the latter .the study was supported by the russian foundation of basic research , grant 08 - 06 - 00124 .the authors are grateful to prof .elena balanovska , research centre for medical genetics , russian academy of medical sciences , for the initial support of the idea of this research ; and to petr voitsik , astro space center , the lebedev physical institute of the russian academy of sciences , for his valuable aid in software development ; and , last but not least , to the organizers and the participants of the 2010 meeting inqua - seqs for the inspiring communication .kashibadze v.f . , 2006 .the caucasus in the anthropohistorical space of eurasia .southern scientific centre , russian academy of sciences publishing house , rostov - on - don , russia ( in russian , with english summary ) .keita b. , 1977 .the anthropology of the republic of mali population . ph.d .thesis , institute of ethnology and anthropology , russian academy of sciences , moscow , russia ( in russian ) .mizoguchi y. , 1985 . shovelling : a statistical analysis of its morphology .university of tokyo press , japan .shinkarenko v.s . ,naumkin v.v . ,, zoubov a.a . , 1984 .the anthropological investigations on the sokotra island .sovetskaya etnografia 4 , 53 - 62 ( in russian ) .zoubov a.a . 
, 1968odontology : methods in anthropological research .nauka , moscow ( in russian ) .zoubov a.a . ,ethnical odontology .nauka , moscow ( in russian ) .zoubov a.a ., khaldeeva n.i . , 1989 .odontology in current anthropology .nauka , moscow ( in russian ) . | on the base of advantages in gene geography and anthropophenetics the phenogeographical method for anthropological research is initiated and experienced using dental data . statistical and cartographical analyses are provided for 498 living eurasian populations . mapping principal components supplied evidence for the phene pool structure in eurasian populations and for reconstructions of our species history on the continent . the longitudinal variability seems to be the most important regularity revealed by principal components analysis ( pca ) and mapping proving the division of the whole area into western and eastern main provinces . so , the most ancient scenario in the history of eurasian populations was developing from two perspective different groups : western group related to ancient populations of west asia and the eastern one rooted by ancestry in south and/or east asia . in spite of the enormous territory and the revealed divergence the populations of the continent have undergone wide scale and intensive time - space interaction . many details in the revealed landscapes could be backgrounded to different historical events . the most amazing results are obtained for proving migrations and assimilation as two essential phenomena in eurasian history : the wide spread of the western combination through the whole continent till the pacific coastline and the envision of the movement of the paradox combinations of eastern and western markers from south or central asia to the east and to the west . taking into account that no additional eastern combinations in the total variation in asian groups have been found but mixed or western markers sets and that eastern dental characteristics are traced in asia since homo erectus , the assumption is made in favour of the hetero - level assimilation in the eastern province and of net - like evolution of our species . dental markers , pca , mapping , eurasia |
a chemical reaction system that is self - sustaining and collectively autocatalytic is believed to represent an important step in the emergence of early life .these systems are defined by two properties : ( i ) each molecule can be built up from a small subset of pre - existing ` food ' molecules by some reaction in the system , and ( ii ) each reaction is catalysed by some product of another reaction ( or an element of the food set ) .moreover , recent experimental work has demonstrated at least the possibility ( and viability ) of such sets .it is also of interest to develop a mathematical framework that allows us to study the entire universe of possible self - sustaining autocatalytic sets , so that general results can be established , and predictions made . here, we further explore one approach ( ` raf theory ' ) which has provided a tractable and incisive tool for addressing computational and stochastic questions .raf theory grew out of two strands : stuart kauffman s pioneering work on random autocatalytic networks from the 1970s and 1980s , and analysis of the first emergence of cycles in random directed graphs by bollobas and rasmussen .both of these earlier studies were explicitly motivated by origin - of - life considerations .the approach is related to , but different from chemical organisation theory ( cot ) and other formal approaches of a similar flavour , which include petri nets , rosen s ( m ; r ) systems , and eigen and schuster s hypercycle theory . in earlier work , we have established a series of results concerning the structure , discovery and probability of the formation of raf sets in a variety of catalytic reaction systems .when such a system contains a self - sustaining autocatalytic set ( an ` raf ' , defined below ) , this set can often be broken down into smaller rafs until we arrive at the smallest ` building block ' rafs that can not be broken down any further ( _ c.f . _ ) . in this paper , we investigate the structure of these irreducible rafs , and bounds on the size of the smallest rafs within a catalytic reaction system . along the way , we derive some new facets of raf theory , exploring further its relationship to cot , and the related weaker notions of pseudo - rafs and co - rafs , which can be co - opted by a raf to form a larger raf system . 
while it is easy to determine whether a chemical reaction system contains an raf ( in which casethere is a unique largest one ) , we prove that finding a smallest raf is an np - hard problem .nevertheless , the structure of the smallest ( ` irreducible ' ) rafs allows us to present efficient algorithms to find lower bounds on their size , and to determine whether a given collection contains the smallest raf in the system .we begin by recalling some definitions before proceeding to the combinatorial and algorithmic aspects of rafs .we then apply mathematical arguments and simulations to study the size and distribution of irreducible rafs in kauffman s random binary polymer model , and show that at a level of catalysis at which rafs first form , small rafs are highly unlikely .we end with a short discussion .to formalize the notion of a chemical reaction system ( crs ) , the following basic notation and definitions are useful : * let be a set of _ molecule types _ : each element represents a different type of molecule .* let be a _ food set _, containing molecule types that are assumed to be freely available in the environment .* let be a chemical _ reaction _ , transforming a set of _ reactants _ ( molecule types ) into a set of _ products _ ( molecule types ) . in principle there is no restriction on the number of reactants or products ,although in the specific model we use ( see below ) and are at most two .* let be a set of ( chemically possible ) reactions .* let and denote , respectively , the set of all reactants of and the set of all products of , and for any subset of , let and .* let be a _catalysis set _, i.e. , if the molecule - reaction pair then molecule type catalyses reaction . a _ chemical reaction system _ ( or , equivalently , a _ catalytic reaction system_ ; crs ) is now defined as a tuple consisting of a set of molecule types , a set of ( possible , or allowed ) reactions , and a catalysis set .based on , we can visualise a crs as a _ reaction graph _ with two types of vertices ( molecules and reactions ) and two types of directed edges ( from molecules to reactions and vice versa , and from catalysts to the reactions they catalyse ) .informally , a subset of reactions is an raf ( reflexively - autocatalytic and -generated ) set if it satisfies the following property : 1 cm every reactant of every reaction in can be built up by starting from and using just reactions in , and so that all reactions are eventually catalysed by at least one molecule that is either a product of some reaction in or is an element of . to define an autocatalytic set more formally , we first need to define the notion of `` closure ''informally , the closure of a set of molecule types relative to a set of reactions , is the initial set of molecule types together with all the molecule types that can be created from it by repeated application of reactions from the given set of reactions .more formally , given a crs , the _ closure _ of relative to is the ( unique ) minimal set that contains and satisfies the condition that , for each reaction ( with being a set of reactants and a set of products ) , notice that when the set is still defined , and it equals .our mathematical definition of raf sets is now as follows ( note that this is the definition from , which is slightly modified from the original definition in ) . 
given a crs and a food set , a non - empty subset is said to be : * _ reflexively autocatalytic _ if , for all reactions , there is at least one molecule type such that ; * _ -generated _ if ; * _ reflexively autocatalytic and -generated _ ( raf ) for if is both reflexively autocatalytic and -generated .because the union of rafs for is also an raf for it follows that any crs that contains an raf has a unique maximal raf called the ` maxraf ' ; any other raf is called a ` subraf ' of this maximal raf .we say that an raf is an _ irreducible raf _ ( or , more briefly , an ` irrraf ' ) if no proper subset is also an raf .in contrast to the uniqueness of the maximal raf , there may be many ( indeed exponentially many ) irrrafs .we have already defined the concept of being-generated , however , it will be useful to explore this further for the following reasons : * to better understand the distinction between rafs and ` pseudo - rafs ' ( defined shortly ) ; * to explain the link between -generated sets and ` organisations ' in chemical organisation theory ; * to provide a characterisation that we will require later in the proof of our main stochastic theorem ( theorem [ nosmall ] ) . given a crs and a food set , the closure set has two further equivalent descriptions .firstly , it is the intersection of all subsets of that contain and that are closed relative to .it also has an explicit constructive definition as follows : is the final set in the sequence of nested sets where is equal to the union of and the set of products of reactions in whose reactants lie in , and where is the first value of for which . with this in hand , we now examine the definition of -generated sets of reactions more closely .recall from the earlier definitions that a subset of reactions is -generated provided that every reactant of every reaction in lies in .note that saying is -generated implies but is * strictly stronger * than the condition that the reactant of each reaction in is either a molecule in or it is a product of another reaction in .-generated is also strictly stronger than requiring that the molecules of that are ` used up ' in maintaining the reactions in is precisely .an example that demonstrates both these strict containments is provided in fig .[ figure_x ] for the set , which is not -generated ( since .we now provide precise characterizations of when a set of reactions is -generated .[ lemequiv2 ] given a crs , a food set and a non - empty subset of , the following are equivalent : * is -generated .* .* has a linear ordering so that the reactants of are molecules in , and for each the reactants of are contained in * has a linear ordering so that for each each reactant of is either an element of or is a product of some reaction where . _proof : _ the equivalence is from ( lemma 4.3 ) and the equivalence is easily verified , as the ordering of that applies for either part , also works for the other ( from the definitions ) .thus , to establish this four - way equivalence , it suffices to show that , and . to establish ,suppose that is -generated .we construct an ordering satisfying as follows : let denote the reactions in that have their reactants in , and for let denote the reactions in that have their reactants in the set , where is the sequence of nested sets described in the preamble to this lemma . then take any ordering on for which the reactions in all come before for .this ordering satisfies the property described in part . 
to establish , we only need to observe that for all , so if is a subset of the first set , it is necessarily a subset of the second set .this completes the proof of lemma [ lemequiv2 ] .we now point out a consequence of this lemma that sheds some light on why the subset in fig .[ figure_x ] fails to be -generated . given a crs , a food set and a subset of , consider the directed graph that has vertex set and an arc from to precisely if there is a reactant of that is a product of and , in addition , if .this last condition states that molecule can not be built up from using only the reactions in that do not include .note that a vertex of is permitted to have a loop ( i.e. an arc from a reaction to itself ) . as an example of this graph , for the reactions shown in fig .[ figure_x](a ) , is a directed three - cycle , while in part ( b ) of that figure , has no directed cycle .[ corol ] given a crs , a food set , a non - empty subset of is -generated if and only if the following two conditions hold : * every reactant of a reaction in is either an element of or is a product of some reaction in ; and * the graph has no directed cycle ( including loops ) . _proof : _ suppose that is -generated .then condition ( a ) in the theorem follows by part ( ii ) of lemma [ lemequiv2 ] ; moreover , there exists an ordering of that satisfies the condition described in part ( iii ) of that lemma .now , if is an arc in , we must have , since otherwise , if , part ( iii ) of lemma [ lemequiv2 ] gives : and the containment would preclude the arc from .so , if had a directed cycle , we would have : , a contradiction .thus if is -generated , condition ( b ) in the theorem also holds .conversely , suppose that satisfies conditions ( a ) and ( b ) .we first show that there exists a reaction that has all its reactants in , i.e. 
.suppose to the contrary that this were not the case ( we will show this contradicts condition ( b ) ) .then for every reaction in , we can select a molecule that is a reactant of .moreover , by property ( a ) and the condition that it follows that is the product of some other reaction , which we will write as .thus , starting with any given reaction , , consider the alternating sequence of molecules and reactions that we generate from by setting and .since is finite , this sequence must have for some .moreover , we can not have for all ] and so we would obtain a directed cycle in , and by part ( b ) , no such cycle exists .this contradiction ensures there exists some molecule for ] for the strictly positive vector ^t ] .thus : ^k.\ ] ] now , the value of for which is bounded above by for some value dependent only on ( by [ theorem 4.1 ] , and [ proposition 8.1 ] ) .thus : ^k.\ ] ] now , by lemma [ lemequiv2 ] , any set of reactions is -generated if and only if the reactions can be linearly ordered so that every reaction in the sequence has its reactants provided either from or from the products of earlier reactions in the sequence ( or both ) .therefore , is bounded above by the collection of ordered sequences where , for all : * is a cleavage or ligation reaction involving one or two ( respectively ) molecules of ( taking ) .now , each reaction in the sequence creates , at most , two new molecules , and so for all .since , we have for all : ) , the number of possible choices for to satisfy condition ( * ) above is , at most : since the first term in this sum is an upper bound on the number of possible ligation reactions , while the second term is an upper bound on the number of cleavage reactions .combining this with ( [ ups ] ) gives the following upper bound on the number of sequences satisfying ( * ) . \leq \left[(|f|+2k)(n+|f|+2k)\right]^k \leq ( n+|f|+2k)^{2k},\ ] ] and so applying this inequality to ( [ peq ] ) , with the asymptotic equivalence , gives : ^k.\ ] ] notice that we can provide an upper bound for the term on the right by the expression : ^k = \theta/(1-\theta),\ ] ] where $ ] .it follows that if for , then ( and thereby ) converges to zero as , and therefore so too does the expression for the probability in ( [ sumeq ] ) .this completes the proof .* comments * * this result is interesting in the light of theorem 11 of , as the probability that the length of a first cycle is when a first cycle appears in a random digraph is , and so short cycles have considerable probability in that model .+ by contrast , when the first rafs appear , there are no small ones , since any raf requires the simultaneous satisfying of two properties : it must be reflexively autocatalytic and also -generated ; the former property is equivalent to the existence of a directed cycle in the catalysis graph ( at least in the case for ) ; while there might be a small cycle , it is unlikely to be -generated .* theorem [ nosmall ] provides an interesting complement to the earlier theorem [ npthm ] , which showed that there is , in general , no efficient way to determine the size of the smallest raf in a crs .thus , it could be difficult to exclude the possibility a small raf in the binary polymer model for large values of , by searching for the smallest irrrafs .however , theorem [ nosmall ] provides a theoretical guarantee that , with high probability , there will be no small rafs when they first appear within this model . 
*the final inequality in the proof of theorem [ nosmall ] allows us to place explicit bounds on the likely minimal size of rafs for finite values of .for example , for , the probability that there exists an raf of size 1000 when the existence of an rafs has a probability of 0.5 is less that 0.01 ( taking and the conservative value for of from theorem 4.1(ii ) of ) . *it is easy to show that when the rate of catalysis becomes sufficiently large , we will expect to find small rafs in the binary polymer model .thus the initially largely flat line for irrraf sizes in fig .[ fig : rafsize ] must eventually decrease to small values ( in the limit of size 1 ) as the rate of catalysis continues to increase .moreover , small catalytic reaction systems ( of size 16 ) that form rafs ( and which contain even smaller rafs ) have recently been discovered in real rna replicator systems . that such small sets form rafs can be partly explained by the high catalysis rate . with theorem [ npthm ] above , we proved that finding the smallest ( irr)raf set is a hard problem , so we can not hope to have a polynomial time algorithm to do this .however , it is still possible to get an idea of the distribution of the sizes of the irrraf sets that exist inside an raf set .this can be done as follows . in , we described a polynomial time algorithm for finding one possible irrraf in a given raf by removing one reaction from and applying the raf algorithm to the set .if this results in an empty set ( ) , then reaction is essential and needs to remain in . otherwise ,replace by the non - empty subraf .now repeat this procedure with every next reaction in until all reactions have been considered .the result of this is an irrraf of .this algorithm was used to generate the data on irrraf sizes in fig . [fig : rafsize ] .note that the particular irrraf in that is found by this algorithm depends on the order in which the reactions are considered for possible removal .so , by repeating the above algorithm a number of times and randomly re - ordering the reactions in each time , we can generate a sample of irrrafs of .[ fig : irrraf ] shows two histograms of the sizes of 1000 irrrafs generated this way from two of the raf sets that were found at a level of catalysis of about , i.e. , when raf sets are just starting to show up . in both cases ,the sample is dominated by one particular irrraf size , with the rest being relatively close in size , although the histogram on the right shows a case where the smallest irrraf is about 100 reactions smaller than the dominant one . since this is only a random sample ,there is no guarantee that this is indeed the _ smallest _ irrraf .however , the fact that even the smallest irrraf in these samples is still rather large ( close to 600 reactions ) is probably a good indication that , indeed , there are no small rafs when they just start appearing .raf theory provides a way to address one aspect of the complex question , how did life arise ?the existence of rafs does not represent a sufficient condition , but it would seem to be a necessary one . moreover , the approach is sufficiently general that it can be applied to other emergence phenomena both inside chemistry and in quite disparate fields ( for an application to a ` toy ' problem in economics , see ) .rafs are based on two key ideas every molecule must be able to be built up from the available set of ` food ' molecules by reactions from the set , and each reaction must ` eventually ' be catalysed . 
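the irrraf sampling procedure just described is easy to prototype. the sketch below represents a crs as a dictionary mapping reaction ids to (reactants, products, catalysts) triples of molecule sets; the closure and maxraf routines follow our reading of the raf algorithm cited above and are included only to make the sampling function self-contained, so they should be taken as a sketch rather than as the reference implementation.

```python
import random

def closure(food, reactions):
    """cl_R(F): repeatedly add the products of reactions whose reactants are
    all already available, starting from the food set."""
    available, changed = set(food), True
    while changed:
        changed = False
        for reactants, products, _ in reactions.values():
            if reactants <= available and not products <= available:
                available |= products
                changed = True
    return available

def max_raf(food, reactions):
    """Maximal RAF (possibly empty): repeatedly discard reactions that either
    lack a reactant in the current closure or lack a catalyst in it."""
    current = dict(reactions)
    while True:
        avail = closure(food, current)
        kept = {rid: r for rid, r in current.items()
                if r[0] <= avail and (r[2] & avail)}
        if kept == current:
            return current
        current = kept

def sample_irrraf(food, raf):
    """One sample of the irrRAF procedure described above: try to delete each
    reaction in a random order and keep any non-empty subRAF that survives."""
    current = dict(raf)
    for rid in random.sample(list(raf), len(raf)):
        if rid not in current:
            continue
        reduced = max_raf(food, {k: v for k, v in current.items() if k != rid})
        if reduced:                  # rid is not essential: shrink to the subRAF
            current = reduced
    return current
```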
here ` eventually ' refers to the fact that some reactions may need to proceed uncatalysed ( at a lower rate ) in order to get the system going , but eventually , all reactions are catalysed .a stronger requirement would be that all reactions must be catalysed by the available molecules as the system develops ( from the food molecules or products of reactions that have already occurred ) .this notion of a ` constructively autocatalytic -generated ' ( caf ) set from seems an unnecessarily strong condition ( since reactions can generally proceed , at a lower rate , without catalysis ) and the mathematical properties of cafs ( and the probability that they form ) are quite different from rafs .a weaker requirement is that only some reactions need to be catalysed this fits perfectly easily within the current raf framework , as we may simply formally allow a food molecule to act as a putative catalyst for those reactions .another weakening of the raf concept is to consider a closed chemical reaction system , which , once established , will continue to be self - maintaining . this underlies the notion of an ` organisation ' in chemical organisation theory .the property of rafs of being -generated was shown in to imply the property of being an organisation ; we have shown here that the converse need not hold in other words , an organisation may not be able to be built starting just with the food set , without the presence of some other reactant to get it started .this property of an organisation has a superficial similarity to the property that a raf can allow one or more some reactions to proceed uncatalysed until the catalyst is formed .however , there is an important difference , since an uncatalysed reaction can proceed ( at a lower rate ) , while this a reaction that lacks one of its reactants can not take place .the focus of this paper has been on small rafs , as these are , in some sense , the ` simplest ' systems that could be of interest in origin - of - life studies .it is of interest to know whether within some crs that harbours an raf , there is a very small one present , or instead whether all subrafs are quite large .the smallest rafs are irreducible , though not all irreducible rafs are of the smallest size .in contrast to the maximal rafs , where there is a unique object ( maxraf ) that can be constructed in polynomial time ( by the raf algorithm ) , there may be exponentially many irreducible rafs , and finding a smallest raf is , in general , np - hard .nevertheless , we can find irrrafs in polynomial time , and we can describe computable lower bounds on the size of irrrafs and also determine if a given ( small ) collection comprises all the irrrafs .it is also of interest to consider the size and distribution of rafs in simple settings such at the binary polymer model , where simulations suggest that when rafs first appear , small irrrafs are unlikely , a result that has been verified formally in theorem [ nosmall ] .however , as the level of catalysis increases , one is guaranteed to eventually find small irrrafs .an interesting problem for future work would be to develop better bounds and approximations for the minimal size of a raf within a catalytic reaction system . 
for example , is it possible to obtain a bound for the size the smallest raf that is within some constant factor of optimal ?it would also be of interest to investigate an extension of rafs that allow some molecules to not only catalyse some reactions , but also to inhibit other reactions ; in this case determining whether an analogue of an raf exists within an arbitrary crs has been shown to np - hard , but in certain cases the raf algorithm can be adapted to solve this problem .we thank the _ allan wilson centre for molecular ecology and evolution _ for helping fund this work .99 ashkenasy , g. , jegasia , r. , yadav , m. and ghadiri , m.r .design of a directed molecular network . pnas .101(30 ) : 1087210877 .bang - jensen , j. and gutin , g. ( 2001 ) .digraphs : theory , algorithms and applications .springer - verlag .bollobas , b. and rasmussen , s. ( 1989 ) .first cycles in random directed graph processes .discrete mathematics 75 : 5568 .bonchev , d. and mekenyan , o. ( 1994 ) .graph theoretical approaches to chemical reactivity .centler , f. , kaleta , c. speroni di fenizio , p. and dittrich , p. ( 2008 ) . computing chemical organisations in biological networks .bioinformatics 24 ( 14 ) : 16111618 .contreras , d.a ., pereira , u. , hernndez , v. , reynaert , b. and letelier , j.c .( 2011 ) . a loop conjecture for metabolic closure . in : advances in artificial life ,ecal 2011 : proceedings of the eleventh european conference on the synthesis and simulation of living systems .mit press ; pp . 176183 .dittrich , p. and speroni di fenizio , p. ( 2007 ) .chemical organisation theory .bulletin of mathematical biology .69 : 11991231 .dowling , w and gallier , j. ( 1984 ) .linear - time algorithms for testing the satisfiability of propositional horn formulae .journal of logic programming , 1(3):267284 .dyson , f.j .( 1982 ) . a model for the origin of life .journal of molecular evolution . 18 : 344360 .eigen , m. , schuster , p. ( 1977 ) .the hypercycle : a principle of natural self - organisation .part a : emergence of the hypercycle .naturwissenschaften 64 , 541565 .garey , m. r. and johnson , d. s. ( 1979 ) .computers and intractability : a guide to the theory of np - completeness .hayden , e.j . , von kiedrowski , g. and lehman , n. ( 2008 ) .systems chemistry on ribozyme self - construction : evidence for anabolic autocatalysis in a recombination network .angewandte chemie international edition , 120 : 8552 - 8556 .hordijk , w. and steel , m. ( 2004 ) .detecting autocatalyctic , self - sustaining sets in chemical reaction systems .journal of theoretical biology 227(4 ) : 451 - 461 .hordijk , w. , kauffman , s. , and steel , m. ( 2011 ) . required levels of catalysis for the emergence of autocatalytic sets in models of chemical reaction systems .international journal of molecular sciences ( special issue : origin of life 2011 ) 12 : 30853101 .hordijk , w. , steel , m. and kauffman , s. ( 2012 ) .the structure of autocatalytic sets : evolvability , enablement , and emergence .acta biotheoretica 60 : 379392 .hordijk , w. and steel , m. ( 2012 ) .autocatalytic sets extended : dynamics , inhibition , and a generalization .journal of systems chemistry , 3:5 ( in press ) .hordijk , w.and steel , m. ( 2012 ) . a formal model of autocatalytic sets emerging in an rna replicator system .submitted to journal of systems chemistry ( arxiv:1211.3473 ) .jaramillo , s. , honorato - zimmer , r. ; pereira , u. , contreras , d. , reynaert , b. , hernndez , v. , soto - andrade , j. , crdenas , m. 
, cornish - bowden , a. ; letelier , j. ( 2010 ) .( m , r ) systems and raf sets : common ideas , tools and projections . in proceedings of the alife xii conference , odense , denmark ,1923 august , 2010 ; pp . 94100 .kauffman , s.a .cellular homeostasis , epigenesis and replication in randomly aggregated macromolecular systems .journal of cybernetics 1(1 ) , 7196 .kauffman , s.a .autocatalytic sets of proteins .journal of theoretical biology 119 , 124 kauffman , s.a .the origins of order .oxford university press .kreyssig , p. , escuela , g. , reynaert , b. , veloz , t. ibrahim , b. , dittrich , p. ( 2012 ) .cycles and the qualitative evolution of chemical systems . plos 7(1 ) : e45772 .lee , d.h . ,severin , k. , ghadiri , m.r .autocatalytic networks : the transition from molecular self - replication to molecular ecosystems .current opinion in chemical biology , 1 : 491496 .letelier , j.c ., soto - andrade , j. , abarzua , f.g ., cornish - bowden , a. , crdenas , m.l .organizational invariance and metabolic closure : analysis in terms of ( m;r ) systems .journal of theoretical biology 238 , 949961 .mossel , e. and steel , m. ( 2005 ) .random biochemical networks and the probability of self - sustaining autocatalysis .journal of theoretical biology 233(3 ) , 327 - 336 .oparin , a.i .the pathways of the primary development of metabolism and artificial modeling of this development in coacrvate drops . in fox ,the origins of pre - biological systems and of their molecular matrices .sharov , a. ( 1991 ) .self - reproducing systems : structure , niche relations and evolution .biosystems , 25 : 237249 .sievers , d. and von kiedrowksi , g. ( 1994 ) .self - replication of complementary nucleotide - based oligomers .369 : 221224 .taran , o. , thoennessen , o. , achilles , k. and von kierdoswki . g. ( 2010 ) .synthesis of information - carrying polymers of mixed sequences from double stranded short deoxynunleotides .journal of systems chemistry , 1(9 ) .steel , m. ( 2000 ) .the emergence of a self - catalysing structure in abstract origin - of - life models . applied mathematics letters 13 ( 3 ) : 91 - 95 .vadhan , s.p .the complexity of counting in sparse , regular , and planar graphs .siam journal on computing 31 : 398 427 .vaidya , n. , manapat , l. , chen , i.a . and xulvi - brunet , r. ( 2012).spontaneous network formation among cooperative rna replicators . nature , 491 : 72 - 77 .vasas v. , fernando c. , santos m. , kauffman s. , szathmry e. ( 2012 ) .evolution before genes .biology direct , 7:1 .wchterhuser , g. ( 1990 ) .evolution of the first metabolic cycles .proceedings of the national academy of sciences usa 87 : 200 - 204 .zhang , h. and stickel , m. ( 1996 ) .an efficient algorithm for unit - propagation . in proceedings of the fourth international symposium on artificial intelligence and mathematics ( ai - math96 ) , fort lauderdale ( florida usa ) .to establish , suppose that and that is an raf for . is an raf for , where with , and so is certainly an raf for .furthermore , since .hence is a co - raf for . to establish that is a co - raf for .then there exists an raf for , such that and is an raf for .consider . 
since is an raf for andis a subset of , we must have .it remains to show that is an raf for .suppose that .then either , in which case all the reactants of and at least one catalyst are contained in ( since is an raf for ) , while if then all the reactants of and at least one catalyst is contained in ( since is an raf for ) .now , and are both subsets of , and so and are both subsets of .consequently , every reaction in has all its reactants and at least one catalyst in , which implies that is an raf for . to establish ,suppose that is a co - raf for .then there exists an raf for , such that and is an raf for .trivially , and clearly , since is non - empty by the definition of a co - raf , so take and . to establish ,suppose that is a co - raf for .then there exists an raf for such that and is an raf for .it suffices to show that is an raf for , where .first we prove that is generated from : i.e. . let . is -generated , so there exists an ordering of its reactions satisfying part ( iv ) of lemma [ lemequiv2 ] for the food set .we herein refer to an ordering satisfying part ( iv ) of lemma [ lemequiv2 ] for some food set as a _ proper ordering relative to f_. is -generated so there exists a proper ordering relative to , , of its reactions . define to be the ordering of the reactions of obtained by deleting from every reaction that also appears in , preserving the order of the remaining reactions .we claim that the concatenation , a reordering of , is a proper ordering relative to .consider any reaction and a reactant . is -generated and , so by part ( ii ) of lemma [ lemequiv2 ] at least one of the following holds : ( i ) , ( ii ) for some , or ( iii ) for some .if ( i ) alone is true , trivially does not prevent the reordering from being a proper ordering relative to .if ( ii ) alone is true , every precedes in , so certainly does not prevent the reordering from being a proper ordering relative to .if ( iii ) alone is true , must precede in and the order of the reactions of in is preserved in , so precedes in .if more than one of ( i)-(iii ) are true , then since is a proper ordering , at least one of the conclusions will hold , which is sufficient .therefore our claim that is a proper ordering relative to is justified .it follows that , and for each , , so moreover alone is a proper ordering relative to .then , by the implication in lemma 3.1 , is generated from .it remains to show that is reflexively autocatalytic .since and are -generated , we can apply part ( ii ) of lemma [ lemequiv2 ] to and to deduce that they are equal .now , since is reflexively autocatalytic then certainly is reflexively autocatalytic ( by definition ) . to establish , it suffices to show that is an raf for , since we already have that is an raf for and .first we prove that is -generated . is -generated , so there exists a proper ordering relative to of its reactions .similarly for there exists a proper ordering relative to of its reactions .hence the concatenation is a proper ordering relative to of the reactions in , so is -generated .it remains to show that is reflexively autocatalytic .since and are each -generated , we can apply part ( ii ) of lemma [ lemequiv2 ] to each of , and to deduce that .now since and are reflexively autocatalytic then certainly is reflexively autocatalytic .this completes the proof . 
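the proper orderings relative to the food set that are manipulated throughout this proof can also be constructed mechanically: greedily pick any reaction whose reactants are already available, make its products available, and repeat. the sketch below (using the same reaction-triple representation as before) returns such an ordering when the set is f-generated and none otherwise; it is offered as an illustration of part (iv) of the lemma, not as part of the proof.

```python
def proper_ordering(food, reactions):
    """Return an ordering r_1, ..., r_k in which every reaction consumes only
    molecules from F or products of earlier reactions, or None if no such
    ordering exists (i.e. the set is not F-generated)."""
    available = set(food)
    remaining = dict(reactions)        # rid -> (reactants, products, catalysts)
    order = []
    while remaining:
        ready = [rid for rid, (rs, _, _) in remaining.items() if rs <= available]
        if not ready:
            return None                # some reactant can never be supplied
        for rid in ready:
            order.append(rid)
            available |= remaining[rid][1]
            remaining.pop(rid)
    return order
```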
_proof : _ min - raf is clearly in the complexity class np , since one can verify in polynomial time if a given subset of has size , at most , and forms an raf .we will reduce the graph theory problem vertex cover to min - raf . recall that for a graph , a _vertex cover _ of is a subset of with the property that each edge of is incident with at least one vertex in ; vertex cover has as its instance a graph and an integer and we ask whether or not has a vertex cover of size , at most , .this is a well - known np - complete problem ( indeed , one of karp s original 21 np - complete problems ) .given an instance of vertex cover , we show how to construct an instance , of min - raf for which the answers to the two decision problems are identical .we first construct and .for each , let be two distinct elements of and let be an element of .order as and for each , let be a distinct element of and an element of . let be another distinct element of .thus consists of the elements : and consists of elements : for each , define a reaction : for each , define the reaction : and for , let : for any subset of let : and set * if ( where ) then is catalysed by both and ( but by no other molecules ) . * in addition , each reaction is catalysed by and by no other molecule - we call the molecule the _ super - catalyst_. * is an raf for . *a subset of is an raf for if and only if for a vertex cover of .* the vertex covers of of size are in one - to - one correspondence with the sub - rafs of of size . to establish the second claim ,suppose that is a vertex cover of .then every reaction in is catalysed by the product of least one reaction in .moreover , the product of catalyses all the remaining reactions .thus , is reflexively autocatalytic , and it is also clear that is -generated ; thus is an raf and it has reactions .conversely , suppose that is an raf for of size at most .if is not in then the super - catalyst is not produced by any reaction in so none of the reactions in is catalysed ; moreover , because the products from these last reactions provide the only catalysts for it follows that .thus , since is non - empty ( being an raf ) , must be an element of , and in order to construct the reactants of , all the reactions must form a subset of . in order for all these reactions to be catalysed , at least one of the reactions and must lie in for each .thus is a vertex cover of and it has size , at most , as claimed .this establishes the required reduction , and thereby completes the proof of the second claim .the third claim follows by the noting that the association maps vertex covers of of size onto sub - rafs of of size ( by the previous claim ) and two different vertex covers are mapped to distinct sub - rafs .this completes the proof .part ( i ) of theorem [ npthm ] now follows from the first two claims , while part ( ii ) of theorem [ npthm ] follows from the third claim , combined with the # p - completeness of counting vertex covers of a graph and minimum vertex covers of a graph ( see ) .* remark : * we have ensured in the proof above that each reaction has just two reactants , in line with the binary polymer model . however , the attentive reader will notice that may have to be quite large .nevertheless , it is quite straightforward to modify this example so that is kept small ( e.g. of size 6 ) , and to implement the construction within the constraints of the binary polymer cleavage ligation model . 
| self - sustaining autocatalytic chemical networks represent a necessary , though not sufficient condition for the emergence of early living systems . these networks have been formalised and investigated within the framework of raf theory , which has led to a number of insights and results concerning the likelihood of such networks forming . in this paper , we extend this analysis by focussing on how _ small _ autocatalytic networks are likely to be when they first emerge . first we show that simulations are unlikely to settle this question , by establishing that the problem of finding a smallest raf within a catalytic reaction system is np - hard . however , irreducible rafs ( irrrafs ) can be constructed in polynomial time , and we show it is possible to determine in polynomial time whether a bounded size set of these irrrafs contain the smallest rafs within a system . moreover , we derive rigorous bounds on the sizes of small rafs and use simulations to sample irrrafs under the binary polymer model . we then apply mathematical arguments to prove a new result suggested by those simulations : at the transition catalysis level at which rafs first form in this model , small rafs are unlikely to be present . we also investigate further the relationship between rafs and another formal approach to self - sustaining and closed chemical networks , namely chemical organisation theory ( cot ) . catalytic reaction system , random autocatalytic network , origin of life 1 cm `` individual chemical reactions in living beings are strictly coordinated and proceed in a certain sequence , which as a whole forms a network of biological metabolism directed toward the perpetual self - preservation , growth , and self - reproduction of the entire system under the given environmental conditions '' oparin ( 1965 ) |
how do real - world networks evolve with time ?while empirical studies provide many intuitions and expectations , many questions remain open . in particular , we lack tools to characterize and quantitatively compare temporal graph dynamics . in turn ,such tools require good observables to quantify the ( temporal ) relationships between networks .in particular , the few network dynamics models that currently exist are often _ oblivious of the network type_. this is problematic , as complex networks come in many different flavors , including social networks , biological networks , or physical networks .it seems highly unlikely that these very different graphs evolve in a similar manner .a natural prerequisite to measure evolutionary distances are good metrics to _ compare _ graphs .the classic similarity measure for graphs is the _ graph edit distance ( ) _ : the graph edit distance between two graphs and is defined as the minimal number of _ graph edit operations _ that are needed to transform into .the specific set of allowed graph edit operations depends on the context , but typically includes node and link insertions and deletions .while graph edit distance metrics play an important role in computer graphics and are widely applied to pattern analysis and recognition , is not well - suited for measuring similarities of networks in other contexts : the set of graphs at a certain graph edit distance from a given graph exhibit very diverse characteristics and seem unrelated ; being oblivious to semantics , the does not capture any intrinsic structure typically found in real - world networks . a similarity measure that takes into account the inherent structure of a graphmay however have many important applications .a large body of work on graph similarities focusing on a variety of use cases have been developed in the past ( see our discussion in section [ sec : relwork ] ) .depending on the context in which they are to be used , one or another is more suitable . in particular, we argue that graph similarities and graph distance measures are also an excellent tool for the analysis , comparison and prediction of temporal network traces , allowing us to answer questions such as : _ do these two networks have a common ancestor ? _ _ are two evolution patterns similar ? _ or _ what is a likely successor network for a given network ?_ however , we argue that in terms of graph similarity measures , there is no panacea : rather , graphs and their temporal patterns , come with many faces . accordingly , we in this paper , propose to use a parametric , _ centrality - based approach _ to measure graph similarities and distances , which in turn can be used to study the evoluation of networks .more than one century ago , camille jordan introduced the first graph _ centrality _ measure in his attempt to capture `` the center of a graph '' . since then the family of centrality measures has grown larger and is commonly employed in many graph - related studies .all major graph - processing libraries commonly export functionality for degree , closeness , betweenness , clustering , pagerank and eigenvector centralities . in the context of static graphs ,_ centralities _ have proven to be a powerful tool to extract meaningful information on the structure of the networks , and more precisely on the _ role _ every participant ( node ) has in the network . in social network analysis ,centralities are widely used to measure the importance of nodes , e.g. 
, to determine key players in social networks , or main actors in the propagation of diseases , etc . today, there is no consensus on good " and bad " centralities : each centrality captures a particular angle of a node s topological role , some of which can be either crucial or insignificant , depending on the application ._ am i important because i have many friends , because i have important friends , or because without me , my friends could not communicate together ? _the answer to this question is clearly context - dependent . in this paper, we argue that the perceived quality of network similarities or distances measuring the difference between two networks depends on the focus and application just as much . instead of debating the advantages and disadvantages of a set of similarities and distances , we provide a framework to apply them to characterize network evolution from different perspectives .in particular , we leverage centralities to provide a powerful tool to quantify network changes .the intuition is simple : to measure how a network evolves , we measure the change of the nodes roles and importance in the network , by leaving the responsibility to quantify node importance to centralities .[ [ our - contributions ] ] our contributions + + + + + + + + + + + + + + + + + this paper is motivated by the observation that centralities can be useful to study the dynamics of networks over time , taking into account the individual roles of nodes ( in contrast to , e.g. , isomorphism - based measures , as they are used in the context of anonymous graphs ) , as well as the context and semantics ( in contrast to , e.g. , graph edit distances ) .in particular , we introduce the notion of _ centrality distance _ for two graphs , a graph similarity measure based on a _ node centrality _ .we demonstrate the usefulness of our approach to identify and characterize the different faces of graph dynamics . to this end , we study five generative graph models and seven dynamic real world networks in more details .our evaluation methodology comparing the quality of different similarity measures to a random baseline using data from actual graph evolutions , may be of independent interest .in particular , we demonstrate how centrality distances provide interesting insights into the structural evolution of these networks and show that actual evolutionary paths are far from being random .moreover , we build upon the centrality distance concept to construct dynamic graph signatures .the intuition is simple : we measure the probability of an update to be considered as an outlier compared to a uniformly random evolution .this allows us to quantify the deviation of a given dynamic network from a purely random evolution ( our null - model ) of the same structure for a set of centrality distances .the signature consisting of the resulting deviation values enables the comparison of different dynamisms on a fair basis , independently from scale and sampling considerations .[ [ examples ] ] examples + + + + + + + + to motivate the need for tools to analyse network evolution , we consider two simple examples .* example 1 . * [ local / global scenario ] consider three graphs , , over five nodes : is a line , where and are connected ; is a cycle , i.e. , with an additional link ; and is with an additional link . 
in this example , we first observe that and have the same graph edit distance to : , as they contain one additional edge .however , in a social network context , one would intuitively expect to be closer to than .for example , in a friendship network a short - range `` _ _ triadic closure _ _ '' link may be more likely to emerge than a long - range link : friends of friends may be more likely to become friends themselves in the future .moreover , more local changes are also expected in mobile environments ( e.g. , under bounded human mobility and speed ) .as we will see , the centrality distance concept of this paper can capture such differences .* example 2 . * [ evolution scenario ] as a second synthetic example , consider two graphs and , where is a line topology and is a `` shell network '' ( see also figure [ fig : methodology ] ) .how can we characterize evolutionary paths leading from the topology to ?note that the graph edit distance does not provide us with any information about the likelihood or the role changes of _ evolutionary paths _ from to , i.e. , on the order of edge insertions : there are many possible orders in which the missing links can be added to , and these orders do not differ in any way when comparing them with the graph edit distance .in reality , however , we often have some expectations on how a graph may have evolved between two given snapshots and . for example , applying the triadic closure principle to our example , we would expect that the missing links are introduced one - by - one , from left to right . to a shell graph . ]the situation may look different in technological , man - made networks .adding links from left to right only slowly improves the `` routing efficiency '' of the network : after the addition of edges from left to right , the longest shortest path is hops , for .a more efficient evolution of the network is obtained by connecting to the furthest node , adding links to the middle of the network , resulting in a faster distance reduction : after edge insertions , the distance is roughly reduced by a factor .thus , different network evolution patterns can be observed in real networks . instead of defining application - dependent similarities with design choices focusing on which evolution patterns are more expected from a certain network , we provide a framework that allows the joint characterization of graph dynamics along different axes .[ [ organization ] ] organization + + + + + + + + + + + + the remainder of this paper is organized as follows .section [ sec : background ] provides the reader with the necessary background .section [ sec : graph - dist ] introduces our centrality distance framework and section [ sec : methodology ] our methodology to study the different graph dynamics empirically . section [ sec : experiments ] reports on results from analyzing real and generated networks . after reviewing related work in section [ sec : relwork ] ,we conclude our contribution in section [ sec : conclusion ] .this paper considers _ labeled _ graphs , where vertices have unique identifiers and are connected via undirected edges . in the following ,we denote as the set of neighbors of node : .a temporal network trace is a sequence $ ] , where represents the network at the snapshot .we focus on _ node centralities _ , a centrality being a real - valued function assigning `` importance values '' to nodes . 
obviously , the notion of importance is context - dependent , which has led to many different definitions of centralities .we refer to for a thorough and formal discussion on centralities . a _ centrality _ is a function that , given a graph and a vertex , returns a non - negative value .the centrality function is defined over all vertices of a given graph . by convention, we define the centrality of a node without edges to be 0 .we write to refer to the vector in where the element is for a given order of the identifiers .centralities are a common way to characterize networks and their vertices . frequently studied centralities include the _ degree centrality _( ) , the _ betweenness centrality _ ( ) , the _ closeness centrality _ ( ) , and the _ pagerank centrality _ ( ) among many more . a node is -central if it has many edges : the degree centrality is simply the node degree ; a node is -central if it is on many shortest paths : the betweenness centrality is the number of shortest paths going through the node ; a node is -central if it is close to many other nodes : the closeness centrality measures the inverse of the distances to all other nodes ; and a node is -central if the probability that a random walk on visits this node is high .we use the classical definitions for centralities , and the exact formulas are presented in [ appendix : centrality ] for the sake of completeness .finally , throughout this paper , we will define the graph edit distance between two graphs and as the minimum number of operations to transform into ( or vice versa ) , where an _ operation _ is one of the following : link insertion and link removal .the canonical distance measure is the graph edit distance , .however , often provides limited insights into the graph dynamics in practice .figure [ fig : methodology ] shows an example with two evolutionary paths : an incremental ( _ left _ ) and a binary ( _ right _ ) path that go from to . with respect to , there are many equivalent shortest paths for moving from to .however , intuitively , not all traces are equally likely for dynamic networks , as the structural roles that nodes in networks have are often preserved and do not change arbitrarily . clearly ,studying graph evolution with ged thus can not help us to understand how structural properties of graphs evolve .the graph edit distance does not provide much insights into graph evolution .we in this paper aim to enrich the graph similarity measure with semantics . at the heart of our approach lies the concept of _ centrality distance _ : a simple and flexible tool to study the similarity of graphs .essentially , the centrality distance measures the similarity between two centrality vectors .it can be used to measure the distance between two arbitrary graphs , not only between graphs with graph edit distance 1 . given a centrality , we define the centrality distance between any two graphs as the sum of the node - wise difference of the centrality values : thus , the centrality distance intuitively measures the magnitude by which the roles of different nodes change .while we focus on the 1-norm in this paper , the concept of centrality distance can be useful also for other norms . both the importance of node roles as well as the importance of node role changes is application - dependent . due to the large variety of processesdynamic graphs can capture , there is no one - size - fits - it - all measure of importance . 
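as a concrete illustration of the definition above , the following sketch computes the 1-norm centrality distance for the graphs of example 1 with networkx ; the helper name ` centrality_distance ` and the choice of closeness centrality are ours , not part of the original framework :

```python
import networkx as nx

def centrality_distance(g1, g2, centrality=nx.closeness_centrality):
    """1-norm centrality distance between two labeled graphs: the sum, over all
    node identifiers, of the absolute differences of the centrality values.
    nodes absent from a graph (or without edges) contribute a centrality of 0."""
    c1, c2 = centrality(g1), centrality(g2)
    nodes = set(g1.nodes()) | set(g2.nodes())
    return sum(abs(c1.get(v, 0.0) - c2.get(v, 0.0)) for v in nodes)

# example 1 from above: a 5-node line g1, the cycle g3 = g1 + {v1, v5},
# and g2 = g1 + the short-range "triadic closure" link {v1, v3}
g1 = nx.path_graph(5)                    # nodes 0..4 in a line
g3 = g1.copy(); g3.add_edge(0, 4)        # long-range link closes the cycle
g2 = g1.copy(); g2.add_edge(0, 2)        # triadic closure

print(centrality_distance(g1, g2))       # ~0.44 : node roles change little
print(centrality_distance(g1, g3))       # ~0.72 : node roles change globally
```

with this ( one possible ) choice of centrality , the triadic - closure graph is indeed closer to the line than the cycle is , even though both are at graph edit distance 1 .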
to illustrate that there is no one - size - fits - it - all notion of importance , let us consider the `` intuitive '' similarity properties proposed by faloutsos et al . for instance , the proposed edge importance property should penalize changes that create disconnected components more than changes that maintain the connectivity properties of the graphs . now imagine a cycle graph of 100 nodes , and a single additional node connected to . according to the proposed edge importance property the most important link is . indeed , it is the only link whose removal would create a disconnected component ( containing alone ) . yet the removal of any other link would double the diameter of the structure , or , in an information dissemination network , all nodes would have to update half of their routing tables . so which link is more important ? the answer clearly depends on the context . similar examples can be found for other properties proposed in , e.g. , regarding submodularity and focus - awareness . not only are these properties hard to formalize , their utility varies from application to application . we conclude by noting that given two centralities and and two arbitrary graphs and with nodes , the respective distances are typically different , i.e. , . hence , using a set of different centrality distances , we can explore the variation of the graph dynamics in more than one `` dimension '' . in order to characterize the different faces of graph dynamics and to study the benefits of centrality - based measures , we propose a simple methodology . intuitively , given a centrality capturing well the roles of different nodes in a real - world setting , we expect the centrality distance between two consecutive graph snapshots and to be smaller than the typical distance from to other graphs that have the same ged . to verify this intuition , we define a _ null model _ for evolution . a null model generates networks using patterns and randomization , i.e. , certain elements are held constant and others are allowed to vary stochastically . ideally , the randomization is designed to mimic the outcome of a random process that would be expected in the absence of a particular mechanism . applied to our case , this means that starting from a given snapshot that represents the fixed part of the null model , if the evolution follows a null model , then any graph randomly generated from at the given is equally likely to appear . concretely , for all consecutive graph pairs and of a network trace , we determine the graph edit distance ( or `` radius '' ) . then , we generate a set of sample graphs at the same from _ uniformly at random _ . that is , to create , we first start from a copy of and select node pairs , , uniformly at random . for each of these pairs we add the edge to if it does not exist in , or we remove it if it was in originally . such randomly built sample graphs at the same graph edit distance allow us to assess the impact of a uniformly random evolution of the same magnitude from the same starting graph : in other words , is the pattern and the evolution to at graph edit distance is the randomized part of the null model . ( an appendix describes results obtained for an alternative null model that additionally guarantees the average degree of in the sample graphs . )
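the uniformly random sampling step just described can be sketched as follows ( a minimal python sketch ; the helper names and the edge - set formulation of the labeled - graph ged are ours ) :

```python
import random

def ged(g1, g2):
    """graph edit distance for labeled graphs when the only edit operations are
    link insertion and removal: the size of the symmetric edge-set difference."""
    e1 = {frozenset(e) for e in g1.edges()}
    e2 = {frozenset(e) for e in g2.edges()}
    return len(e1 ^ e2)

def null_model_sample(g, k, rng=random):
    """toggle k distinct node pairs of g uniformly at random, which yields a
    sample graph at graph edit distance exactly k from g (the null model)."""
    n = g.number_of_nodes()
    assert k <= n * (n - 1) // 2, "cannot toggle more pairs than exist"
    h = g.copy()
    nodes = list(g.nodes())
    toggled = set()
    while len(toggled) < k:
        pair = frozenset(rng.sample(nodes, 2))
        if pair in toggled:
            continue
        toggled.add(pair)
        u, v = tuple(pair)
        if h.has_edge(u, v):
            h.remove_edge(u, v)
        else:
            h.add_edge(u, v)
    return h
```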
as a next step ,given a centrality , we compare with the set that samples the evolution following the null model .we consider that does not follow the null model if it is an _ outlier _ in the set for the centrality .practically , is considered an outlier if the absolute value of its distance from minus the mean distance of to is at least twice the standard deviation , i.e. , if given a temporal trace , we define as the fraction of outliers in the trace for centrality .an ensemble of such values for a set of centralities is called a _dynamic signature _ of .based on our centrality framework and methodology , we can now shed some light on the different faces of graph dynamics , using real world data sets . * * caida ( as ) : * this data captures the autonomous systems relationships as captured by the caida project .each of the snapshots represents the daily interactions of the first as identifiers from august 1997 until december 1998 .* * icdcs ( icdcs ) : * we extracted the most prolific authors in the icdcs conference ( ieee international conference on distributed computing systems ) and the co - author graph they form from the dblp publication database ( http://dblp.uni-trier.de ) .this trace contains 33 snapshots of 691 nodes and 1076 collaboration edges .the timestamp assigned to an edge corresponds to the first icdcs paper the authors wrote together . clearly ,the co - authorship graph is characterized by a strictly monotonic densification over time . * * uci social network ( uci ) : * the third case study is based on a publicly available dataset , capturing all the messages exchanges realized on an online facebook - like social network between 1882 students at university of california , irvine over 7 months .we discretized the data into a dynamic graph of 187 time steps representing the daily message exchanges among users .* * hypertext ( ht ) : * face - to - face interactions of the acm hypertext 2009 conference attendees .113 participants were equipped with rfid tags .each snapshot represents one hour of interactions . * * infectious ( in ) : * face - to - face interactions of the `` infectious : stay away '' exhibition held in 2009 .410 participants were equipped with rfid tags .each snapshot represents 5 minutes of the busiest exhibition day . ** manufacture ( ma ) : * daily internal email exchange network of a medium - size manufacturing company ( 167 nodes ) over 9 months of 2010 . * * souk ( sk ) : * this dataset captures the social interactions of 45 individuals during a cocktail , see for more details .the dataset consists of snapshots , describing the dynamic interaction graph between the participants , one time step every seconds . in the network traces.,scaledwidth=70.0% ] figure [ fig : datasets ] provides a temporal overview on the evolution of the number of edges in the network and the between consecutive snapshots .some of the seven datasets exhibit very different dynamics : one can observe the time - of - day effect of attendees interactions on _ hypertext _ , and the day - of - week effect on _manufacture_. 
_ uci _ , _ hypertext _ , _ infectious _ and _ manufacture _ all exhibit a high level of dynamics with respect to their number of links . this is expected for _ infectious _ , as visitors come and leave regularly and rarely stay for long , but rather surprising for _ _ . the density of _ caida _ slowly increases , and with a steady . similarly , the number of co - author edges of _ icdcs _ steadily increases over the years , while the number of new edges per year is relatively stable . the number of days of the conference _ hypertext _ and the fact that conference participants sleep during the night and do not engage in social activity is evident in the second trace . the dynamic pattern of the online social network _ uci _ has two regimes : it has a high dynamics for the first 50 timestamps , and is then relatively stable , whereas _ souk _ exhibits a more regular dynamics . generally , note that can be at most twice as high as the maximal edge count of two consecutive snapshots . ( figure [ fig : chronogram ] , one panel each for hypertext , infectious and icdcs : centrality distances between and in dashed red lines , and between and 100 graphs with the same ged in solid blue lines representing the median , bars in grey . ec : ego centrality , bc : betweenness centrality , cc : closeness centrality , kc : cluster centrality , pc : pagerank centrality . ) figure [ fig : chronogram ] presents examples of the results of our comparison of random graphs with the same graph edit distance ged as real - world network traces . the red dashed lines represent the centrality distances of and . the distribution of values from to the randomly sampled graphs of is represented as follows : the blue line is the median , while the gray lines represent the outlier detection window . for most graphs under investigation and for most centralities it holds that the induced centrality distance between and is often lower than between and an arbitrary other graph with the same distance . there are however a few noteworthy details . _ hypertext _ and _ infectious _ exhibit very similar dynamics from a ged perspective , as shown in figure [ fig : datasets ] . yet from the other centralities perspective , their dynamism is very different . consider for instance _ infectious _ for , where the measured distance is consistently an order of magnitude less than the sampled one . this can be understood from the link creation mechanics : in _ infectious _ , visitors at different time periods never meet .
by connecting these in principle very remote visitors , the null model dynamics creates highly important links . this does not happen in _ hypertext _ , where the same group of researchers meet repeatedly . in the monotonically growing co - authorship network of icdcs , we can observe that closeness and ( ego ) betweenness distances grow over time , which is not the case for the other networks in figure [ fig : chronogram ] . when looking at other centrality distances , we observe that even though the local structure changes , a different set of properties remains mostly unaltered across different networks . moreover , for some _ ( graph , distance ) _ pairs , like on _ icdcs _ , on _ hypertext _ , or on _ infectious _ , the measured distance is orders of magnitude lower than the median of the sampled ones . this underlines a clear difference between random evolution and the observed one from this centrality perspective : the link update dynamics is biased . figure [ fig : spider ] summarizes the signatures for applied to 7 real and 5 synthetic graphs in the form of a histogram chart . for synthetic graphs , each point is the average of 50 independent realizations of the model , and . that is , each chart represents the probability of having graph evolutions being outliers with respect to the null model for the corresponding centralities . interestingly , this `` distinction ratio '' is not uniform among datasets . on _ caida _ , _ infectious _ and _ uci _ , the ratio is high for local centralities such as _ pagerank _ and _ clustering _ , and low for global centralities such as _ closeness _ or _ betweenness _ . on the contrary , _ hypertext _ and _ manufacture _ exhibit large ratios for global centralities and small ratios for local centralities . both local and global centralities perform well on souk . the difference of these behaviors shows that these graphs adhere to different types of dynamics . ( figure [ fig : spider ] : for each dataset and centrality , the probability that is an outlier w.r.t . the null model for the corresponding centrality . synthetic scenarios are depicted in _ blue _ , real scenarios in _ red _ . the black line at represents the null model , i.e. , the fraction of graphs that are at distance at least from the mean in a normal distribution . synthetic datasets : ba : barabási - albert , cmhalf : preferential attachment with equiprobable node and edge events , cmlog : preferential attachment with node events decaying in log , er : erdős - rényi , rr : random regular . real - life datasets : ca : caida , icdcs : icdcs co - authors , uci : online social network of uci , ht : hypertext conference , in : infectious , ma : manufacture mails , sk : souk cocktail . )
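combining the sampling above with the outlier criterion and the dynamic signature defined earlier , the per - trace computation can be sketched as follows ( reusing ` ged ` , ` null_model_sample ` and ` centrality_distance ` from the previous sketches ; the 100 samples per step mirror the figures above ) :

```python
import random
import statistics

def dynamic_signature(trace, centralities, n_samples=100, rng=random):
    """fraction of outlier transitions per centrality over a temporal trace
    (the dynamic signature), using the >= 2 standard deviations criterion."""
    steps = list(zip(trace, trace[1:]))
    signature = {name: 0 for name in centralities}
    for g_t, g_next in steps:
        k = ged(g_t, g_next)                      # radius of the null model
        if k == 0:
            continue
        samples = [null_model_sample(g_t, k, rng) for _ in range(n_samples)]
        for name, cent in centralities.items():
            d_real = centrality_distance(g_t, g_next, cent)
            d_samp = [centrality_distance(g_t, s, cent) for s in samples]
            mu, sigma = statistics.mean(d_samp), statistics.stdev(d_samp)
            outlier = abs(d_real - mu) >= 2 * sigma if sigma > 0 else d_real != mu
            signature[name] += outlier
    return {name: c / max(1, len(steps)) for name, c in signature.items()}
```

called with , e.g. , { ' bc ' : nx.betweenness_centrality , ' cc ' : nx.closeness_centrality , ' pc ' : nx.pagerank } , the returned dictionary corresponds to one group of bars of the kind summarized in figure [ fig : spider ] .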
to complement our observations on real networks with graph snapshots produced according to a model , we investigated graph traces generated by some of the most well - known models : _ erdős - rényi _ er , _ random regular _ rr , _ barabási - albert _ ba and _ preferential attachment _ graphs with an equal number of node and edge events ( cmhalf ) and with the number of node events depending logarithmically on the time ( cmlog ) . perhaps the most striking observation is that all tested dynamic network models have low values for all . this is partly due to the fact that the graph edit distance between two subsequent snapshots is one and thus the centrality vectors do not vary as much as between the snapshots and the sampled graphs of the same graph edit distance for the real networks . moreover , these randomized synthetic models are closer to the null model , and lack some of the characteristics ( like link locality ) of real world networks . furthermore , we observe that each random network model exhibits distinct dynamics signatures , with er being closest to the null model . to the best of our knowledge , our paper is the first to combine the concepts of centralities and graph distances . in the following , we review related work in the two fields in turn , and subsequently discuss additional literature on dynamic graphs . * graph characterizations and centralities . * graph structures are often characterized by the frequency of small patterns called _ motifs _ , also known as _ graphlets _ , or _ structural signatures _ . another important graph characterization , which is studied in this paper , is based on _ centralities _ . dozens of different centrality indices have been defined over the last years , and their study is still ongoing , with no unified theory yet . we believe that our centrality distance framework can provide new inputs for this discussion . * graph similarities and distances . * graph edit distances have been used extensively in the context of inexact graph matchings in the field of pattern analysis . we refer the reader to the good survey by gao et al . soundarajan et al . compare twenty network similarities for anonymous networks . they distinguish between comparison levels ( node , community , network level ) and identify vector - based , classifier - based , and matching - based methods . surprisingly , they are able to show that the results of many methods are highly correlated . netsimile makes it possible to assess the similarity between networks , possibly with different sizes and no overlaps in nodes or links . netsimile uses different social theories to compute similarity scores that are size - invariant , enabling mining tasks such as clustering , visualization , discontinuity detection , network transfer learning , and re - identification across networks . the deltacon method is based on the normed difference of node - to - node affinity according to a belief propagation method . more precisely , the similarity between two graphs is the root euclidean distance of their two affinity matrices or an approximation thereof . the authors provide three axioms that similarities should satisfy and demonstrate using examples and simulations that their similarity features the desired properties of graph similarity functions . our work can be understood as an attempt to generalize the interesting approach by faloutsos et al .
in , which derives a distance from a normed matrix difference , where each element depends on the relationships among the nodes . in particular , we argue that there is no one - size - fits - it - all measure , and propose an approach parametrized by centralities . interestingly , we also prove that distances derived in our framework satisfy the axioms postulated in . * dynamic graphs . * among the most well - known evolutionary patterns are the shrinking diameter and densification . a lot of recent work studies link prediction algorithms . others focus on methods for finding frequent , coherent or dense temporal structures , or the evolution of communities and user behavior . another line of research attempts to extend the concept of centralities to dynamic graphs . some researchers study how the importance of nodes changes over time in dynamic networks . others define temporal centralities to rank nodes in dynamic networks and study their distribution over time . time centralities , which describe the relative importance of time instants in dynamic networks , are proposed in . in contrast to this existing body of work , our goal is to facilitate the direct comparison of entire networks and their dynamics , not only parts thereof . a closely related work , but using a different approach , is by kunegis . kunegis studies the evolution of networks from a spectral graph theory perspective . he argues that the graph spectrum describes a network on the global level , whereas eigenvectors describe a network at the local level , and uses these results to devise link prediction algorithms . * bibliographic note . * an early version of this work appeared at the acm fomc 2013 workshop . this paper was motivated by the observation that in terms of graph similarity measures , there is no `` one size fits it all '' . in particular , we have proposed a centrality - based distance measure , and introduced a simple methodology to study the different faces of graph dynamics . indeed , our experiments confirm that the evolution patterns of dynamic networks are not universal , and different networks need different centrality distances to describe their behavior . we observe that the edges in networks represent structural characteristics that are inherently connected to the roles of the nodes in these networks . these structures are maintained under changes , which explains the inertia of centrality distances , which capture these properties . this behavior can be used to distinguish between natural and random network evolution . after analyzing a temporal network trace with a set of centrality distances , one can guess with confidence whether future snapshots belong to the trace . we believe that our work opens a rich field for future research . in this paper , we focused on five well - known centralities and their induced distances , and showed that they feature interesting properties when applied to the use case of dynamic social networks . however , we regard our approach as a _ similarity framework _ , which can be configured with various additional centralities and metrics , which need not even be restricted to distance metrics , but can be based on the angles between centrality vectors or use existing correlation metrics ( e.g.
, pearson correlation , tanimoto coefficient , log likelihood ) .finally , exploiting the properties of centrality distances , especially their ability to distinguish and quantify between similar evolutionary traces , also opens the door to new applications , such as graph interpolation ( what is a likely graph sequence between two given snapshots of a trace ) and extrapolation , i.e. , for link prediction algorithms based on centralities .the authors would like to thank clment sire for insightful remarks on a previous version of this document .* degree centrality : * recall that is the set of neighbors of a node . the _ degree centrality _is defined as : * betweenness centrality : * given a pair , let be the number of shortest paths between and , and be the number of shortest paths between and that pass through .the _ betweenness centrality _ is : for consistency reasons , we consider that a node is on its own shortest path , i.e. , , and , by convention , . if is not connected , each connected component is treated independently ( ) .* ego centrality : * let be the subgraph of induced by .the _ ego centrality _ is : * closeness centrality : * let be the length of a shortest path between vertices and in .the _ closeness centrality _ is defined as : * pagerank centrality : * let be a damping factor ( e.g. , the probability that a random person clicks on a link ) .the _ pagerank centrality _ of is defined as : * cluster centrality * : the _ cluster centrality _ of a node is the cluster coefficient of , i.e. , the number of triangles in which is involved divided by all possible triangles in s neighborhood . by convention , for , and for . for higher degrees: present here some additional results related to an alternative choice of the null model . as described in the article , we base our methodology on a uniformly random evolutionary null model that is based on the graph edit distance andhence may not preserve some of intrinsic characteristics of networks under study , such as their density . to complete our study , figure [ figure : altnull ]provides the results of applying the methodology described in the article using such an alternative null model .more precisely , we ran the same experiments where the null model is a random process that ensures that the average degree of all sample graphs is the same as for .figure [ figure : orignull ] recalls the results we obtained for the uniformly random null model for comparison . for 4 out of 5 datasets , namely ht ( hypertext conference ) , in ( infectious ), ma ( manufacture mails ) and sk ( souk cocktail ) , results obtained in both cases are very similar . for all networks the dynamic signatures are strong , in the sense that the networks are outliers for many of the studied centralities and the signatures of different networks vary , illustrating their unique evolution paths . 
as expected , the ability of the presented method to distinguish the real network evolution compared to the networks generated according to the more refined null model decreases for most network traces and centralities .yet , results are strikingly different from the more general null model in the main part of the paper for the case of ca , the caida dataset .caida differs from the other datasets in the sense that it does not directly derive from human activity ( caida captures autonomous systems relationships ) , and the density in this dataset is much higher than in other considered datasets , while the graph edit distance between different snapshots does not vary much . | the topological structure of complex networks has fascinated researchers for several decades , resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks . however , much less is known today about the _ network dynamics _ : indeed , complex networks in reality are not static , but rather _ dynamically evolve over time_. our paper is motivated by the empirical observation that network evolution patterns seem far from random , but _ exhibit structure_. moreover , the specific patterns appear to depend on the network type , contradicting the existence of a `` one fits it all '' model . however , we still lack observables to quantify these intuitions , as well as metrics to compare graph evolutions . such observables and metrics are needed for extrapolating or predicting evolutions , as well as for interpolating graph evolutions . to explore the many faces of graph dynamics and to quantify temporal changes , this paper suggests to build upon the concept of centrality , a measure of node importance in a network . in particular , we introduce the notion of _ centrality distance _ , a natural similarity measure for two graphs which depends on a given centrality , characterizing the graph type . intuitively , centrality distances reflect the extent to which ( non - anonymous ) node roles are different or , in case of dynamic graphs , have changed over time , between two graphs . we evaluate the centrality distance approach for five evolutionary models and seven real - world social and physical networks . our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null - model of random evolution , and highlight the differences between the considered scenarios . interestingly , our approach allows us to compare the dynamics of very different networks , in terms of scale and evolution speed . + * keywords : * network dynamics , graph evolution . [ theorem]corollary [ theorem]lemma [ theorem]claim [ theorem]definition [ theorem]example [ theorem]fact |
general relativity teaches us space and time are not independent , but inseparably entangled in a unified _ spacetime _ . nevertheless , standard procedure in canonical gravity is to temporarily disregard this lesson , foliating spacetime into spacelike level sets of some time function . this gives an initial value formulation of general relativity , with many uses in classical and quantum gravity . but this approach , depending on an arbitrary , unobservable time function , has strange physical consequences . while the spacetime picture of gravity is described by _ local _ equations , the foliation constrains _ global _ spacetime geometry and topology . a well - posed initial value formulation demands global hyperbolicity , which in turn implies constant spatial topology . this leaves many interesting spacetimes , including even anti - de sitter space , with no ` dynamical ' description . here we present an alternative , _ fully local _ description of gravitational dynamics , based on the notion of a _ field of observers _ . this is useful for a geometric understanding of lorentz symmetry in canonical gravity , for relating geometrodynamics with connection dynamics , for linking canonical and covariant phase spaces , and for various possible extensions of general relativity . in minkowski space , an observer with velocity in hyperbolic space has a global notion of ` space , ' namely the subspace orthogonal to : an observer thus naturally splits spacetime fields into spatial and temporal parts . in more general spacetimes , this picture is valid only ` infinitesimally , ' on each tangent space . a * field of observers * is a unit future - directed timelike vector field , which any time - oriented lorentzian manifold admits . such a field suffices to split fields on a _ background _ spacetime into spatial and temporal parts . but here we are interested in general relativity , where the metric and hence the definition of observer is to be determined by the dynamics . can we define ` observers ' without using the metric ? fortunately , the * coframe field * in first - order gravity locally maps spacetime vectors to vectors in , which we view as an * internal spacetime * . this lets us translate between an observer field and a more primitive notion : a field of * internal observers * , assigning an observer to each point of spacetime . ( figure : the coframe field e maps the tangent space t_x m at a point x to the internal space r^3,1 , carrying the observer u at x to an internal observer y(x) , whose orthogonal complement r^3_y plays the role of ` space ' . ) starting with a smooth spacetime manifold , besides the field , we need : * a nowhere - vanishing 1-form , * an -valued 1-form , such that the two together combine into a nondegenerate coframe field . the observer field itself is found by solving . this implies is dual to , so from its perspective , ` space ' is the kernel of . annihilates and so is ` purely spatial . ' all other differential forms similarly split into a * spatial part * annihilating , and a * temporal part * of the form . in particular , the spin connection is given by where the spatial 1-form and the scalar both live in . these constructions are clearly analogous to how spacetime fields are built in adm gravity .
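written out in standard notation , the splitting just described reads as follows ( a reconstruction , since the displayed formulas were lost in extraction ; the labels for the spatial and temporal pieces of the connection are ours ) : for a 1-form \alpha and the dual 1-form \hat u of the observer field u , with \hat u(u) = 1 ,

\[ \alpha \;=\; \bigl( \alpha - \alpha(u)\,\hat u \bigr) \;+\; \alpha(u)\,\hat u , \]

where the first piece annihilates u ( the spatial part ) and the second is proportional to \hat u ( the temporal part ) ; applied to the spin connection this gives

\[ \omega \;=\; \Omega + \hat u\,\Xi , \qquad \Omega(u) = 0 , \]

with \Omega the spatial 1-form and \Xi the scalar mentioned above , both valued in the lorentz algebra .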
in this language , classical field equations split neatly into spatial equations constraining ` initial values ' of the fields and temporal equations corresponding to dynamics . for example , the spatial part of the vacuum einstein equation is a constraint of the schematic form ( \cdots + \xi\ , d^\perp \hat u ) = 0 , where the * spatial differential * d^\perp depends on the lie derivative along the observer field , and the spatial curvature is d^{\perp}\omega + \omega\wedge\omega . a local lorentz transformation changes these splittings ( the spatial directions \ker(\hat u) , and the decomposition of { \rm so}(3,1 ) relative to the rotation subalgebra { \rm so}(3)_y ) , but changes also the rotation group , so all fields transform consistently . as an example , consider the part of the spatial connection , , where projects onto . under a local lorentz transformation of , we get : so that lives in . similarly , the -valued ` triad ' transforms to take values in . under transformations living in , and have just the right behavior for a spatial connection and triad . one can show that they generalize ashtekar variables , in the real form due to barbero . in the ashtekar - barbero formulation , the apparent breaking of lorentz symmetry down to arises by fixing and therefore the subgroup once and for all . for us , this breaking occurs ` spontaneously ' : at each spacetime point , selects the subgroup . by transforming along with the dynamical variables , the action of the full lorentz group is maintained . breaking symmetry spontaneously has two nice side - effects . first , it sidesteps second class constraints that must be dealt with in related connection - based approaches . second , it makes the pair into a ` spatial cartan connection , ' making a precise link between ` geometrodynamics ' and ` connection dynamics . ' see our papers for details . in foliation - based approaches , ` time evolution ' is a particular 1-parameter family of spacetime diffeomorphisms : the flow generated by the vector field , moving each spatial slice into the future by intervals of the arbitrary ` time ' function . ( figure : the flow generated by this vector field , carrying each leaf of the foliation into the future . ) an observer field also generates a flow representing ` time evolution ' . since is normalized , this flow is parameterized by _ proper time _ of the observer field . ( figure : the flow of spacetime along the observer field , parameterized by proper time . ) thus , while we have no global notion of space , there is a canonical way to push the _ whole spacetime _ forward by one second of proper time of the observers . but how do we define _ phase space _ without a foliation ? the * covariant phase space * of general relativity is its
space of solutions , a natural covariant generalization of the ` canonical ' phase space .however , not dividing spacetime into space and time , it lacks any obvious link to the conceptual picture of spatial configurations changing in time .the observer - based formulation could provide this link .on one hand , if we choose an observer field corresponding to a foliation , we recover canonical gravity . on the other , everything transforms covariantly under change of observer , a local gauge choice .adjoining the observer field gives us a covariant phase space in which spatial and temporal variables are clearly distinguished , without spoiling local lorentz symmetry .in general relativity , just as there is no canonical spacelike foliation , there is no canonical choice of observer .faced with such a situation , rather than making an arbitrary choice , one can _ simultaneously consider all possible choices ._ individual choices are arbitrary ; the _ space _ of choices is canonical . on the other hand , * observer space * , the space of all possible observers , has manifest physical meaning , and simple topology : it is a 7-dimensional manifold isomorphic to the ` unit future tangent bundle ' of spacetime , locally a product of spacetime with velocity space . in reformulate general relativity directly on observer space , essentially by pulling fields back along the natural projection a connection pulled back to observer space will be flat in the ` velocity ' directions , reflecting the symmetry under changes of observer .general relativity respects this symmetry , but does _ nature _ ? as we and all our instruments are ` observers , ' we can not probe spacetime geometry directly in any observer - independent way .the empirical evidence for symmetry under a change of observer could be challenged by future observations .several modifications of general relativity currently of interest can be studied using observer space .first , there is a growing interest in models that do not treat spacetime isotropically . since points in observer spacecorrespond to directions in spacetime , these anisotropic theories might be described very naturally in these terms .perhaps more compelling is the question of whether spacetime itself plays any fundamental role in physics .once we have lifted the theory to observer space , do we still have any need for spacetime ?in fact , starting with observer space , we can _reconstruct spacetime_but only when certain ` integrability conditions ' hold .the ` relative locality ' proposal suggests the notion of spacetime itself may be _ observer - dependent_. observer space provides a natural perspective from which to study this possibility , with the potential to move beyond ` special ' and on to ` general ' relative locality .r. arnowitt , s. deser , and c. w. misner , the dynamics of general relativity , in _ gravitation : an introduction to current research _ , edited by l. witten ( wiley , new york , 1962 ) .reprint available as http://arxiv.org/abs/gr-qc/0405109[arxiv:gr-qc/0405109 ] . c. crnkovi and e. witten , covariant description of canonical formalism in geometric theories , in _ three hundred years of gravitation _ , edited by s. w. hawking and w. israel ( cambridge university press , cambridge , 1987 ) .m. j. gotay , j. isenberg , j. e. marsden , and r. montgomery , momentum maps and classical relativistic fields .part i : covariant field theory , http://arxiv.org/abs/physics/9801019[arxiv:physics/9801019 ] .j. 
barbour , shape dynamics .an introduction , to appear in proceedings of the conference quantum field theory and gravity ( regensburg , 2010 ) , http://arxiv.org/abs/1105.0183/[arxiv:1105.0183 ] ; h. gomes , s. gryb , and t. koslowski , einstein gravity as a 3d conformally invariant theory , _ class ._ * 28 * , 045005 ( 2011 ). http://arxiv.org/abs/1010.2481/[arxiv:1010.2481 ]. g. amelino - camelia , l. freidel , j. kowalski - glikman , and l. smolin , relative locality : a deepening of the relativity principle , second prize in the 2011 essay competition of the gravity research foundation , http://arxiv.org/abs/1106.0313/[arxiv:1106.0313 ] . | hamiltonian gravity , relying on arbitrary choices of ` space , ' can obscure spacetime symmetries . we present an alternative , manifestly spacetime covariant formulation that nonetheless distinguishes between ` spatial ' and ` temporal ' variables . the key is viewing dynamical fields from the perspective of a _ field of observers_a unit timelike vector field that also transforms under local lorentz transformations . on one hand , all fields are spacetime fields , covariant under spacetime symmeties . on the other , when the observer field is normal to a spatial foliation , the fields automatically fall into hamiltonian form , recovering the ashtekar formulation . we argue this provides a bridge between ashtekar variables and covariant phase space methods . we also outline a framework where the ` space of observers ' is fundamental , and spacetime geometry itself may be observer - dependent . essay written for the gravity research foundation + 2012 awards for essays on gravitation . |
the authors would like to thank v. barocas , t. stylianopoulos , e. a. sander , e. tuzel , h. zhang , h. othmer , t. jackson , p. smereka , r. krasny , f. mackintosh , a. kabla , r. lee , m. dewberry , l. kaufman and l. jawerth , and for their discussions .this work is supported by nih bioengineering research partnership grant r01 ca085139 - 01a2 and the institute for mathematics and its applications . | we study the micromechanics of collagen - i gel with the goal of bridging the gap between theory and experiment in the study of biopolymer networks . three - dimensional images of fluorescently labeled collagen are obtained by confocal microscopy and the network geometry is extracted using a 3d network skeletonization algorithm . each fiber is modeled as a worm - like - chain that resists stretching and bending , and each cross - link is modeled as torsional spring . the stress - strain curves of networks at three different densities are compared to rheology measurements . the model shows good agreement with experiment , confirming that strain stiffening of collagen can be explained entirely by geometric realignment of the network , as opposed to entropic stiffening of individual fibers . the model also suggests that at small strains , cross - link deformation is the main contributer to network stiffness whereas at large strains , fiber stretching dominates . since this modeling effort uses networks with realistic geometries , this analysis can ultimately serve as a tool for understanding how the mechanics of fibers and cross - links at the microscopic level produce the macroscopic properties of the network . while the focus of this paper is on the mechanics of collagen , we demonstrate a framework that can be applied to many biopolymer networks . collagen is the most abundant animal protein and its mechanics have been studied in great detail . it takes on many morphologies , including skin , tendons , ligaments , individual fibers , and gels . of particular interest is the mechanics of collagen - i gels , shown in figure [ fig : summary]a . these gels provide a relatively simple structure that can be noninvasively observed by confocal microscopy and used as a scaffold for growing artificial tissues , and as a 3d environment for studying cell motility and tumor invasion . a critical first step in understanding these systems is to develop a model for the collagen gel alone . in this paper we give a successful theoretical model of the micromechanics of realistic networks . collagen - i gels belong to a class of materials known as biopolymers . other examples include actin , found in the cytoskeleton and fibrin , a component of blood clots . a common feature of biopolymer networks is their ability to strain stiffen by 2 - 3 orders of magnitude at large strains ( figure [ fig : summary]f ) . the cause of this strain stiffening is not well understood . storm et al . attributed strain stiffening in all biopolymer networks to the pulling out of entropic modes of individual filaments . their calculation required the assumption that deformations are affine . later , heussinger et al . showed how one could deconstruct the network deformation into a set of floppy modes . they concluded that accounting for the non - affinity was necessary in describing the elastic properties of the network . onck and colleagues have proposed the alternative hypothesis that strain stiffening is due to the rearrangement of the fibers as the network is strained ( figures [ fig : summary]d and [ fig : summary]e ) . 
resolving this debate has been difficult since almost all theoretical analysis has been on artificially generated networks . the few examples of quantitative comparisons to experiment in the literature are not able to quantitatively fit the full stress - strain response of the gel at varying densities using a single set of parameters . ( figure [ fig : summary ] : a ) maximal intensity projection along the axis that is perpendicular to the focal plane of the microscope . b ) projection of the 3d network extracted by fire . c ) reduced network , where elements that do not contribute to the network stiffness have been removed for improved computational efficiency . d ) deformation after 50% tension . e ) deformation after 50% shear . f ) comparison of the stress - strain response between the model and experiment . ) in this paper , we bridge the gap between model and experiment . three dimensional images of fluorescently labeled collagen gels at different densities are imaged by confocal microscopy ( figure [ fig : summary]a ) and the network geometry is extracted using a custom fiber extraction ( fire ) algorithm ( figure [ fig : summary]b ) . the gel is modeled as a random network of cross - linked fibers , as described below , and the stress - strain response is compared to that measured by an ar - g2 rheometer . good agreement between model and experiment is obtained by fitting a single parameter , the cross - link stiffness . the experiments are described in detail in . in the model for the collagen gel , each fiber is treated as a discrete worm - like - chain ( wlc ) which resists stretching and bending , and each cross - link is treated as a torsional spring , thus more stiff than a freely rotating pin joint but less stiff than a welded joint of fixed angle . the stretching modulus of an individual fiber is given by , where is the young s modulus and is the cross - sectional area . the young s modulus of a fiber in aqueous conditions has been estimated to be between 30 - 800 mpa and we use a modulus of 50 mpa , which fits the data well and is also close to the value chosen by stylianopoulos et al . ( 79 mpa ) to fit their model . it has been shown that a single fiber will stiffen by a factor of 2 - 4 when strained . we choose here to use a constant both to reduce the number of parameters in the model and to see if geometric reorientation of the network is enough to explain strain stiffening . stylianopoulos and barocas also explored the bilinear and exponential constitutive relations for the individual fibers and observed only minor effects on the macroscopic network behavior . the radius of each fiber is nm . the bending modulus of the fiber is given by pn- , where . no cross - linking agent has been added to the gel and very little is known about the nature of the naturally formed collagen cross - links . we find that we can fit all the data by setting the torsional spring stiffness to pn- m . to compare to , we consider , where the mean cross - link spacing is given by m . thus , we find that . one possible reason for a larger could be an increase in fiber radius near the cross - links by a factor of 2 , since bending stiffness scales by . we assume that in the undeformed state of the network , there are no internal stresses . thus the fibers have an innate curvature and the cross - links have an equilibrium angle equal to that in their initial configuration . we ignore entropic contributions to the fiber mechanics .
while the geometric persistence length of these fibers has been measured to be 20 m , the thermal persistence length is much longer cm . furthermore , in the case that the strain stiffening is dominated by thermal compliance , one would expect to see a decrease in the yield strain with increasing concentration . collagen gels , however , have been shown to have a constant yield strain of about 60% for a wide range of concentrations . thus the total energy in the network for a given configuration is given below . here is the number of elements of type , which denotes stretching , bending , and cross - link , is the length of stretching element , and are the bending and cross - link angles respectively , and indicates the difference between the deformed and undeformed state . to calculate the stress - strain relationship of our model network , we perform a series of 18 incremental strain steps by imposing a small deformation on one face , while holding the opposite face , fixed . we impose two types of deformations : tension ( figure [ fig : summary]d ) and shear ( figure [ fig : summary]e ) . in a tensile deformation , we allow the vertices on and to move freely in directions perpendicular to the imposed strain to allow for perpendicular contraction that is seen to occur in these experiments . in experiments of this type , the distance between and is on the order of centimeters and the simulated network represents a small region near the center of a sample . in shear , we do not allow the boundary nodes on and to move freely . we compare the shear results to cone - plate rheometer experiments , where the shear faces are bound to the rheometer . here , the distance between and is 109 m , and the simulated network is one fourth the length of the experimental sample between the boundaries . in both deformations , all other nodes , including those on the four remaining faces of the network , are free to move . the minimum energy state of the network at an imposed strain is found using a conjugate gradient method developed by hager and zhang . the stress required to hold the network in its current configuration is given by , where denotes the area of . the results shown in figures [ fig : strainsweep ] and [ fig : ge ] are averaged over all four extracted networks and over all 6 principle shear deformations in the sheared network and all 3 principle tensile directions in the stretched network . the results from our shear cell experiments are given in figures [ fig : strainsweep ] and [ fig : ge]a . in addition , in figure [ fig : ge]b , we also present the previously reported tensile modulus of large samples that are centimeters in length . figure [ fig : strainsweep]a shows a strain sweep from 0.5% to 100% shear strain in both the model and cone - plate rheometer experiments . at small strains , the stress - strain response is linear , as expected , and at larger shear strains , the stress - strain response appears cubic . in figure [ fig : ge]a , we show that the small strain modulus scales by , where is the collagen density . at this time , it is not possible to verify the power law scaling in the model since only densities of 0.5 , 1.2 , and 1.4 were observed . the fluorescent labeling of the network changes the polymerization properties of the network , causing it to clump at higher densities . we use this scaling relationship to collapse the curves in figure [ fig : strainsweep]b . 
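the quadratic energy and the stress extraction described above can be sketched as follows ( a schematic only : the element stiffnesses k_stretch , k_bend and k_cross stand for the effective coefficients built from the stretching modulus , the bending modulus and the torsional constant , and the finite - difference stress assumes the boundary force is the derivative of the relaxed energy with respect to the boundary displacement ; none of these names come from the paper ) :

```python
import numpy as np

def network_energy(dl, dth_bend, dth_cross, k_stretch, k_bend, k_cross):
    """total quadratic energy of the network: stretching + fiber bending +
    cross-link (torsional spring) contributions. each argument is an array
    over the elements of that type: deformations relative to the stress-free
    state, or the corresponding effective stiffnesses."""
    return 0.5 * (np.sum(k_stretch * dl ** 2)
                  + np.sum(k_bend * dth_bend ** 2)
                  + np.sum(k_cross * dth_cross ** 2))

def boundary_stress(relaxed_energy, x, face_area, dx=1e-6):
    """stress on the displaced face: boundary force (derivative of the relaxed
    energy with respect to the boundary displacement) per unit face area.
    `relaxed_energy(x)` should return the minimized network energy when the
    face is displaced by x, e.g. from a conjugate-gradient relaxation."""
    force = (relaxed_energy(x + dx) - relaxed_energy(x - dx)) / (2.0 * dx)
    return force / face_area
```

in practice the relaxed energy would come from the conjugate - gradient minimization mentioned above , with the nodes on the displaced face moved , the opposite face held fixed , and all interior nodes free .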
the close agreement between model and experiment indicates that strain stiffening due to the geometric rearrangement of the collagen fibers is enough to explain the strain stiffening seen in experiments . ( figure [ fig : strainsweep ] : the number in each label denotes the collagen density . a ) unscaled results . note the good agreement between model and data at small and large strains . b ) when the curves are scaled by , relatively good data collapse is achieved . we denote lines of slope and to guide the eye . at large strains , scaling breaks down and the low density curves overtake the high density curves because at large strains , stiffness scales linearly with density . thus this rescaling serves mainly as a visualization tool and does not represent a true data collapse . ) ( figure [ fig : ge ] : a ) ... is observed for the experiment . b ) the large strain tensile modulus from the model and from the experiments of roeder et al . ; results differ by a factor of 2.5 , which is reasonable since the two experimental protocols were different . ) at large strains ( in tension , in shear ) , the stress - strain curve of the model becomes linear again , though with a much steeper slope . in figure [ fig : ge]b , we compare the large strain tensile behavior of the model to the experiments of roeder et al . while our model underestimates their experimental measurement by a factor of 2.5 , we find this to be reasonable since the two experiments used different collagen protocols . in particular , different buffers were used . in figure [ fig : strainsweep ] , we also explore the case where , such that we have only a network of springs connected at freely rotating pin joints . at low strains , the network can be deformed without exerting any stress , but at strains higher than 25% , we see that this simplification adequately describes the gel . a topic of investigation explored by many is the validity of the assumption that these networks deform affinely . for brevity , we state only that the deformations are highly nonaffine in these simulations , as evidenced by a visual inspection of figures [ fig : summary]d and [ fig : summary]e , where it is obvious that many fibers leave the volume defined by an affine deformation . in summary , we have presented a microstructural model of a 3d biopolymer gel using a network geometry that is based on the true network architecture . it differs from previous work in that we use realistic network architectures that have been extracted using the fire algorithm . we specifically focus on the mechanics of collagen - i networks , but emphasize that this modeling approach is generalizable to other biopolymer networks . the model has three parameters : . the fiber radius and tensile modulus can be measured experimentally and the model uses realistic parameters . the cross - link torsional spring constant must be fit to the data and we used . fitting this single parameter gives the right strain stiffening behavior for networks at three different densities at strains that vary from 0.5% to 50% . this result lends support to the hypothesis put forward by onck et al . that strain stiffening in polymer networks , and particularly collagen - i gels , is governed by rearrangement of the gel . another finding of the model is that at strains greater than 25% , the stiffness of the gel is governed almost entirely by stretching of the fibers . this result is relevant for collagen because cells embedded in these gels are seen to produce deformations of this order of magnitude .
in modeling large systems of this type where the strains are large , it may be sufficient to treat each fiber as a spring rather than a wlc in order to reduce the computation time . this work also demonstrates that an understanding of the cross - link mechanics in these systems is critical to understanding their mechanical properties , as has been seen previously . in much of the theoretical work that has been done on random stick networks , the cross - links are treated either as freely rotating pin joints , or welded joints of fixed angle . while these are sensible simplifying assumptions in developing a theory , they are not adequate for describing actual networks . we note that this model has been designed to capture the short time scale ( 1 hr ) behavior of the network , where it behaves as an elastic solid . such behavior requires that the cross - links be relatively fixed . this simplified model provides a starting point in the development of a more complete model of collagen gel . ultimately , a more sophisticated approach , such as that taken by rodney et al . will be necessary to capture the full dynamic behavior of the gel , where cross - links are allowed to slip and break . |
devising methods for analyzing and predicting time series is currently considered one of the most important challenges in chaotic time - series analysis ( eg ., see refs .[ 1 - 3 ] ) . in general , chaotic behavior is observed in relation with nonlinear differential equations and maps on manifolds . _times series may be construed as being the projections of manifolds onto coordinate axes_. much work in nonlinear dynamics has focused on the building of appropriate model(s ) of the underlying physical process from a time series , with the objective of predicting the _ near - future _ behavior of dynamical systems .the first step in formulating predictive models is that of specifying / estimating a suitably parameterized nonlinear function of the observation .this is followed by estimating the parameters of this function .in general , prediction models are formulated on the basis of the systematic and accurate identification of a _ working hypothesis _ [ 4 ] .this hypothesis is represented by a set of parameters that form an ansatz .this paper obtains the coefficients of such an ansatz , which possess information about the data set(s ) , via recourse to a fisher information measure ( fim , hereafter ) based inference procedure .the leitmotif for obtaining the _ working hypothesis _ by employing an inference procedure is to formulate a prediction model , based on the famed embedding theorem of takens [ 5 , 6 ] .the conceptual sophistication underlying the takens theorem renders the prediction problem to become an instance of extrapolation .currently , some of the prominent prediction models based on information theory ( it , hereafter ) are : the framework of plastino et .al . [ 7 - 10 ] using the maximum entropy ( maxent , hereafter ) method of jaynes [ 11 ] , and the nonparametric models by principe et .al , ( eg . see [ 12 , 13 ] ) . the work presented herein belongs to a class of models known as _ pseudo - inverse _ models , for reasons described in section 2 and 3 of this paper .such models have been successfully employed to forecasting tasks in a number of disciplines which include nonlinear dynamical systems [ 7 , 8 ] , financial data forecasting [ 8 ] , prediction of tonic - clonic epileptic seizures from real - time electroencephalogram ( eeg ) data [ 9 ] , and even fraud analysis ( the london interbank offered rate ( libor ) manipulations ) [ 10 ] .generally , predictive models are of two types , viz ._ global _ and _ local _ ( see for example ref .global models are based on training data collected from across the phase space . on the other hand , in local models ,the training is accomplished by measurements providing data lying in the immediate vicinity of a specific / localized region of the phase space ._ pseudo - inverse predictive models , including the one presented herein , are essentially global models which possess local characteristics _ [ 10 , 15 ] .time series prediction has its roots in the theory of optimal filtering by wiener [ 16 ] . in recent times ,forecasting of chaotic time series has hitherto largely utilized artificial neural networks ( ann s , hereafter ) and other learning paradigms .commencing from the seminal radial basis function model of casdagli [ 17 ] , some of the notable attempts to study chaotic time series comprise ( but are not limited to ) the time delayed neural network architectures [ 18 ] , recurrent ann s [ 19 ] , maximum entropy ann s [ 20 ] , and support vector machines [ 21 ] . 
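for concreteness, the delay-coordinate reconstruction that underlies the takens-based approach sketched above can be written in a few lines; the lag and embedding dimension used in the example are illustrative placeholders, since in practice they are chosen with criteria such as mutual information and false nearest neighbours.

```python
# time-delay embedding of a scalar series into d-dimensional delay vectors.
import numpy as np

def delay_embed(x, d, tau):
    """rows are the delay vectors (x_t, x_{t-tau}, ..., x_{t-(d-1)tau})."""
    x = np.asarray(x)
    n_vec = len(x) - (d - 1) * tau
    cols = [x[(d - 1) * tau - k * tau : (d - 1) * tau - k * tau + n_vec]
            for k in range(d)]
    return np.column_stack(cols)

# example: embed a noisy sine in 3 dimensions (illustrative choices of d, tau)
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
vectors = delay_embed(x, d=3, tau=25)
print(vectors.shape)   # (n - (d-1)*tau, d)
```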
within the perspective of physics - based models ,the works of crutchfield and mcnamara [ 22 ] and farmer and sidorowich [ 23 ] constitute some of the most prominent efforts .fim - based studies have recently been acquiring prominence across a spectrum of disciplines ranging from physics and biology to economics ( for eg . ,see [ 24 ] ) . the prediction model presented in this papercomprises of two phases : the modeling phase and the prediction phase .the task of the modeling phase is to obtain the coefficients of the ansatz that suitably parameterizes the nonlinear function of the observed time series ( see section 2 of this paper ) .this phase establishes the _ working hypothesis _ , and is accomplished with the assistance of the training data .the prediction phase then generates forecasts based on the set of coefficients obtained in the modeling phase .the leitmotif for the fim - based model employed in this paper is two - fold .first , it provides the framework to endow the modeling phase with a quantum mechanical ( qm , hereafter ) connotation .this is in accordance with wheeler s hypothesis of establishing an information - theoretical foundation for the fundamental theories of physics [ 25 ] , and is accomplished by recourse to the minimum fisher information ( mfi , hereafter ) principle of hber [ 26 , 27 ] .variational extremization of the fim subject to least squares constraints results in a sturm - liouville equation in a vector setting , hereinafter referred to as the time independent schrdinger - like equation .consequently , i ) the probability density function ( pdf , hereafter ) of the coefficients of the ansatz , and ii ) the constraint driven pseudo - inverse condition ( that yields the inferred estimate coefficients , fundamental for the _ working hypothesis _ ) , can be specified not only via gaussian ( maxwell - boltzmann ) pdf s [ which are _ equilibrium _ distributions ] , but also in terms of _ non - equilibrium _ distributions [ 24 , 28 - 30 ] , comprising of hermite - gauss polynomials .this greatly widens the scope of the works presented in refs .[ 7 - 10 ] , and is accomplished in this paper with the aid of the qm virial theorem [ 31 , 32 ] for normal distributions .note that in inference problems involving the fim , the gaussian pdf s are obtained as solutions to the lowest eigenvalue by solving the time - independent schrdinger - like equation in section 3 of this paper as an eigenvalue problem , and correspond to the _ ground state _ of the physical schrdinger wave equation ( swe , hereafter ) .further , the non - equilibrium pdf s correspond to the higher - order eigenvalue solutions of such swe , and are linked to _ excited states _ of the physical swe ( see , for eg . [ 33 , 34 ] ) . 
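the correspondence invoked above between gaussian / hermite-gauss pdfs and the ground / excited states of a schroedinger-like operator can be checked numerically on a generic quadratic potential. the finite-difference operator below is the textbook harmonic-oscillator stand-in, not the specific equation obtained from the least-squares constraints in section 3, so it only illustrates the structural point.

```python
# lowest eigenstates of -psi'' + x^2 psi via a dense finite-difference
# hamiltonian: the ground state is a gaussian, higher states hermite-gauss.
import numpy as np

x = np.linspace(-6.0, 6.0, 1201)
h = x[1] - x[0]
main = 2.0 / h**2 + x**2
off = -1.0 / h**2 * np.ones(x.size - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0] / np.sqrt(h)        # ground state ~ gaussian
psi1 = evecs[:, 1] / np.sqrt(h)        # first excited ~ x * gaussian

print("lowest eigenvalues:", evals[:4])   # close to 1, 3, 5, 7
# the squared ground state is (up to normalisation) the normal pdf that the
# maxent and mfi solutions share for this class of constraints
pdf0 = psi0 ** 2
```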
from a _practical _ perspective , this enables the performance of the modeling phase and the concomitant prediction phase to be systematically categorized in terms of an established physics - based framework .next , the reciprocity relations and the legendre transform structure ( lts , hereafter ) , together with the concomitant information theoretic relations for the fim , in a vector setting and for least squares constraints , are derived .prior studies have derived reciprocity relations and lts for the fim model [ 35 ] and have analyzed such relations [ 36 - 39 ] .recently , these works have been qualitatively extended to the case of the relative fisher information ( rfi , hereafter ) [ 40 - 42 ] by venkatesan and plastino by deriving the reciprocity relations and lts [ 43 ] . a connection between the celebrated hellmann - feynman theorem , the reciprocity relations , and lts for the rfi has been established in [ 44 ] , in addition with a unique inference procedure to obtain the energy eigenvalue without recourse to solving the time - independent schrdinger - like equation .these prior works differ from the analysis presented in this paper in two significant aspects - they treat the scalar case and the prior knowledge encoded in the observed data are introduced as constraints into the variational extremization procedure in the form of expectations of the powers of the scalar independent variable .the reciprocity relations and the lts for the time - independent schrdinger - like equation derived in this paper , despite possessing a vector form and least squares constraints , mathematically resemble those derived in [ 35 ] .this augurs well with regards to the possibility of translating the entire mathematical structure of thermodynamics into the fisher - based model presented in this paper .the distinctions in the reciprocity relations and lts derived in this paper vis - - vis earlier referenced works [ 36 - 39 ] result in the information theoretic relations derived from these relations being qualitatively different from those obtained in the scalar case .this fact evidences the distinction between the results presented in this paper and those demonstrated in refs .[ 36 - 39 ] , based on physics and on systems theoretic [ 45 ] considerations . _ of interest is an expression that infers the fim of the modeling phase just in terms the observed data , hereafter referred to as the empirical fim_. such relation , which is a solution of a linear pde derived from the reciprocity relations together with the lts that infers the fim without recourse to the time - independent schrdinger - like equation , has no equivalent in the maxent model. 
the goals of this paper are * to provide an overview of the solution procedure .this is done in section 2 , * to : introduce the mfi principle in a vector setting and using least square constraints , derive a systematic procedure for the inference of exponential pdf s of the modeling phase with the aid of the qm virial theorem , and obtain the constraint driven pseudo - inverse condition that yields the estimate of the coefficients comprising the _ working hypothesis _ ( see section 2 of this paper ) by invoking the qm virial theorem .this three - fold objective is performed in section 3 .note that for normal pdf s the solutions of the mfi and maxent principles are known to coincide [ 24 , 46 ] .this paper focuses on the normal distribution to demonstrate that the results of the maxent model can be derived from qm considerations and interpreted within the framework of estimation theory , which is not possible within the ambit of the maxent framework , * to derive the reciprocity relations and the lts for the fim in a vector setting using square constraints , analyzing the concomitant information theoretic relations .the _ empircal fim _ is derived , and a preliminary analysis of its properties is performed .this is accomplished in section 4 , * to computationally demonstrate the efficacy of the prediction framework for the mackey - glass ( m - g , hereafter ) delay differential equation ( dde , hereafter ) [ 47 ] , for a 5 minute electrocardiogram ( ecg , hereafter ) segment of record 207 of the mit - beth israel deaconess hospital ( mit - bih , hereafter ) arrythmia database [ 48 ] ( considered to be one of the most challenging records in the mit - bih arrhythmia database ) for the modified lead ii ( mlii , hereafter ) , and for the single ecg signal in record cudb / cu02 of the around 8.5 minute creighton university ventricular tachyarrhythmia ( vta , hereafter ) database [ 49 ] .the ecg data are obtained from the physionet online repository [ 50 ] .this is demonstrated in section 5 of this paper .the leitmotif of this exercise is as follows .+ + an obvious practical advantage of the pseudo - inverse model presented in this paper over a least squares approach in ordinary euclidian space is that the former requires the moore - penrose pseudo - inverse [ 51 ] of the embedding matrix ( defined in section 2 of this paper ) and therefore , the estimate of the coefficients of the ansatz comprising the ( see sections 2 and 3 of this paper ) derived via inference from the training data .this can be achieved even when * w * is nearly singular .the fact that the estimates are defined even when * w * is singular ( or nearly singular ) can in principle result in very volatile forecasts , on account of ill - conditioning .note that ill - conditioning occur in the presence of a near - singular * w * , which in turn might occur if many lags of the observed data are present .+ + the leitmotif for the choice of the benchmarks on which to test the prediction model is as follows .the m - g equation with delay has a high embedding dimension [ 18 ] .thus * w * displays more lags as compared to most prominent models describing low dimensional chaos [ 3 , 14 ] . 
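before turning to the formal development, the following self-contained sketch shows the two phases of a pseudo-inverse model of the kind discussed above: a polynomial ansatz is fitted to delay vectors of a mackey-glass series through the moore-penrose pseudo-inverse of the embedding matrix * w * , and the fitted coefficients are then iterated forward. the mackey-glass parameters are the customary benchmark values, while the integration step, embedding settings, ansatz degree, forecasting horizon and training length are illustrative choices rather than those of the paper.

```python
# end-to-end sketch: mackey-glass data, polynomial ansatz on delay vectors,
# coefficients from the moore-penrose pseudo-inverse, iterated prediction.
import numpy as np
from itertools import combinations_with_replacement

def mackey_glass(n_samples, tau_mg=17, dt=1.0, beta=0.2, gamma=0.1, x0=1.2):
    """crude euler integration of the mackey-glass dde (benchmark parameters)."""
    hist = int(tau_mg / dt)
    x = np.full(n_samples + hist, x0)
    for t in range(hist, n_samples + hist - 1):
        xd = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd**10) - gamma * x[t])
    return x[hist:]

def poly_features(v, degree=2):
    """all monomials of the entries of v up to 'degree' (combinations with
    repetitions), including the constant term."""
    feats = [1.0]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(v)), deg):
            feats.append(np.prod(v[list(idx)]))
    return np.array(feats)

# --- modeling phase ---------------------------------------------------
series = mackey_glass(3000)
d, tau, degree = 4, 6, 2            # embedding dimension, lag, ansatz degree
n_train = 800
start = (d - 1) * tau
rows, targets = [], []
for t in range(start, start + n_train):
    v = series[t - np.arange(d) * tau]          # (x_t, x_{t-tau}, ...)
    rows.append(poly_features(v, degree))
    targets.append(series[t + 1])               # horizon T = 1 for simplicity
W = np.array(rows)                              # embedding matrix
a = np.linalg.pinv(W) @ np.array(targets)       # pseudo-inverse estimate

# --- prediction phase --------------------------------------------------
horizon = 300
window = list(series[start + n_train - (d - 1) * tau : start + n_train + 1])
preds = []
for _ in range(horizon):
    v = np.array(window[::-1])[np.arange(d) * tau]   # latest delay vector
    nxt = poly_features(v, degree) @ a
    preds.append(nxt)
    window.append(nxt)
truth = series[start + n_train + 1 : start + n_train + 1 + horizon]
print("mse:", np.mean((np.array(preds) - truth) ** 2))
```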
as is described in section 2 of this paper , the rationale being that the number of lags in * w * depends upon the embedding dimension .as evidenced in section 5 of this paper , the forecast of the m - g dde is stable and accurate .next , ecg s of patients suffering from serious cardiac related ailments possess artifacts which are representative of various conditions of a diseased heart .these artifacts are noted in the reference annotations as episodes ( transients ) . it is demonstrated that even for the most challenging cases , the model presented in this paper accurately forecasts these episodes without any signs of volatility , thereby demonstrating the accuracy and robustness of the pseudo - inverse model .this is established for cases where the original signal possesses highly erratic / volatile behavior .numerical examples for exemplary cases are provided . to the best of the authors knowledge , these objectives have never hitherto been accomplished .given a signal * x * from an unknown dynamical system , the corresponding time series consists of a sequence of _ stroboscopic _ measurements : made at intervals .the state space is reconstructed using the time delay embedding [ 1 , 5 , 6 ] , which uses a collection of coordinates with time lag to create a vector in -dimensions , on a system considered to be in a state described by at discrete times where is the time lag , and is the embedding dimension of the reconstruction .it is known from takens theorem ( eg . see refs . [ 5 , 6 ] ) that for flows evolving to compact attracting manifolds of dimension ; if for the forecasting time , ( time samples in this paper ) , there exists a functional form of the type where ,\ ] ] and .a _ non - unique _ansatz for the mapping function of this form ( employing the einstein summation convention ) is specified as [ 9 ] where and is the polynomial degree chosen to expand the mapping .the number of parameters in ( 4 ) corresponding to terms ( the degree ) , is the combination with repetitions the length of the vector of parameters , is other forms of ansatz are encountered in [ 52 ] .it is important to note that specifying an ansatz of a form , such as that defined in ( 4 ) , has its roots in signal processing [ 53 ] . as an information recovery criterion, the vector of coefficients is obtained via inference by invoking the mfi principle .the objective is to achieve a model possessing high predictive ability .computations are made on the basis of the information given by points of the time series .these constitute the _ training data _ obtained from the observed signal , whose utility is to infer the coefficients . ; n=1, ... 
,m.\ ] ] given the data set ( 7 ) , the parametric mapping ( 2 ) can be re - stated as here , ( 7 ) can be expressed in vector - matrix form as where and is a rectangular matrix with dimensions , and whose row is: ] .it is assumed that the probability associated with is .note that is assumed to be a continuous random variable .alternately , may be defined as the _ empirical distributions _ of the observations [ 54 ] .the fim is extremized subject to the constraints and the normalization condition note that , where is the number of parameters of the model .also denotes the expectation evaluated with respect to .section 3 derives the constraint driven pseudo - inverse condition for normal distributions by invoking the qm virial theorem as where : is the moore - penrose _ pseudo - inverse _ [ 51 ] .note that as stated in sections 1 and 3 , unlike the maxent model the fim - based framework presented herein also allows for described by hermite - gauss solutions .such extensions of the present model and the subsequent effects on the pseudo - inverse condition are beyond the scope of this paper , and will be presented elsewhere . the _ prediction phase _ commences once the pertinent parameters are determined from the training data in the _modeling phase_. these are employed to predict _ new _ series values where is a matrix of dimension .note that is such that _ new _ time series values may be evaluated after the training data has been reconstructed .the prediction phase is essentially the implementation of ( 10 ) , for temporal indices , where , is the sum of both the training data and the _ new _ data to be predicted _ after _ completion of the modeling phase . _it is important to note that the process of inference necessitates the re - definition of the _ working hypothesis _ to account for ( 10 ) now superseding ( 9 ) .the obvious reason being that the process of inference can only evaluate and not . the value of should be suitably bounded to facilitate the comparison between the predicted signal obtained from the solution of ( 13 ) , with the original signal .this is done in order to judge the fidelity of the prediction through _ both _ visual inspection and analysis ; viz .calculation of the mean squared error ( mse , hereafter ) between the original and the predicted signal . in this paper ,given the original signal represented by the column vector , - \max \left\ { { t , d } \right\} ] secs . to obtain , for .these results and the concomitant mse values are depicted in figs . 1 and 2 , respectively . here ,1 clearly demonstrates that the predicted results faithfully capture the dynamics embedded in the chaotic m - g time series. fig . 2 expectedly demonstrates a slight distortion of the predicted signal vis - - vis the original signal , as a consequence of long - term forecasting .it may be argued that the number of coefficients is high and can forecast just about any signal .this argument is not only tenuous at best for the case of chaotic signals , but is also orthogonal to the very reason causing the choosing of such a high value of .specifically , section 1 explicitly discusses the possible singularity ( or near - singularity ) of the embedding matrix * w*. as stated therein , large number of lags , * w * can result in volatile forecasts owing to ill - conditioning .the m - g dde in this example has a higher embedding dimension than other prominent models ( such as the lorenz , hnon , etc . 
)( see , for eg .[ 3 ] ) , and thus the resulting * w * would be more prone to result in volatile forecasts . as is evidenced by figs . 1 and 2, this is not the case and the forecasts are clearly accurate and stable .it is noteworthy to mention that the coefficients obtained from the training data during the modeling phase , which form the basis on which further prediction is done over a much larger time period and data sample size ( as compared to those in the modeling phase ) , are unique to the specific data set under consideration .specifically , coefficients obtained from different data sets , for example the m - g dde with a different value of the lag or another nonlinear dynamics model , yield erroneous predictions if applied to a data set which differs from the one(s ) they were obtained from .this issue is the task on ongoing studies briefly described in section 6 within the context of the results described in sections 3 and 4 , and will be presented elsewhere .this sub - section employs a 300 secs .ecg signal to demonstrate that the model described in this paper accurately predicts episodes ( transients ) which are the artifacts of a diseased heart over a reasonable period of time , even for a highly erractic / volatile signal .the annotations are described in [ 62 ] .the signal is extracted from data obtained as a .mat data file from [ 63 ] .the sampling frequency of the data [ 64 ] , the number of samples being for a total duration of .the rationale for the choice of 300 secs .sample is to ensure that the portion of the signal , both during the modeling phase and the prediction phase that follows , possess sufficiently identifiable episodes which are documented in the reference annotations [ 65 ]. it would be desirable to conduct the study over the entire duration of the signal spanning around 30 mins . however , this would yield simulation results that are visually incoherent , and hence the truncation of the signal length / duration .the number of training samples from which the values of is obtained is ] secs .are demonstrated in figs .4(b)-(d ) and 6(b)-(d ) for and , respectively .on inspection of figs .4(b ) and 6(b ) , the instance of ventricular tachycardia identified by `` + '' and defined by `` ( vt '' in the reference annotations at 38.522 secs . ,immediately followed by three instances of premature ventricular contraction identified by `` v '' and the onset of ventricular flutter / fibrillation identified by `` [ '' at 40.736 secs . in the reference annotation followed by an instance of ventricular flutter identified by `` + '' and defined by `` ( vfl '' in the reference annotations at 40.803 secs . and the subsequent termination of the ventricular flutter / fibrillation identified by `` ] '' at 50.972 secs . in the reference annotation can be easily identified .this region is of particular importance since it spans _ both _ the modeling phase from which the _ working hypothesis _is determined from the training data , _ and _ the prediction of new data values . on inspection of figs .4(c ) and 6(c ) , the onset of ventricular flutter / fibrillation identified by `` [ '' at 54.764 secs . in the reference annotation followed by an instance of ventricular flutter identified by `` + '' and defined by `` ( vfl '' in the reference annotations at 54.869 secs . and the subsequent termination of the ventricular flutter / fibrillation identified by `` ] '' at 50.972 secs . 
in the reference annotation and the instance of ventricular tachycardia identified by `` + '' and defined by `` ( vt '' in the reference annotations at 61.839 secs .( 1:01.839 mins . ) , immediately followed by three instances of premature ventricular contraction identified by `` v '' , can be easily identified . finally , on inspection of figs .4(d ) and 6(d ) , the onset of ventricular flutter / fibrillation identified by `` [ '' at 269.467 secs .( 4:29.467 mins . ) in the reference annotation followed by an instance of ventricular flutter identified by `` + '' and defined by `` ( vfl '' in the reference annotations at 129.586 secs .( 4:29.586 mins . ) and the subsequent termination of the ventricular flutter / fibrillation identified by `` ] '' at 240.906 secs .( 4:40.906 mins . ) in the reference annotation , can be easily identified . in all cases depicted in figs .( 4 ) and ( 6 ) , it is observed that the quality of the prediction is high , with the case of the example with being marginally degraded vis - - vis the case with , which is expected .this sub - section demonstrates the robustness of model described in this paper for an extended ecg signal , even for a highly erractic / volatile signal displaying the symptoms of cardiac vta .the signal is extracted from data obtained as a .mat data file from [ 66 ] .the sampling frequency of the data , and the number of samples is , for a total duration of [ 67 ] . in order to study the robustness of the prediction performance of the fim - based model , the number of training samples from which the values of is obtained is chosen to be ] , each element is defined by .\ ] ] for * a * possessing _ a - priori _ iid entries substituting ( a.2 ) into ( a.1 ) yields \ ] ] for since all other integrals integrate to unity because of normalization . for , thus , $ ] is a diagonal matrix with each element defined by ( a.5 ) .note that , as used in eq .( 16 ) .eckman , d. ruelle , rev .phys . * 15 * ( 1985 ) 617 - 656 .h. d. i. abarbanel , r. brown , j. j. sidorowich , l. sh .ysimring , rev .* 65 * ( 1993 )1331 - 1392 . h. kantz , t. schreiber , _ nonlinear time series analysis _ , cambridge univ .press , cambridge u.k , 1999 .j. rissanen , ann . stat . * 14 * ( 1989 ) 1080 - 1100 .f. takens , `` detecting strange attractors in turbulence '' , in _ dynamical systems and turbulence _ : lecture notes in mathematics , volume 898 , springer , berlin , 1981 , 366 - 381 .t. sauer , j. a. yorke , m. casdagli , j. stat .* 65 * ( 2000 ) 579 .l. diambra , a. plastino , phys .lett . a * 216 * ( 1996 ) 278 - 282 .m. t. martn , a. plastino , v. vampa , g. judge , physica a * 405 * ( 2014 ) 63 - 69 .m. t. martn , a. plastino , v. vampa , entropy * 16 * ( 2014 ) 4603 - 4611 .a. f. bariviera , m. t. martn , a. plastino , v. vampa , physica a * 449 * ( 2016 ) 401 - 407 .e. t. jaynes , phys . rev .* 106 * ( 1957 ) 620 - 630 .j. c. principe , _ information theoretic learning - renyi s entropy and kernel perspectives _ , springer , new york , 2010 .w. liu , j. c. principe , s. haykin , _ kernel adaptive filtering : a comprehensive introduction _ , wiley , hoboken nj , 2010 .h. d. i. abarbanel , _ analysis of observed chaotic data _ , springer , new york , 1996 .l. diambra , physica a * 278 * 2000 ) 140 - 149 . n. wiener ._ extrapolation , interpolation , and smoothening of stationary time series with engineering applications _ , wiley , new york , 1949 .m. casdagli , physica d * 35 * ( 1989 ) 335 - 356 .j. c. principe , a. rathie , j .-kuo , intl .j. 
of bifurcation and chaos * 2 * 1992 989 - 996 .d. mandic , j. chambers , _ recurrent neural networks for prediction _ , wiley , chichester , 2001 .l. diambra , a. plastino , phys .e * 52 * ( 1995 4557 - 4560. n. i. sapankevych , r. sankar , ieee comput .intell . mag ., * 4 * ( 2009 ) 24 - 38 .j. p. crutchfield , b. s. mcnamara , complex systems , * 21 * ( 1985 ) 417 .j. d. farmer , j. j. sidorowich , phys .* 59 * ( 1987 ) 845 .b. r. frieden , _ science from fisher information - a unification _ , cambridge university press cambridge , 2004 . j.a .wheeler , in zurek w. h. ( ed . ) : _ complexity , entropy and the physics of information _ , addison wesley , new york , 3 - 28 , 1991 .j. hber , _ robust statistics _ , wiley , newy york , 1981 .b. r.frieden , phys .a , * 41 * ( 2000 ) 4265 - 4276 ; optics lett , * 14 * ( 1989 ) 199 - 201 . b. r. frieden , a. plastino , a. r. plastino , b. h. soffer , phys . rev .e * 66 * ( 2002 ) 046128 .frieden , a. plastino , a. r. plastino , b. h. soffer , phys .lett . a * 304 * ( 2002 ) 73 - 78 .j. s. dehesa , .g. martn , p. snchez - moreno , complex anal . andoper . th .* 6 * ( 2012 ) 585 - 601 .w. greiner , _ quantum mechanics .an introduction _ , springer , berlin , 2012 . f. n. fernandez and e. a. castro , _ hypervirial theorems _ ,lecture notes in chemistry , vol .43 , springer - verlag , berlin , 1987 .r. c. venkatesan , , , 487 , ieee press , piscataway , nj , 2007 .r. c. venkatesan , `` encryption of covert information through a fisher game '' , in _ exploratory data analysis using fisher information _ , frieden , b.r . and gatenby , r.a . ,( eds . ) , springer- verlag , london , 181 - 216 , 2006 .b. r.frieden , a. plastino , a. r. plastino , b. h. soffer , phys .e , * 60 * ( 1999 ) 046128 .s. p. flego , a. plastino , a. r. plastino , ann .* 326 * ( 2011 ) 2533 - 2543 .s. p. flego , a. plastino , a. r. plastino , physica a * 390 * ( 2011 ) 2276 - 2282 .s. p. flego , a. plastino , a. r. plastino , physica a * 390 * ( 2011 ) 4702 - 4712 .s. p. flego , a. plastino , a. r. plastino , physica scripta , * 85 * ( 2012 ) 055002 - 055008 .c. villani , _ topics in optimal transportation _ , graduate studies in mathematics vol.*58 * , american mathematical society , 2000. g. blower , _ random matrices : high dimensional phenomena _ , london mathematical society lecture notes , cambridge university press , cambridge , 2009 .zegers , entropy * 17 * ( 2015 ) 4918 - 4939 ; b. r. frieden , b. h. soffer , phys . lett .a * 374 * ( 2010 ) 3895 - 3898 .r. c. venkatesan , a. plastino , phys .a , * 378 * ( 2014 ) 1341 - 1345 .r. c. venkatesan , a. plastino , ann .* 359 * ( 2015 ) 300 - 316. s. m. kay , _ fundamentals of statistical signal processing , vol i : estimation theory _, prentice - hall signal processing series , 1993 . m. casas , f. pennini , a. plastino , phys . lett . a * 235 * ( 1997 ) 457 - 463 .m.c.mackey , l.glass , science * 197 * ( 1977 ) 287 - 289 . g. b. moody , r. g. mark , ieee eng . in med and biol . *20 * ( 2001 ) 45 - 50 .f. m. nolle , f. k. badura , j. m. catlett , r. w. bowser , m. h. sketch , `` crei - gard , a new concept in computerized arrhythmia monitoring systems '' , _ computers in cardiology 1986 _ * 13 * , 515 - 518 , ieee press , piscataway , nj , 1986 .a. l. goldberger , l. a. n. amaral , l. glass , j. m. hausdorff , p. ch .ivanov , r. g. mark , j. e. mietus , g. b. moody , c .- k .peng , h. e. stanley , circulation , * 01*(2000 e215-e220 .g. h. golub , c. l. van loan , _ matrix computations _, third ed . , johns hopkins univ . 
press , baltimore , 1995 .l. diambra , c. p. malta , phys .e * 57 * ( 1999 ) 929 - 937 .b. pompe , `` mutual information and relevant variables for predictions '' , in _ modeling and forecasting financial data techniques for nonlinear dynamics _ , soofi , a. s. and cao , l. , ( eds . ) , springer , new york , pp .61 - 92 , 2002 .a. b. owen , _ empirical likelihood _ ,chapman & hall / crc , boca raton , 2001 .d. guo , s. shamai ( shitz ) and s. verd , _ the interplay between information and estimation measures _ , foundations and trends in signal processing ser ., now publishers , boston , 2012 . m. casas , a. plastino , a. puente , phys .a * 248 * ( 1998 ) 161 - 166 .m. kennel , r. brown , h. d. i. abarbanel , phys .rev . a;*45 * ( 1992 ) 3403 - 3411 .p. i. grassberger , i. procaccia , physica d , * 9 * ( 1983 ) 189 - 208 . j. d. farmer , physica d , * 4 * ( 1982)366 - 393 .r. b. govindan , k. narayanan , m. s. gopinathan , chaos , * 8 * ( 1998 ) 495 - 502 .a. casaleggio , s. braiotta,``study of the lyapunov exponents of ecg signals from mit - bih database '' , _ computers in cardiology 1995 _ , 697 - 700 .ieee press , piscataway , nj , 1995 ; a. casaleggio , s.braiotta , chaos sol .* 9 * ( 1997 ) 1591 - 1599 . | a robust prediction model invoking the takens embedding theorem , whose _ working hypothesis _ is obtained via an inference procedure based on the minimum fisher information principle , is presented . the coefficients of the ansatz , central to the _ working hypothesis _ satisfy a time independent schrdinger - like equation in a vector setting . the inference of i ) the probability density function of the coefficients of the _ working hypothesis _ and ii ) the establishing of constraint driven pseudo - inverse condition for the modeling phase of the prediction scheme , is made , for the case of normal distributions , with the aid of the quantum mechanical virial theorem . the well - known reciprocity relations and the associated legendre transform structure for the fisher information measure ( fim , hereafter)-based model in a vector setting ( with least square constraints ) are self - consistently derived . these relations are demonstrated to yield an intriguing form of the fim for the modeling phase , which defines the _ working hypothesis _ , solely in terms of the observed data . cases for prediction employing time series obtained from the : the mackey - glass delay - differential equation , one ecg sample from the mit - beth israel deaconess hospital ( mit - bih ) cardiac arrhythmia database , and one ecg from the creighton university ventricular tachyarrhythmia database . the ecg samples were obtained from the physionet online repository . these examples demonstrate the efficiency of the prediction model . numerical examples for exemplary cases are provided . fisher information , time series prediction , working hypothesis inference , minimum fisher information , takens theorem , generalized vector fisher - euler theorem , legendre transform structure , mackey - glass equation , ecg s . pacs : 05.20.-y ; 2.50.tt ; 0.3.65.-w ; 05.45.tp |
in population genetics , one way to explain disparity is to observe how many genes appear only once in the sample .a gene carried by a single individual is the result of two possible events : either the gene comes from a mutation that appeared in an external branch of the genealogical tree , either this gene is of the ancestral type and mutations occured in the rest of the sample ( see figure [ fig : graph1 ] ) . we suppose that events of the second type occur in a much less frequent way than events of the first type ( it is indeed the case when the size of the sample goes big ) .the total number of genes carried by a single individual is then closely related to the so - called total external length , which is the sum of all external branch lengths of the tree .the bolthausen - sznitman coalescent ( see for instance ) is a well - known example of exchangeable coalescents with multiple collisions ( see for a proper definition of this type of coalescents ) .it was first introduced in physics , in order to study spin glasses but it has also been thought as a limiting genealogical model for evolving populations with selective killing at each generation , see for instance .recently , berestycki et al . in noted that this coalescent represents the genealogies of the branching brownian motion with absorption .the bolthausen - sznitman coalescent , is a continuous time markov chain with values in the set of partitions of , starting with an infinite number of blocks / individuals . in order to give a formal description of the bolthausen - sznitman coalescent , it is sufficient to give its jump rates .let , then the restriction of to =\{1,\dots , n\} ] , with the following dynamics : whenever is a partition consisting of blocks , any particular of them merge into one block at rate so the next coalescence event occurs at rate note that mergers of several blocks into a single block is possible , but multiple mergers do not occur simultaneously .moreover , this coalescent process is exchangeable , i.e. its law does not change under the effect of a random permutation of labels of its blocks .one of our aims is to study the total external length of the bolthausen - sznitman coalescent .more precisely , we determine the asymptotic behaviour of the total external length of the bolthausen - sznitman coalescent restricted to , when goes to infinity , and relate it to its total length ( the sum of lengths of all external and internal branches ) . a first orientation can be gained from coalescents without proper frequencies .for this class mhle proved that after a suitable scaling the asymptotic distributions of and are equal .now the bolthausen - sznitman coalescent does not belong to this class , but it is ( loosely speaking ) located at the borderline . also it is known for the bolthausen - sznitman coalescent that {d}z,\ ] ] where {d} ] denotes convergence in probability .[ th : cvi ] for the total internal length of the bolthausen - sznitman coalescent , we have {\mathbb{p } } 1.\ ] ] now noting that and using and our main result , we deduce the asymptotic distribution of the total external length .[ cor : cve ] for the total external length of the bolthausen - sznitman coalescent , we have {d}z-1.\ ] ] observe that the bolthausen - sznitman coalescent can be seen as a special case ( ) of the so - called -coalescent which class is defined for .work shows that in the case the variable converges in law to a random variable defined in terms of a driftless subordinator depending on . 
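the quantities appearing in these statements are easy to explore by simulation. the sketch below runs the bolthausen-sznitman coalescent started from n blocks and records the total external and total lengths; the merger rates written in the comments are the standard bolthausen-sznitman rates and are supplied here only because the displayed formulas were stripped from this text, and the printed ratios are meant as rough numerical checks rather than statements of the limit laws.

```python
# monte-carlo sketch of the bolthausen-sznitman coalescent on [n],
# tracking the total external length E_n and the total length L_n.
import numpy as np

def bs_coalescent_lengths(n, rng):
    """one run started from n singleton blocks; returns (E_n, L_n)."""
    blocks = [{i} for i in range(n)]
    external = np.zeros(n)
    total = 0.0
    while len(blocks) > 1:
        b = len(blocks)
        wait = rng.exponential(1.0 / (b - 1))   # total jump rate is b - 1
        total += b * wait
        for blk in blocks:
            if len(blk) == 1:                   # still an external branch
                external[next(iter(blk))] += wait
        # a jump merges k blocks with probability b / ((b-1) k (k-1)),
        # which follows from lambda_{b,k} = (k-2)!(b-k)!/(b-1)!
        ks = np.arange(2, b + 1)
        probs = b / ((b - 1) * ks * (ks - 1))
        k = int(rng.choice(ks, p=probs / probs.sum()))
        chosen = set(int(i) for i in rng.choice(b, size=k, replace=False))
        merged = set().union(*(blocks[i] for i in chosen))
        blocks = [blk for i, blk in enumerate(blocks) if i not in chosen]
        blocks.append(merged)
    return external.sum(), total

rng = np.random.default_rng(0)
n, runs = 1000, 20
samples = [bs_coalescent_lengths(n, rng) for _ in range(runs)]
E = np.array([s[0] for s in samples])
L = np.array([s[1] for s in samples])
print("mean E_n * log(n)/n  :", E.mean() * np.log(n) / n)    # order-one check
print("mean (L_n - E_n)/E_n :", (L - E).mean() / E.mean())   # internal part is smaller
```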
for , we refer to where it is proven that converges weakly to a stable r.v . of index , being a constant also depending on ( see also ) . in kingmans case ( ) a logarithmic correction appears and the limit law is normal ( see ) .the remainder of the paper is structured as follows . in section 2 ,we prove our main results using a coupling method which was introduced in that provides more information of the chain .finally , section [ sec : mut ] is devoted to the asymptotic behaviour of the number of mutations appearing in external and internal branches of the bolthausen - sznitman coalescent .in this section , we use the coupling method introduced in in order to study the number of jumps .let be a sequence of i.i.d .random variables with distribution note that let .it is well - known , see for instance , that {d}z,\ ] ] where is the stable random variable that appears in .we have the following functional limit result , with a limit , which is certainly a lvy process .[ lem : fltv ] the process defined by converges weakly in the skorohod space ] . from ( [ eq : ratio succ ] ) and again the strong markov property at , we get from the above behaviour , ( [ eq : hetaetazeta ] ) and the induction hypothesis , we have for ] , from ( [ eq : recu1 ] ) and the strong markov property , we get we know from the induction hypothesis that , as goes to .we then obtain , from ( [ eq : hetaetazeta ] ) and using again the induction hypothesis , for all ] , as ., asymptotic results for coalescent processes without proper frequencies and applications to the two - parameter poisson - dirichlet coalescent , _ stochastic process ._ * 120 * ( 2010 ) , no .11 , 21592173 . | in this paper , we study a weak law of large numbers for the total internal length of the bolthausen - szmitman coalescent . as a consequence , we obtain the weak limit law of the centered and rescaled total external length . the latter extends results obtained by dhersin mhle . an application to population genetics dealing with the total number of mutations in the genealogical tree is also given . |
a key factor in the efficiency of a peer - to - peer overlay network is the level of collaboration provided by each peer .this paper takes a first step towards quantifying the level of collaboration that can be expected from each participant , by proposing a model to evaluate the cost each peer incurs for being a part of the overlay .such a cost model has several useful applications , among which , ( 1 ) providing a benchmark that can be used to compare between different proposals , complementary to recent works comparing topological properties of various overlays , ( 2 ) allowing for predicting disincentives , and designing mechanisms that ensure a protocol is _ strategyproof _ , and ( 3 ) facilitating the design of load balancing primitives .this work is not the first attempt to characterize the cost of participating in a network .jackson and wolinsky proposed cost models to analyze formation strategies in social and economic networks .more recent studies model ( overlay ) network formation as a non - cooperative game .these studies assume that each node has the freedom to choose which links it maintains , whereas we assume that the overlay topology is constrained by a protocol. moreover , our approach extends previously proposed cost models , by considering the load imposed on each node in addition to the distance to other nodes and degree of connectivity . in the remainder of this paper, we introduce our proposed cost model , before applying it to several routing geometries used in recently proposed distributed hash tables ( dht ) algorithms .we conclude by discussing some open problems this research has uncovered .the model we propose applies to any peer - to - peer network where nodes request and serve items , or serve requests between other nodes .this includes peer - to - peer file - sharing systems , ad - hoc networks , peer - to - peer lookup services , peer - to - peer streaming systems , or application - layer multicast overlays , to name a few examples . to simplify the presentation, we assume a dht - like structure , defined by quadruplet , where is the set of vertices in the network , is the set of edges , is the set of keys ( items ) in the network , and is the hash function that assigns keys to vertices .we denote by the set of keys stored at node .we have , and we assume , without loss of generality , that the sets are disjoint .we characterize each request with two independent random variables , and , which denote the node making the request , and the key being requested , respectively .consider a given node .every time a key is requested in the entire network , node is in one of four situations : 1 .node does not hold or request , and is not on the routing path of the request .node is not subject to any cost .2 . node holds key , and pays a price for serving the request .we define the _ service cost _ incurred by , as the expected value of over all possible requests .that is , \ .\ ] ] 3 .node requests key , and pays a price to look up and retrieve .we model this price as , where is the number of hops between and the node that holds the key , and is a ( positive ) proportional factor .we define the _ access cost _ suffered by node , , as the sum of the individual costs multiplied by the probability key is requested , that is , \ , \label{eq : access}\ ] ] with if there is no path from node to node , and for any .4 . 
does not hold or request , but has to forward the request for , thereby paying a price .the overall _ routing cost _ experienced by node is the average over all possible keys , of the values of such that is on the path of the request .that is , we consider the binary function and express as \pr[y = k ] \chi_{j , l}(i)\ .\label{eq : routing}\ ] ] in addition , each node keeps some state information so that the protocol governing the dht operates correctly . in most dht algorithms ,each node maintains a neighborhood table , which grows linearly with the out - degree of the node , resulting in a _ maintenance cost _ given by where denotes the cost of keeping a single entry in the neighborhood table of node .last , the _ total cost _ imposed on node is given by which can be used to compute the total cost of the network , .the topology that minimizes , or `` social optimum , '' is generally not trivial . in particular ,the social optimum is the full mesh only if for all , and the empty set only if for all .we next apply the proposed cost model to a few selected routing geometries . we define a routing geometry as in , that is , as a collection of edges , or topology , associated with a route selection mechanism . unless otherwise noted , we assume shortest path routing , and distinguish between different topologies . we derive the various costs experienced by a node in each geometry , before illustrating the results with numerical examples .we consider a network of nodes , and , for simplicity , assume that , for all and , , , , and . for the analysis in this section , we also assume that each node holds the same number of keys , and that all keys have the same popularity . as a result , for all , = \frac{1}{n } \ , \ ] ] which implies regardless of the geometry considered .we also assume that requests are uniformly distributed over the set of nodes , that is , for any node , = \frac{1}{n } \ .\ ] ] last , we assume that no node is acting maliciously . [ [ star - network ] ] star network + + + + + + + + + + + + the star frequently appears as an equilibrium in network formation studies using cost models based on graph connectivity .we use to denote the center of the star , which routes all traffic between peripheral nodes .that is , for any ( , ) . substituting in eqn .( [ eq : routing ] ) , we get the center node is located at a distance of one hop from all other nodes , thus in addition , , which implies that the cost incurred by the center of the star , , is peripheral nodes do not route any traffic , i.e. , for all , and are located at a distance of one from the center of the star , and at a distance of two from the other nodes , giving furthermore , for all peripheral nodes .thus , , and the total cost imposed on nodes is the difference quantifies the ( dis)incentive to be in the center of the star . as expressed in the following two theorems , there is a ( dis)incentive to be in the center of the star in a vast majority of cases . if the number of nodes ( ) is variable , unless .[ theo : star - asymmetry - x1 ] assume that . because , is equivalent to . 
using the expressions for and given in eqs .( [ eq : c0-star ] ) and ( [ eq : ci - star ] ) , and rewriting the condition as a polynomial in , we obtain we can factor the above by , and obtain a polynomial in is constantly equal to zero if and only if all of the polynomial coefficients are equal to zero .thus , eqn .( [ eq : polynomial ] ) holds for _ any _ value of if and only if : the solutions of the above system of equations are .hence , for any only when nodes only pay an ( arbitrary ) price for serving data , while state maintenance , traffic forwarding , and key lookup and retrieval come for free . if the number of nodes ( ) is held fixed , and at least one of , , or is different from zero , only if or , where is a positive integer that must satisfy : additionally , for any if and .[ theo : star - asymmetry - x2 ] recall from the proof of theorem [ theo : star - asymmetry - x1 ] , that is equivalent to eqn .( [ eq : polynomial ] ) . clearly , setting satisfies eqn .( [ eq : polynomial ] ) for all values of , , and .assuming now that , to have , we need to have since at least one of , , or is not equal to zero , eqn .( [ eq : degree2 ] ) has at most two real solutions .we distinguish between all possible cases for , , and such that at least one of , , and is different from zero . * if , and , eqn .( [ eq : degree2 ] ) reduces to , which implies , thereby contradicting the hypothesis that at least one of , , and is different from zero .therefore , eqn .( [ eq : degree2 ] ) does not admit any solution , i.e. , there is a ( dis)incentive to be in the center of the star regardless of . * if and , the only solution to eqn .( [ eq : degree2 ] ) is note that if , which is not feasible .( the number of nodes has to be positive . ) * if , then eqn . ( [ eq : degree2 ] ) admits two real roots ( or a double root if ) , given by however , because , and , so that the only potentially feasible is given by combining eqs .( [ eq : degree1-sol ] ) and eqs .( [ eq : degree2-sol ] ) yields the expression for given in eqn .( [ eq : n0 ] ) .note that the expression given in eqn .( [ eq : n0 ] ) is only a necessary condition .in addition , has to be an integer so that we can set the number of nodes to .[ [ de - bruijn - graphs ] ] de bruijn graphs + + + + + + + + + + + + + + + + de bruijn graphs are used in algorithms such as koorde , distance - halving , or odri , and are extensively discussed in . in a de bruijn graph , any node is represented by an identifier string of symbols taken from an alphabet of size .the node represented by links to each node represented by for all possible values of in the alphabet .the resulting directed graph has a fixed out - degree , and a diameter .denote by the set of nodes such that the identifier of each node in is of the form .nodes in link to themselves , so that for . for nodes , the maintenance cost is .the next two lemmas will allow us to show that the routing cost at each node also depends on the position of the node in the graph . with shortest - path routing , nodes not route any traffic , and .[ lemma : lowerbnd - deb - routes ] ( by contradiction . )consider a node with identifier , and suppose routes traffic from a node to a node .the nodes linking to are all the nodes with an identifier of the form , for all values of in the alphabet .the nodes linked from are all the nodes of the form for all values of in the alphabet .therefore , there exists and such that traffic from node to node follows a path . 
because , in a de bruijn graph , there is an edge between and , traffic using the path between and does not follows the shortest path .we arrive to a contradiction , which proves that does not route any traffic .the number of routes passing through a given node is bounded by with the bound is tight , since it can be reached when for the node .[ lemma : upperbnd - deb - routes ] the proof follows the spirit of the proof used in to bound the maximum number of routes passing through a given edge . in a de bruijn graph , by construction, each node maps to an identifier string of length , and each path of length hops maps to a string of length , where each substring of consecutive symbols corresponds to a different hop .thus , determining an upper bound on the number of paths of length that pass through a given node is equivalent to computing the maximum number , , of strings of length that include node s identifier , , as a substring . in each string of length corresponding to a paths including , where is neither the source nor the destination of the path , the substring can start at one of positions .there are possible choices for each of the symbols in the string of length that are not part of the substring . as a result , with shortest path routing , the set of all paths going through node include all paths of length with ] . because is a binary function , = e[\chi_{i , j}] ] and ] .we have = \sum_{k=1}^{d } k \pr[t_{i , j } = k ] \ , \ ] ] which , using the expression for $ ] given in eqn .( [ eq : tij - lpm ] ) , implies = \sum_{k=0}^{d } k \frac{{d\choose k}(\delta - 1)^k}{n } \ , \ ] ] and can be expressed in terms of the derivative of a classical series : = \frac{\delta-1}{n}\frac{\partial}{\partial\delta}\left(\sum_{k=0}^{d } { d\choose k}(\delta - 1)^k\right)\ .\ ] ] using the binomial theorem , the series on the right - hand side collapses to , which yields = \frac{\delta-1}{n}\frac{\partial(\delta^d)}{\partial\delta } \ .\ ] ] we compute the partial derivative , and obtain = \frac{d\delta^{d-1}(\delta-1)}{n } \ .\nonumber\ ] ] multiplying by to obtain , we eventually get , for all , which can be simplified , using : [ [ chord - rings ] ] chord rings + + + + + + + + + + + in a chord ring , nodes are represented using a binary string ( i.e. , ) .when the ring is fully populated , each node is connected to a set of neighbors , with identifiers for .an analysis identical to the above yields and as in eqs .( [ eq : routing - lpm ] ) and ( [ eq : access - lpm ] ) for .note that eqn .( [ eq : access - lpm ] ) with is confirmed by experimental measurements . .[ tab : debruijn ] asymmetry in costs in a de bruijn graph [ cols="^,^,^,^,^,^,^",options="header " , ] we illustrate our analysis with a few numerical results . in table[ tab : debruijn ] , we consider five de bruijn graphs with different values for and , and and i.i.d .uniform random variables .table [ tab : debruijn ] shows that while the access costs of all nodes are comparable , the ratio between and the second best case routing cost , over all nodes but the nodes in for which . 
] , is in general significant .thus , if , there can be an incentive for the nodes with to defect .for instance , these nodes may leave the network and immediately come back , hoping to be assigned a different identifier and incurring a lower cost .additional mechanisms , such as enforcing a cost of entry to the network , may be required to prevent such defections .we graph the access and routing costs for the case , and in figure [ fig : debruijn ] .we plot the access cost of each node in function of the node identifier in figure [ fig : debruijn](a ) , and the routing cost of each node in function of the node identifier in figure [ fig : debruijn](b ) .figure [ fig : debruijn ] further illustrates the asymmetry in costs evidenced in table [ tab : debruijn ] , by exhibiting that different nodes have generally different access and routing costs .therefore , in a de bruijn graph , there is potentially a large number of nodes that can defect , which , in turn , may result in network instability , if defection is characterized by leaving and immediately rejoining the network .next , we provide an illustration by simulation of the costs in the different geometries .we choose , for which the results for plaxton trees and chord rings are identical .we choose for the -dimensional tori , and for the other geometries .we point out that selecting a value for and common to all geometries may inadvertently bias one geometry against another .we emphasize that we only illustrate a specific example here , without making any general comparison between different dht geometries .we vary the number of nodes between and , and , for each value of run ten differently seeded simulations , consisting of 100,000 requests each , with and i.i.d .uniform random variables .we plot the access and routing costs averaged over all nodes and all requests in figure [ fig : all ] .the graphs show that our analysis is validated by simulation , and that the star provides a lower average cost than all the other geometries .in other words , a centralized architecture appears more desirable to the community as a whole than a distributed solution .however , we stress that we do not consider robustness against attack , fault - tolerance , or potential performance bottlenecks , all being factors that pose practical challenges in a centralized approach , nor do we offer a mechanism creating an incentive to be in the center of the star .while the cost model proposed here can be used to quantify the cost incurred by adding links for a higher resiliency to failures , we defer that study to future work .we proposed a model , based on experienced load and node connectivity , for the cost incurred by each peer to participate in a peer - to - peer network .we argue such a cost model is a useful complement to topological performance metrics , in that it allows to predict disincentives to collaborate ( peers refusing to serve requests to reduce their cost ) , discover possible network instabilities ( peers leaving and re - joining in hopes of lowering their cost ) , identify hot spots ( peers with high routing load ) , and characterize the efficiency of a network as a whole .we believe however that this paper raises more questions than it provides answers .first , we only analyzed a handful of dht routing geometries , and even omitted interesting geometries such as the butterfly , or geometries based on the xor metric . 
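to make the preceding cost expressions concrete, the short script below evaluates the per-node access and routing costs on a small de bruijn graph with uniform keys and requesters, using one bfs shortest path per (source, destination) pair with a deterministic tie-break. the alphabet size, identifier length and unit prices are illustrative values only; the script also lets one verify the claim above that nodes whose identifier repeats a single symbol carry no routing load under shortest-path routing.

```python
# per-node access and routing costs on a de bruijn graph, with uniform
# keys and requesters and bfs shortest-path routing.
from collections import deque
from itertools import product

Delta, D = 2, 4                         # alphabet size, identifier length
delta_a, delta_r = 1.0, 1.0             # unit lookup-hop and forwarding prices
alphabet = '0123456789'[:Delta]
nodes = [''.join(s) for s in product(alphabet, repeat=D)]
succ = {u: [u[1:] + c for c in alphabet] for u in nodes}   # left-shift edges
n = len(nodes)

def shortest_path(src, dst):
    """one bfs shortest path from src to dst (deterministic tie-break)."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in succ[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

access = {u: 0.0 for u in nodes}        # expected lookup cost per request
routing = {u: 0.0 for u in nodes}       # expected forwarding cost per request
for src in nodes:
    for dst in nodes:                   # keys and requesters uniform over nodes
        path = shortest_path(src, dst)
        access[src] += delta_a * (len(path) - 1) / n
        for mid in path[1:-1]:          # strictly intermediate nodes forward
            routing[mid] += delta_r / (n * n)

for u in sorted(nodes, key=lambda v: -routing[v]):
    print(u, round(access[u], 3), round(routing[u], 4))
# nodes with a repeated-symbol identifier ('0000', '1111') end up with zero
# routing load, in line with the shortest-path routing lemma above
```

swapping the succ table for a star topology (every peripheral node pointing to a single hub, and the hub pointing to everyone) reproduces, with the same few lines, the asymmetry between the center and the peripheral nodes discussed earlier.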
applying the proposed cost model to deployed peer - to - peer systems such as gnutella or fasttrackcould yield some insight regarding user behavior .furthermore , for the mathematical analysis , we used strong assumptions such as identical popularity of all items and uniform spatial distribution of all participants . relaxing these assumptionsis necessary to evaluate the performance of a geometry in a realistic setting .also , obtaining a meaningful set of values for the parameters for a given class of applications ( e.g. , file sharing between pcs , ad - hoc routing between energy - constrained sensor motes ) remains an open problem .finally , identifying the minimal amount of knowledge each node should possess to devise a rational strategy , or studying network formation with the proposed cost model are other promising avenues for further research .k. gummadi , r. gummadi , s. gribble , s. ratnasamy , s. shenker , and i. stoica . the impact of dht routing geometry on resilience and proximity . in _ proceedings of acm sigcomm03_ , pages 381394 , karlsruhe , germany , august 2003. m. f. kaashoek and d. karger .koorde : a simple degree - optimal distributed hash table . in _ proceedings of the 2nd international workshop on peer - to - peer systems ( iptps03 ) _ ,pages 323336 , berkeley , ca , february 2003 .d. loguinov , a. kumar , v. rai , and s. ganesh .graph - theoretic analysis of structured peer - to - peer systems : routing distances and fault resilience . in _ proceedings of acm sigcomm03 _ ,pages 395406 , karlsruhe , germany , august 2003 .p. maymounkov and d. mazires .kademlia : a peer - to - peer information system based on the xor metric . in _ proceedings of the 1st international workshop on peer - to - peer systems ( iptps02 ) _ ,pages 5365 , cambridge , ma , february 2002 . c. ng , d. parkes , and m. seltzer .strategyproof computing : systems infrastructures for self - interested parties . in _ proceedings of the 1st workshop on the economics of peer - to - peer systems _ , berkeley , ca , june 2003 . a. rowston and p. druschel .pastry : scalable , decentralized object location and routing for large scale peer - to - peer systems . in _ proceedings of the 18th ifip / acm international conference on distributed systems platform ( middleware01 ) _ , pages 329350 ,heidelberg , germany , november 2001 . | in this paper , we model the cost incurred by each peer participating in a peer - to - peer network . such a cost model allows to gauge potential disincentives for peers to collaborate , and provides a measure of the `` total cost '' of a network , which is a possible benchmark to distinguish between proposals . we characterize the cost imposed on a node as a function of the experienced load and the node connectivity , and show how our model applies to a few proposed routing geometries for distributed hash tables ( dhts ) . we further outline a number of open questions this research has raised . |
information discovery is central to many activities in life , from finding restaurants while attending a conference to making strategic decisions in a big company . in science ,information discovery is , for example , used to stay up - to - date , doing literature research , and it is crucial in the process of scientific research itself . in a worldwhere online information is generated 24 hours a day , 7 days a week , this journey of discovery can easily become a daunting task .we need powerful discovery tools to help us on this journey .general search engines support a low level of information retrieval , sufficient to get a general idea , but when you are looking for technical material in a rich metadata environment , you need specialized digital libraries .the sao / nasa astrophysics data system ( ads ) is such a digital library ( , ) .it has very successfully served the astronomy and physics community for almost 20 years , free of charge . in order to support a richer and more efficient information discovery experience, we created the ads labs environment , in which we expose our users to new search paradigms and tools , to better support our community s research needs . in the followingwe will argue that the ads labs environment will prove to be a useful tool for people involved in science education and outreach .general search engines return so many results that it quickly consumes excessive amounts of time to parse all these results and determine if any of these are relevant in a science education environment . as a result, instructors often return to resources they have been using time and time again , while missing out on a wealth of new material continuously being added through a broad spectrum of resources .a specialized digital library is a highly efficient tool to locate these resources . through its contents and functionality, ads labs will prove to be such tool .in addition to publications relevant for scientific research , the ads repository also contains a set of journals that are directly relevant to people involved in science education .examples of such journals are : _ astronomy education review _ , _ american journal of physics _ ; _ the science teacher _ ; _ journal of science education and technology _ ; _ journal of science teacher education _ ;_ international journal of science education _ ; _ research in science education _ ; _ science & education _ ; _ spark , the aas education newsletter_. a large portion of the astronomical research of the 19th and early 20th centuries was reported in publications written and published by individual observatories .many of these collections were not widely distributed and complete sets of these volumes are now , at best , difficult to locate . 
this material is often requested by amateur astronomers and researchers , because the observatory report is the only published record of the research and observations .this makes these publications a great resource for classes that have a component dealing with the history of astronomy .we have a large number of historical publications ( mostly from microfilm ) in our repository of scanned literature ( ) .the functionality that makes ads labs such a powerful tool is a combination of being able to specify ahead of time what kind of results you are interested in , and the ability to efficiently filter the results afterwards , using facets .we will illustrate this using an example .imagine you are looking for publications describing or on discussing extrasolar planets in a classroom environment .figure [ screenshot : streamlined ] shows how you could start this search , using the `` streamlined search '' of ads labs . in the search boxwe specify `` extrasolar planets classroom '' , as search scope we select `` all '' ( next to the search button ) , to search the entire ads repository , and we specify `` most relevant '' for sorting .this sorts on a combination of several indicators , including date , position of the query words in the document , position of the author in the author list , citation statistics and usage statistics ( this is how many popular search engines rank ) .this example query generates the results list shown in figure [ screenshot : results ] .the results page consists of a list of publications with a panel of facets on the left , which offer an efficient way of further filtering the results . besides serving as a filter, the author facet summarizes the people active in the field defined by the query .the author facet also shows all the spelling variants by which author names occur in the ads repositories ( by clicking on the author name ) , which can assist in filtering out different authors with the same initials ( where e.g. the first name is spelled out ) .the diagram below the facets shows the number of publications as a function of year , providing a measure for the activity in a field . 
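the facet mechanics described here are easy to mimic on a small result set . the sketch below is a toy illustration , not the actual ads implementation : the record fields ( authors , year ) and the example bibcodes are made up , and the real facets are of course computed server-side over the full result list .

```python
from collections import Counter

# toy result records; the fields are assumptions, not the actual ads record schema
results = [
    {"bibcode": "2004xxx..000....1a", "authors": ["adams, f.", "shu, f."], "year": 2004},
    {"bibcode": "2006xxx..000....2b", "authors": ["baraffe, i."], "year": 2006},
    {"bibcode": "2006xxx..000....3a", "authors": ["adams, f."], "year": 2006},
]

def facet_counts(records, field):
    """count how often each value of `field` occurs (list-valued fields are counted per element)."""
    counts = Counter()
    for rec in records:
        value = rec[field]
        counts.update(value if isinstance(value, list) else [value])
    return counts

def apply_facet(records, field, selected):
    """keep only the records whose `field` contains (or equals) the selected value."""
    kept = []
    for rec in records:
        value = rec[field]
        values = value if isinstance(value, list) else [value]
        if selected in values:
            kept.append(rec)
    return kept

print(facet_counts(results, "authors"))             # the author facet with per-author counts
print(facet_counts(results, "year"))                # the publications-per-year histogram
print(apply_facet(results, "authors", "adams, f.")) # filtering on one author
```

the year facet computed this way is exactly the publications-per-year histogram mentioned above , and applying a facet simply reruns the counts on the reduced record list .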
in this way the facets , besides serving as filters , also provide you with valuable information .the `` data '' facet provides you with an overview of available data products , if available .every entry in the results list has a potential option to look inside the publication .if an abstract is available , the `` matches in abstract '' link will open a snippet showing the matches in the abstract of the query terms .the link `` matches in fulltext '' will show these matches in the full text version of the publication ( which could be the preprint version from arxiv ) .the results page also provides the menu `` more '' ( in the upper right corner ) that contains tools to further explore the results .the `` author network '' allows one to visualize collaborations between authors and further filter the search results by selecting within the nodes in the network .the paper network allows one to visualize the relationships between papers and further filter the search results by selecting within the nodes in the network .if the publications in the results list have astronomical objects identified in them , you can visualize these using the `` sky map '' option .this uses google sky .the sky map can also be used for further filtering .the `` word cloud '' visualizes the most relevant terms found in the list of results and further filter the list by selecting interesting terms from the cloud .the word cloud is only indirectly based on word frequencies : the word frequencies in the astronomy corpus are subtracted , so that relatively frequent words stand out .the `` metrics '' page provides an overview of bibliometric indicators , based on the publications in the results list .this overview can be exported in excel format .the next level is the abstract page ( see figure [ screenshot : abstract ] ) .the principal function of this page is to provide the user with a concentrated description of an article , sufficient for the user to decide whether or not to download and read it .this page provides a view on the basic metadata ( authors , affiliations , title , abstract ) .it also provides links to the full text ( including open access versions ) and opportunities to share the abstract on various social media sites , or save it in an ads private library ( available for users with an ads account , see below ) .when sufficient data is available , a set of `` suggested articles '' is provided , which is generated by a recommender system ( ) .depending on availability , the abstract view provides access to the bibliography of the publication ( those papers cited by the publication , for which there is a record in the ads ) , an overview of publications citing the publication , a list of papers most frequently read in conjunction with this publication ( `` co - reads '' ) and a list of papers that are similar to this publication , based on word similarity ( `` similar articles '' ) .figure [ screenshot : streamlined ] has a area on the right with two tabs ( `` myads articles '' and `` recently viewed articles '' ) .these relate to the existence of user accounts in the ads .the ads offers the option to create a login , providing its users with the possibility to personalize the service .this includes access to a powerful alert service called `` myads '' ( see ) , the creation of `` private libraries '' ( which are essentially baskets in which users can store , and annotate , links to publications ) and a way to specify a `` library link server '' , allowing access to full text using institutional 
subscriptions .this makes the ads portable , because it provides you with online access to the full text of articles from anywhere in the world .when a user is logged in to their account , visiting the streamlined search will result in displaying the most recent content for their `` daily myads '' service and an overview of their most recently viewed records . as an illustration of finding historical material ,consider the following example : you are interested in the history and use of an instrument called the `` mural circle '' .when you run the query `` mural circle " 1800 - 1850 '' in the streamlined search , you will find about two dozen results .these results show that in this period there were such instruments at the madras observatory , royal greenwich observatory , armagh observatory , u.s .naval observatory and cape observatory .most publications in this results list are available in pdf format .through the publications in its holdings and the user - friendly , intuitive streamlined search , the ads is a useful instrument in the tool box of search engines for professionals involved in science education and outreach .we do realize that this is a group of users with requirements and needs that , in some aspects , differ significantly from those that have traditionally been using the ads .we would love to get feedback and suggestions to help us optimize the search experience for all our users .our users are a big part of our curation efforts , so if you encounter material in our database relevant for science education , but not flagged as such , we would love to hear from you as well .feedback should be sent to ads.harvard.edu .henneken , e. , kurtz , m. j. , eichhorn , g. , et al .2007 , library and information services in astronomy v , 377 , 106 henneken , e. a. , kurtz , m. j. , & accomazzi , a. 2011 , arxiv:1106.5644 henneken , e. a. , kurtz , m. j. , accomazzi , a. , et al .2011 , astrophysics and space science proceedings , 1 , 125 kurtz , m. j. , eichhorn , g. , accomazzi , a. , grant , c. s. , murray , s. s.,watson , j. m. ( 2000 ) .astronomy and astrophysics supplement series 143 , 41 - 59 .thompson , d. m. , accomazzi , a. , eichhorn , g. , et al .2007 , library and information services in astronomy v , 377 , 102 | the sao / nasa astrophysics data system ( ads ) is an open access digital library portal for researchers in astronomy and physics , operated by the smithsonian astrophysical observatory ( sao ) under a nasa grant , successfully serving the professional science community for two decades . currently there are about 55,000 frequent users ( 100 + queries per year ) , and up to 10 million infrequent users per year . access by the general public now accounts for about half of all ads use , demonstrating the vast reach of the content in our databases . the visibility and use of content in the ads can be measured by the fact that there are over 17,000 links from wikipedia pages to ads content , a figure comparable to the number of links that wikipedia has to oclc s worldcat catalog . the ads , through its holdings and innovative techniques available in ads labs ( http://adslabs.org ) , offers an environment for information discovery that is unlike any other service currently available to the astrophysics community . literature discovery and review are important components of science education , aiding the process of preparing for a class , project , or presentation . 
the ads has been recognized as a rich source of information for the science education community in astronomy , thanks to its collaborations within the astronomy community , publishers and projects like compadre . one element that makes the ads uniquely relevant for the science education community is the availability of powerful tools to explore aspects of the astronomy literature as well as the relationship between topics , people , observations and scientific papers . the other element is the extensive repository of scanned literature , a significant fraction of which consists of historical literature . |
the statistical analysis that combines the results of several independent is known as meta - analysis and it is used in clinical trails and behavioral sciences .consider we have independent normal populations with means and variances .also we have a random samples of sizes , from each one .we denote these samples by , where and are constant .the problem of interest is to combine the summary statistics of samples for statistical inference about the parameter .the statistical analysis that combines the results of several independent used in clinical trails and behavioral sciences . if and then , and this problem is known as the common mean for several normal populations .there are some inference for this problem in statistical literature .for example see ; krishnamoorthy and lu ( 2003 ) , lin and lee ( 2005 ) .if and then and this is equivalent to problem of common mean of several lognormal populations .our interest in this paper is inference about this problem . forthe common lognormal mean , a few authors proposed approximate methods : ahmed et al ( 2001 ) proposed an estimator and approximate confidence interval for the common lognormal mean ; baklizi and ebrahem ( 2005 ) studied several types of large samples and bootstrap intervals ; gupta and li ( 2005 ) developed procedures for estimating the common mean and investigated the performance of the resulting confidence interval for two lognormal populations . in this paper , we first propose estimation of when the variances , are known .then two methods are given that are applicable for both hypothesis testing and interval estimation for , based on the concepts of generalized -value and generalized confidence interval .these methods are based on extending the method of krishnamoorthy and lu ( 2003 ) and the method of lin and lee ( 2005 ) , which are used for the problem of common mean of several normal populations .our methods also are applicable for the common mean of several lognormal for the interval mean of lognormal populations .this cahpter also is devoted to a short review regarding the existing method for inference of the common lognormal mean and application of our two methods for this problem .finally , we give a numerical example for the common lognormal mean and by monte carlo simulation , we compare the coverage probabilities , size and power of these methods for the common mean of two lognormal populations .let , , , where are constants and s are known .the estimator is umvue and mle for and the probability density function for is since the distribution of is from exponential family , in the form , then is umvue for and is umvue for ( see casella and berger , 1990 , page 263) it is easy to prove the rest of the theorem . if b=0 then is the best linear unbiased estimator for if i.e. is a lognormal variable , then is umvue for , but the mle of is if are unknown , then we can not find a closed form for mle s of ; we have to use a numerical approximationsuppose , , , where are constants . for the population ,let be the sample mean and sample variance . 
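the monte carlo machinery used throughout the following sections can be sketched compactly . the code below is a minimal illustration for the simplest case , a common normal mean with a krishnamoorthy-lu type pivot of the kind that the construction below extends ; the summary statistics , the hypothesised value and the number of draws are made-up examples , and the paper's lognormal-mean pivots additionally carry the variance term of the lognormal mean , a refinement omitted here .

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_pivot_draws(xbar, s2, n, m=100000):
    """monte carlo draws of a generalized pivot for a common normal mean:
    for each population i, t_i = xbar_i - z_i * s_i * sqrt((n_i - 1)/(v_i * n_i))
    with z_i ~ n(0,1) and v_i ~ chi2(n_i - 1); the t_i are combined with weights
    proportional to n_i * v_i / ((n_i - 1) * s_i**2)."""
    xbar, s2, n = map(np.asarray, (xbar, s2, n))
    k = len(n)
    z = rng.standard_normal((m, k))
    v = rng.chisquare(n - 1, size=(m, k))
    t_i = xbar - z * np.sqrt(s2) * np.sqrt((n - 1) / (v * n))
    w = n * v / ((n - 1) * s2)
    w = w / w.sum(axis=1, keepdims=True)
    return (w * t_i).sum(axis=1)

# illustrative log-scale summary statistics (sample means, variances, sizes)
draws = generalized_pivot_draws(xbar=[2.1, 2.3], s2=[0.8, 1.1], n=[25, 30])

mu0 = 2.0                                              # hypothesised common mean
p_two_sided = 2 * min(np.mean(draws <= mu0), np.mean(draws >= mu0))
ci_lo, ci_hi = np.quantile(draws, [0.025, 0.975])      # 95% generalized confidence interval
print(p_two_sided, (ci_lo, ci_hi))
```

the generalized p-value is simply the fraction of monte carlo draws beyond the hypothesised value , and the generalized confidence interval is read off the empirical quantiles of the same draws .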
in this section , by using the idea of generalized -value and by extending ( i ) the method of krishnamoorthy and lu ( 2003 ) and ( ii ) the method of lin and lee ( 2005 ) , for the problem of common mean of normal populations , we give two generalized pivot variables for interval estimation and hypothesis testing for and we obtain two generalized -values for testing hypothesis it is clear that , .therefore , the generalized pivot variable for estimating based on the sample is where and , is the observed value of , .the generalized pivot variable for estimating based on the sample is given by where are independent random variables ( weerahandi , 1995 ) .the generalized variable that we want to propose is a weighted average of the generalized pivot variables in ( [ eq2.2.2 ] ) .the weights are inversely proportional to the generalized pivot variables in ( [ eq2.2.3 ] ) for the variances , and they are directly proportional to the sample sizes .( see krishnamoorthy and lu , 2003 ) .let and , with the observed values and , respectively .then , the generalized variable can be expressed as } { a\sum\limits_{j=1}^{k}\dfrac{n_{j}v_{j}}{(n_{j}-1)s_{j}^{2}}}-\mu \\ & = & \sum\limits_{i=1}^{k}w_{i}t_{i}^{\ast } -\mu , \nonumber\end{aligned}\ ] ] where the weights are the distribution of is an increasing function with respect to .therefore , the generalized -value for ( [ eq2.2.2 ] ) is given by this generalized -value can be well approximated by a monte carlo simulation using the following algorithm : [ alg2.1 ] for a given , and ( : for generate generate generate compute compute ( end loop ) let if , else . then is a monte carlo estimate of the generalized -value for ( [ eq2.2.5 ] ) . is a generalized pivot variable for and we can use that to obtain a generalized confidence interval for .if and then } { \sum\limits_{j=1}^{k}\dfrac{n_{j}v_{j}}{(n_{j}-1)s_{j}^{2}}}-\mu\ ] ] and this generalized variable is introduced by krishnamoorthy and lu ( 2003 ) for inference on the common mean of several normal populations . from theorem 1 , we have we know that is a generalized pivot variable for where let and , with the observed values and , respectively .we define a generalized variable for based on the umvue for in ( [ eq2.1.1 ] ) by the distribution of is an increasing function with respect to , and therefore the generalized -value for testing ( [ eq2.2.1 ] ) is , \nonumber\end{aligned}\ ] ] where and is distribution function of the standard normal variable and expectation is taken with respect to chi - square random variables with degrees of freedom .this generalized -value can be well approximated by a monte carlo simulation like the algorithm [ alg2.1 ] . in ( 2.10 ) is a generalized pivot variable for and we can use that to obtain a generalized confidence interval for . if and then which is a generalized variable , introduced by lin and lee ( 2005 ) , for the common mean of several normal populations . for testing the hypothesis of the form the -value is and can be rejected when .consider independent with lognormal distribution , for , , and assume that , where , i.e. 
, the lognormal populations have common mean therefore , we have , where , and to find a confidence interval for , it is enough to have a confidence interval for , and a hypothesis test for is equivalent to a hypothesis test for .for example the hypothesis test is equivalent to it is useful to review the existing methods for the problem of common lognormal mean .let , then a combined sample estimate of is given by where , and the estimator is asymptotically normal with mean and asymptotic variance , which can be estimated by therefore , a confidence interval for is the acceptance set for all is this is a quadratic function in whose two roots can be found directly .since the coefficient of in this expression is positive , it follows that the set of all values of between the two roots is the desired confidence interval .let be a vector of parameters , where and is the common mean .the joint log - likelihood function based on the log - transformed data of two independent log - normal populations is given by where let is mle for the asymptotic variance of is where and are mles for and a confidence interval for is in fact , the problem of common lognormal mean is a special case of our model when and thus , the generalized variable in ( [ eq2.2.4 ] ) becomes } { \sum\limits_{j=1}^{k}\dfrac{n_{j}v_{j}}{(n_{j}-1)s_{j}^{2}}}-\mu , \ ] ] and the generalized variable in ( [ eq2.2.7 ] ) becomes this section , we give a numerical example and compare our methods with other methods for the problem of common lognormal mean .the data come from the regenstrief medical record system ( rmrs ) ( mcdonald et al , 1988 ; zhou et al , 1997 ) on effects of race on medical charges of patients with type i diabetes who had received inpatient or outpatient care at least two occasions during the period from 1 january 1993 , through 30 june 1994 .the data set consists of 119 african american patients and 106 white patients . the mean medical charges and their corresponding variance for the african american and white groups are given in table [ table2.1 ] .gupta , r. c. and li , x. ( 2005 ) .statistical inferences on the common mean of two log - normal distributions and some applications in reliability , appeared in _ computational statistics and data analysis_. krishnamoorthy , k. and mathew , t. ( 2003 ) .inferences on the means of lognormal distributions using generalized p - values and generalized confidence interval , _ journal of statistical planning and inference _ , 115 , 103 - 121 . | a hypothesis testing and an interval estimation are studied for the common mean of several lognormal populations . two methods are given based on the concept of generalized p - value and generalized confidence interval . these new methods are exact and can be used without restriction on sample sizes , number of populations , or difference hypotheses . a simulation study for coverage probability , size and power shown that the new methods are better than the existing methods . a numerical example is given with some real medical data . kewwords : lognormal population , common mean , generalized variable , generalized p - value , generalized confidence interval . |
dusty circumstellar disks have been the focus of intense observational interest in recent years , largely because they are thought to be the birthplaces of planetary systems .these observational efforts have yielded many new insights on the structure and evolution of these disks . in spite of major developments inspatially resolved observations of these disks , much of our knowledge of their structure is still derived from spatially _un_resolved spectroscopy and spectral energy distributions ( seds ) .the interpretation of this information ( as well as spatially resolved data ) requires the use of theoretical models , preferentially with as much realism and self - consistency as possible .such disk models have been developed and improved over many years .when they are in reasonable agreement with observations they can also serve as a background onto which other processes are modeled , such as chemistry , grain growth , and ultimately the formation of planets .this chapter reviews the development of such self - consistent disk structure models , and discusses the current status of the field .we restrict our review to models primarily aimed at a comparison with observations. we will start with a concise resum of the formation and viscous evolution of disks ( section [ sec - viscevol ] ) .this sets the radial disk structure as a function of time .we then turn our attention to the vertical structure , under the simplifying assumption that the gas temperature equals the dust temperature everywhere ( section [ sec - diskstruct ] ) . while this assumption is valid in the main body of the disk , it breaks down in the disk surface layers .the formation of stars and planetary systems starts with the gravitational collapse of a dense molecular cloud core .since such a core will always have some angular momentum at the onset of collapse , most of the infalling matter will not fall directly onto the protostar , but form a disk around it while matter falls onto the disk , viscous stresses within the disk will transport angular momentum to its outer regions . as a consequence of this ,most of the disk matter moves inward , adding matter to the protostar , while some disk matter moves outward , absorbing all the angular momentum ( _ lynden - bell and pringle _ , ) . during its formation and evolutiona disk will spread out to several 100 au or more ( _ nakamoto and nakagawa _, , henceforth nn94 ; _ hueso and guillot _ , , henceforth hg05 ) .this spreading is only stopped when processes such as photoevaporation ( this chapter ) , stellar encounters ( _ scally and clarke _ , ; _ pfalzner et al ._ , ) or a binary companion ( _ artymowicz and lubow _ , ) truncate the disk from the outside . during the collapse phase , which lasts a few years, the accretion rate within the disk is very high ( ) , but quickly drops to once the infall phase is over ( nn94 , hg05 ) .the optical and ultraviolet excess observed from classical t tauri stars ( cttss ) and herbig ae / be stars ( haebes ) confirms that this on - going accretion indeed takes place ( _ calvet et al ._ , , and references therein ) . in fig .[ fig - hueso ] we show the evolution of various disk and star parameters. evolution of various disk and star quantities as a function of time after the onset of collapse of the cloud core ( after _ hueso and guillot _ , ) .solid lines : stellar mass ( upper ) and disk mass ( lower ) .dotted line : accretion rate _ in the disk_. 
, width=302 ] an issue that is still a matter of debate is what constitutes the viscosity required for disk accretion , molecular viscosity is too small to account for the observed mass accretion rates .turbulent and magnetic stresses , however , can constitute some kind of anomalous viscosity .the magnetorotational instability ( mri ) , present in weakly magnetized disks , is the most accepted mechanism to drive turbulence in disks and transport angular momentum outwards ( _ balbus and hawley _, ; _ stone and pringle _ , ; _ wardle _ , , and references therein ) .there is a disk region ( 0.2 4 au , for typical ctts disk parameters according to _ dalessio et al ._ , ) in which the ionization fraction is smaller than the minimum value required for the mri . neither thermal ionization ( requiring a temperature higher than 1000 k ) , cosmic ray ionization ( requiring a mass surface density smaller than 100 g/ ) ( _ jin _ , ; _ gammie _ , ) , nor x - rays ( _ glassgold et al ._ , , ) are able to provide a sufficient number of free electrons to have mri operating near the midplane . _gammie _ ( ) proposed a layered accretion disk model , in which a `` dead zone '' is encased between two actively accreting layers .the precise extent of this dead zone is difficult to assess , because the number density of free electrons depends on detailed chemistry as well as the dust grain size distribution , since dust grains tend to capture free electrons ( _ sano et al ._ , , and references therein ) .if the disk dust is like the interstellar dust , the mri should be inhibited in large parts of the disk ( _ ilgner and nelson _ , ) , though this is still under debate ( e.g. _ semenov et al ._ , ; ) .there are also other ( non - magnetic ) mechanisms for anomalous viscosity , like the baroclinic instability ( _ klahr and bodenheimer _ , ) or the shear instability ( _ dubrulle et al ._ , ) , which are still subject to some controversy ( see the recent review by _ gammie and johnson _ , ) .angular momentum can also be transferred by global torques , such as through gravitational spiral waves ( _ tohline and hachisu _, ; _ laughlin and bodenheimer _ , ; _ pickett et al . _ , and references therein ) or via global magnetic fields threading the disk ( _ stehle and spruit _ , ) ,possibly with hydromagnetic winds launched along them ( _ blandford and payne _ , ; _ reyes - ruiz and stepinski _ , ) .to avoid having to solve the problem of viscosity in detail , but still be able to produce sensible disk models , _ shakura and sunyaev _ ( ) introduced the `` -prescription '' , based on dimensional arguments . in this recipe the vertically averaged viscosity at radius is written as , where is the pressure scale height of the disk and is the isothermal sound speed , both evaluated at the disk midplane where most of the mass is concentrated .the parameter summarizes the uncertainties related to the sources of anomalous viscosity , and is often taken to be of the order of for sufficiently ionized disks . from conservation of angular momentum ,the mass surface density of a _ steady _ disk ( i.e. with a constant mass accretion rate ) , for radii much larger than the disk inner radius , can be written as . 
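the steady-disk relation quoted here , together with the alpha prescription for the viscosity , is easy to evaluate numerically . the sketch below assumes , purely for illustration , a power-law midplane temperature of 280 k at 1 au falling as the square root of radius , a mean molecular weight of 2.3 , alpha = 0.01 and an accretion rate of 1e-8 solar masses per year ; these are typical ctts numbers , not results of the models discussed here .

```python
import numpy as np

G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27
M_sun, AU, yr = 1.989e30, 1.496e11, 3.156e7

def steady_sigma(r_au, mdot_msun_yr=1e-8, alpha=1e-2, m_star=1.0,
                 t_1au=280.0, q=0.5, mu=2.3):
    """surface density (kg m^-2) of a steady alpha-disk, sigma = mdot / (3 pi nu),
    with nu = alpha * c_s * h, h = c_s / omega, and an assumed power-law midplane
    temperature t = t_1au * r_au**(-q)."""
    r = r_au * AU
    t = t_1au * r_au ** (-q)                    # assumed temperature profile
    c_s = np.sqrt(k_B * t / (mu * m_H))         # isothermal sound speed
    omega = np.sqrt(G * m_star * M_sun / r**3)  # keplerian angular velocity
    nu = alpha * c_s * (c_s / omega)            # alpha prescription, h = c_s / omega
    mdot = mdot_msun_yr * M_sun / yr
    return mdot / (3.0 * np.pi * nu)

for r_au in (1.0, 10.0, 100.0):
    sigma = steady_sigma(r_au)
    print(f"r = {r_au:6.1f} au : sigma ~ {sigma * 0.1:7.1f} g cm^-2")  # 1 kg m^-2 = 0.1 g cm^-2
```

for these inputs the surface density comes out near a hundred g cm^-2 at 1 au and falls roughly as 1/r for this temperature law .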
with , where is the keplerian angular velocity, we see that for , as shown by fu ori and ex lupi type outbursts ( _ gammie and johnson _, , and references therein ) .these outbursts can have various triggering mechanisms , such as thermal instability ( _ kawazoe and mineshige _ , ; _ bell and lin _ , ) ; close passage of a companion star ( _ bonnell and bastien _ ; _ clarke and syer _ , ) ; mass accumulation in the dead zone followed by gravitational instability ( _ gammie _ , ; _ armitage et al ._ , ) .disks are therefore quite time - varying , and constant steady disk models should be taken as zeroth - order estimates of the disk structure . given the challenges of understanding the disk viscosity from first principles , attempts have been made to find observational constraints on disk evolution ( _ ruden and pollack _, ; _ cassen _ , ; _ hartmann et al . _ , ; _ stepinski _ , ) .for example , _ hartmann et al . _ ( ) study a large sample of cttss and find a decline in mass accretion rate with time , roughly described as , which they compare to the analytic similarity solutions of _ lynden - bell and pringle _ ( ) for the expanding disk .with the radial structure following from accretion physics , as described above , the next issue is the vertical structure of these disks .many authors have modeled this with full time - dependent 2d/3d ( magneto / radiation- ) hydrodynamics ( e.g. , _ boss _ , , ; _ yorke and bodenheimer _ , ; _ fromang et al ._ , ) .while this approach is obviously , it suffers from large computational costs , and often requires strong simplifying assumptions in the radiative transfer to keep the problem tractable . for comparison to these modelsare therefore less practical .the main objective of the models described in this section is the determination of the density and temperature structure of the disk . for a given surface density , and a given temperature structure ( where is the vertical coordinate measured upward from the midplane ) the vertical density distribution can be readily obtained by integrating the vertical equation of hydrostatics : where with .since the main source of opacity is the dust , most models so far make the assumption that the gas temperature is equal to the dust temperature , the temperature of the disk is set by a balance between heating and cooling .the disk cools by thermal emission from the dust grains at infrared wavelengths .this radiation is what is observed as infrared dust continuum radiation from such disks .line cooling is only a minor coolant , and only plays a role for when gas and dust are thermally decoupled .dust grains can be heated in part by radiation from other grains in the disk .the iterative absorption and re - emission of infrared radiation by dust grains in the disk causes the radiation to propagate the disk in a diffusive way .viscous dissipation of gravitational energy in the disk due to accretion .once the temperature structure is determined , the sed can be computed .the observable thermal emission of a dusty disk model consists of three wavelength regions .the main portion of the energy is emitted in a wavelength range depending on the minimum and maximum temperature of the dust in the disk .we call this the `` energetic domain '' of the sed , which typically ranges from 1.5 m to about 100 m . at shorter wavelengththe sed turns over into the `` wien domain '' . 
at longer wavelengths the sed turns over into the `` rayleigh - jeans domain '' , a steep , nearly powerlaw profile with a slope depending on grain properties and disk optical depth ( ) ., width=302 ] is a perfectly flat disk being irradiated by the star due to the star s non - negligible size ( _ adams and shu _ , ; _ friedjung _ , ) .the stellar radiation impinges onto the flat disk under an irradiation angle ( with the stellar radius ) . neglecting viscous dissipation , the effective temperature of the diskis set by a balance between the irradiated flux ( with the stellar luminosity ) and blackbody cooling , which yields .the energetic domain of its sed therefore has a slope of with , follows from the that any disk with has an sed slope of .observations of cttss , however , show sed slopes typically in the range to 1 ( _ kenyon and hartmann _ , ) , i.e. much less steep .the seds of herbig ae / be stars show a similar picture , with a somewhat larger spread in ._ meeus et al ._ ( , henceforth m01 ) divide the seds of herbig ae / be stars into two groups : those with strong far - infrared flux ( called ` group i ' , having slope ... ) and those with weak far - infrared flux ( called ` group ii ' , having slope ... ) .all but the most extreme group ii sources have a slope that is clearly inconsistent with that of a flat disk .it was recognized by _ kenyon and hartmann _ ( ) that a natural explanation for the strong far - infrared flux ( i.e. shallow sed slope ) of most sources is a flaring geometry of the disk . the flaring geometry a significant portion of the stellar radiation at large radii where the disk is cool , where is the height above the midplane where the disk becomes optically thick to the impinging stellar radiation .a closer look at the physics of an irradiation - dominated disk ( be it flat or flared ) reveals that its surface temperature is generally higher than its interior temperature ( _ calvet et al ._ , ; _ malbet and bertout _ , ; ) . because of the shallow incidence angle of the stellar radiation , the _ vertical _ optical depth of this warm surface layer is very lowas a consequence , the thermal radiation from these surface layers produces dust features in _emission_. this is exactly what is seen in nearly all non - edge - on t tauri and herbig ae / be star spectra ( e.g. m01 ; _ kessler - silacci et al ._ , ) , indicating that these disks are nearly always dominated by irradiation .armed with the concepts of disk flaring and hot surface layers , a number of authors published detailed with direct applicability to observations .the aforementioned cg97 model ( with refinements described in _chiang et al ._ , ) is a two - layer model for the interpretation of seds and dust emission features from _ non - accreting _ ( ` passive ' ) disks ._ lachaume et al . _ ( ) extended it to include viscous dissipation .the models by _dalessio et al ._ ( ) solve the complete 1 + 1d disk structure problem including irradiation and viscous dissipation ( using the prescription ) .the main input parameters are a global ( constant ) mass accretion rate and .the surface density profile is calculated self - consistently . , width=302 ] vertical temperature distribution of an irradiated -disk at 1 au , for a fixed ( chosen to be that of a disk model with for ) , but varying , computed using the models of _ dalessio et al . 
_ ( ) ., width=302 ] models of describe the seds of cttss reasonably well .however , _dalessio et al ._ ( ) argue that they tend to slightly overproduce far - infrared flux and have too thick dark lanes in images of edge - on disks .they also show that the percentage of expected edge - on disks appears to be overpredicted .they suggest that dust sedimentation could help to solve this problem . _chiang et al . _ ( ) find similar results for a subset of their herbig ae / be star sample : the meeus group ii sources ( see also cg97 ) .they fit these sources by dust settling through a reduction of the disk surface height .self - consistent computations of dust sedimentation produce similar seds and confirm the dust settling idea ( _ miyake and nakagawa _, ; _ dullemond and dominik _ , , henceforth dd04b ; _ dalessio et al . _ , ) .the disk thickness and far - infrared flux can also be reduced by grain growth ( _ dalessio et al ._ , ; _ dullemond and dominik _ , ) . from comparing infrared and ( sub-)millimeter spectra of the same sources ( _ acke et al ._ , ) , it is clear that the ( sub-)millimeter usually require mm - sized grains in the outer regions of the disk , while infrared dust emission features clearly prove that the disk surface layers are dominated by grains no larger than a few microns ( see ) .it appears that a bimodal size distribution can fit the observed spectra : the very inner part of the disk is dust - free due to dust sublimation ( see for a discussion of this region ) . the dusty part of the disk can therefore be expected to have a relatively abrupt inner edge at about au for a 50 star ( scaling roughly with ) .if the gas inward of this dust inner rim is optically thin , which mostly the case ( _ muzerolle et al ._ , ) , then this dust inner rim is illuminated by the star at a degree angle , and is hence expected to be much hotter than the rest of the disk behind it . .this is a natural explanation , since dust sublimation occurs typically around 1500 k , and a 1500 k blackbody bump fits reasonably well to the near - infrared bumps in those sources . _tuthill et al . _ ( ) independently discovered a bright half - moon ring around the herbig be star lkha-101 , which they attribute to a bright inner disk rim due to dust sublimation ._ dullemond et al ._ ( ; henceforth ddn01 ) extended the cg97 model to include such a puffed - up rim , and _ dominik et al ._ ( ) showed that the meeus sample of herbig ae / be stars can be reasonably well fitted by this model .however , for meeus group ii sources these fits required relatively small disks ( see , however , section [ subsec-2dtrans ] ) .the initial rim models were rather simplified , treating it as a vertical blackbody ` wall ' ( ddn01 ) ._ isella and natta _ ( ) improved this by noting that the weak dependence of the sublimation temperature on gas density is enough to strongly round off the rim .rounded - off rims appear to be more consistent with observations than the vertical ones : .there is still a worry , though , whether the rims can be high enough to fit sources with a strong near - infrared bump . with near - infrared interferometry the rim can be spatially resolved , and thus the models can be tested .the measurements so far do not yet give images , but the measured ` visibilities ' can be compared to models .in this way one can measure the radius of the rim ( e.g. , _ monnier et al ._ , ; _ akeson et al ._ , ) and its inclination ( e.g. , _ eisner et al ._ , ). 
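the rim radii probed by such interferometric measurements can be compared with a simple radiative-equilibrium estimate of the dust sublimation radius . the sketch below assumes a sublimation temperature of 1500 k and an optically thick , directly illuminated rim ; backwarming , grain-size and gas-opacity effects , which the detailed rim models treat explicitly , can shift the answer by a factor of order two and are reduced here to a single switch .

```python
import math

L_sun, sigma_sb, AU = 3.828e26, 5.670e-8, 1.496e11

def rim_radius_au(l_star_lsun, t_sub=1500.0, optically_thick_wall=True):
    """radius (au) at which illuminated dust reaches t_sub.
    for an optically thick wall re-emitting from its illuminated side,
    sigma*t^4 ~ l/(4 pi r^2); for a single grey grain in equilibrium,
    sigma*t^4 ~ l/(16 pi r^2), i.e. a rim radius smaller by a factor two."""
    denom = (4.0 if optically_thick_wall else 16.0) * math.pi * sigma_sb * t_sub ** 4
    return math.sqrt(l_star_lsun * L_sun / denom) / AU

print(rim_radius_au(50.0))   # ~0.5 au for a 50 l_sun herbig ae star
print(rim_radius_au(1.0))    # ~0.07 au for a solar-luminosity t tauri star
```

the optically thick case gives roughly 0.5 au for a 50 l_sun herbig ae star and 0.07 au for a solar-luminosity star , consistent with the sub-au rim sizes inferred interferometrically for herbig ae stars .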
moreover it can test whether indeed the near - infrared emission comes from the inner rim of the dust disk in the first place ( some doubts have been voiced by _vinkovic et al ._ , ) .we refer to the for a more in - depth discussion of interferometric measurements of disks .the inner rim model has so far been mainly applied to herbig ae / be stars because the rim appears so apparent in the ( nir ) .but _ muzerolle et al . _( ) showed that it also applies to t tauri stars . in that case , however , the luminosity from the magnetospheric accretion shock is required in addition to the stellar luminosity to power the inner rim emission ., width=302 ] , width=302 ] the models described so far are all based on an approximate 1 + 1d ( or two - layer ) irradiation - angle description . in realitythe structure of these disks is 2-d , if axisymmetry can be assumed , and 3-d if it can not . over the last 10 yearsmany multi - dimensional dust continuum radiative transfer programs and algorithms were developed for this purpose ( e.g. , _ whitney et al ._ , ; _ lucy et al ._ , ; _ wolf et al ._ , ; _ bjorkman and wood _ , ; _ nicolinni et al ._ , ; _ steinacker et al ._ , ) .most applications of these codes assume a given density distribution and compute spectra and images .there is a vast literature on such applications which we will not review here ( see ) .but there is a trend to include the self - consistent vertical density structure into the models by iterating between radiative transfer and the vertical pressure balance equation ( _ nomura _ , ; _ dullemond _ , , henceforth d02 ; _ dullemond and dominik _ , , henceforth dd04a ; _ walker et al ._ , ) .although , there is an obvious interest in direct observations of the gas . these disks .moreover , it is important to estimate how long disks remain gas - rich , and whether this is consistent with the formation time scale of gas giant planets ( _ hubickyj et al ._ , ) .unfortunately , gas lines often probe those regions of disks in which the gas temperature is difficult to compute .the disk models we described above assume that the gas temperature in the disk is always equal to the local dust temperature .while this is presumably true for most of the matter deep within optically thick disks , in the tenuous surface layers of these disks ( or throughout optically thin disks ) the densities become so low that the gas will thermally decouple from the dust .the gas will acquire its own temperature , which is set by a balance between various heating- and cooling processes .these processes depend strongly on the abundance of various atomic and molecular species , which , for their part , depend strongly on the temperature .the gas temperature , density , chemistry , radiative transfer and radiation environment are therefore intimately intertwined and have to be studied as a .this greatly complicates the modeling effort , and the first models which study this in detail have only recently been published . on stationary models , i.e. models that are in chemical , thermal and hydrostatic equilibrium .for the tenuous regions of disks the chemical time scales are short enough that this is valid , in contrast to the longer chemical time scales deeper in the disk ( e.g. 
, _ aikawa and herbst _ , ; _ willacy et al ._ , ) .the models constructed so far either solve the gas temperature / chemistry for a _ fixed _ gas density structure , or include the gas density in the computation to obtain a self - consistent thermo - chemical - hydrostatic structure .the physics and chemistry of the surface layers of protoplanetary disks strongly resembles that of photon dominated regions ( pdrs , _ tielens and hollenbach _, ; _ yamashita et al . _ ) . in those the gas temperaturegenerally greatly exceeds the dust temperature . butthe dust - gas coupling gradually takes over the gas temperature balance as one gets deeper into the disk , typically beyond a vertical column depth of , the uppermost surface layer contains mostly atomic and ionized species , since the high uv irradiation effectively dissociates all molecules ( _ aikawa et al . _ , ) .the photochemistry is driven by the stellar irradiation and/or in case of nearby o / b stars , by external illumination . in flaring disk models, the stellar radiation penetrates the disk under an irradiation angle like the one described in the previous section .this radiation gets diluted with increasing distance from the central star and attenuated by dust and gas along an _ inclined _ path into the disk .the stellar radiation therefore penetrates less deep into the disk than external uv radiation .the thermal balance of the gas in disks is solved by equating all relevant heating and cooling processes . for this gas thermal balance equation ,a limited set of key atomic and molecular species is sufficient : e.g. , h , co , oh , h , . for most atoms and molecules , the statistical equilibrium equation has to include the pumping of the fine structure and rotational levels by the cosmic background radiation , from the optical depth of the line ( similar to the approach of _ tielens and hollenbach _ , for pdrs ) .the optical depth used for this escape probability is the line optical depth in the _ vertical _ direction where the photons most readily escape . of the most critical ingredients of these modelsis the uv and x - ray radiation field ( stellar and external ) , in the literature the far ultraviolet radiation field is often represented by a single parameter describing the integrated intensity between 912 and 2000 normalized to that of the typical interstellar radiation field .however , several papers have shown the importance of a more detailed description of the radiation field for calculations of the chemistry and the gas heating / cooling balance ( _ spaans et al . _ , ; _ kamp & bertoldi _ , ; _ bergin et al . _ , ; _ kamp et al . _ , ; ) .for instance , in t tauri stars the radiation field is dominated by strong ly emission , which has consequences for the photodissociation rate of molecules that can be dissociated by ly photons . the photoelectric heating process , on the other hand, depends strongly on the overall shape of the radiation field , which is much steeper in the case of cool stars .a similar problem appears in the x - ray spectra of cool m stars , which are dominated by line emission .induced grain photoelectric heating is often a dominant heating process for the gas in the irradiated surface layers . its efficiency and thus the final gas temperature depends strongly on the grain charge , dust grain size and composition ( pahs , silicates , graphites , ices , etc . 
) .x - rays from the central star the uppermost surface layers , as this subsection focuses on of the optically thick disk at where of the dust surface area per hydrogen nucleus to the interstellar value , which is roughly . contained in the surface layer ( ) is usually small , , , width=302 ] the detailed temperature structure of the surface layers of optically thick young disks was studied for the first time by , , and .[ 100auslice ] shows the vertical structure in a disk model with 0.01 at 100 au around a 0.5 t tauri star .fine structure line cooling of neutral oxygen .molecules can shield themselves from the dissociating radiation .as soon as the fraction of molecular hydrogen becomes larger than 1% , h line cooling .molecular line emission cools the gas down to hundred k before the densities are high enough for gas and dust to thermally couple . to the cooling .instead co , which has a rich rotational spectrum at low temperatures , becomes an important coolant . at largerradii the from the central star drops as well as the . too low for the endothermic destruction of h by o atoms and hence the contains substantial fractions of molecular hydrogen. detailed models of the gas temperature have shown that gas and dust are collisionally coupled at optical depth .thus the basic assumption of the disk structure models presented in the previous section is justified .the pure rotational lines of h such as j = 2 0 s(0 ) [ 28 m ] , j = 3 1 s(1 ) [ 17 m ] , j = 4 2 s(2 ) [ 12 m ] and j = 6 4 s(4 ) [ 8 m ] trace the warm gas ( 100 - 200 k ) in the disks . .the detection of the mid - ir h lines at low spectral resolution ( e.g. , with spitzer ) is hindered by the low line - to - continuum ratio .the mid - infrared line spectra of molecular hydrogen from a t tauri disk model ( , , k , /yr ) with ( solid line ) and without ( dotted line ) uv excess .,width=302 ] , width=302 ] another tracer of the physics in the tenuous surface layers ( see also the chapter by ) .it has been detected in a number of externally illuminated proplyds in the orion nebula as well as in t tauri and herbig ae / be stars . explain by the photodissociation of the oh molecule , which leaves about 50% of the atomic oxygen formed in the upper level of the line .need oh abundances higher than those predicted from disk models to fit the emission from the disks around herbig ae / be stars .reveal the presence ( few 1000 k ) ; hence the [ oi ] line might arise partly from thermal excitation ., the dust grains grow to centimeter sizes and the disks become optically thin .in addition , as we shall discuss , the gas in the disk ultimately disappears , turning the disk into a debris disk .it is therefore theoretically conceivable that there exists a transition period in which the disk has become optically thin in dust continuum , but still contains a detectable amount of gas . measuring the gas mass in such transition disks sets a timescale for the planet formation process .the spitzer legacy science program ` formation and evolution of planetary systems ' ( feps ) has set upper limits on gas masses of around solar - type stars with ages greater than 10 myr ( _ meyer et al ._ , ; _ hollenbach et al ._ , ; _ pascucci et al ._ , ) .several groups have so far these transition phases of protoplanetary disks : modeled the disk structure and gas / dust emission from intermediate aged disks around low - mass stars , , , and modeled the gas chemistry and line emission from a - type stars such as pictoris and vega ._ jonkheid et al . 
_( ) studied the gas chemical structure and molecular emission in the disk around hd141569a .these models are all based on the same physics as outlined above for the optically thick protoplanetary disks .the disks are still in hydrostatic equilibrium , so that the disk structure in these low mass disks is similar to that in the more massive disks with the midplane simply removed .however , some fundamental differences remain : the minimum grain size in these disks is typically a few microns , much larger than in the young protoplanetary disks ; in addition , the dust may have settled towards the midplane , and much of the solid mass may reside in larger particles ( cm ) than can be currently observed .this reduces the grain opacity and the dust - to - gas mass ratio compared to the younger optically thick disks .optically thin to stellar uv and kev x - ray photons . at columns greater than , the gas opacity becomes large enough to shield h and co , allowing significant molecular abundances . for disks extended to 100 au , very little mass ( very roughly ) is needed to provide this shielding .detection can be significantly hampered by the low line - to - continuum ratio ( weak narrow line against the bright dust thermal background ) .these lines generally originate from 110 au . , width=302 ]have shown that beyond 40 au the dominant coolant for the latest tenuous stages of disk evolution is the [ cii ] 158 m line .the fine structure lines of c , o and c trace only the surface of these tenuous disks : . since typical gas temperaturesare higher than in molecular clouds , co lines from the upper rotational levels ( j = 4 3 ) are predicted to be stronger than the lower j lines . recently detected the co j = 3 2 line in hd141569 and disk modeling by _ jonkheid et al ._ ( ) shows that the profile excludes a significant contribution from gas inwards of au and estimate the total gas mass to .the above section has shown that in the surface layers of the disk the gas temperature can become very high , greatly exceeding the dust temperature .the warm surface gas can flow off the disk and escape the gravity of the star .since the heating process responsible for these high temperatures is the _ radiation _ from the central star or a nearby o - star , this process is called `` photoevaporation '' . the viscous evolution ( i.e. accretion and spreading ) of the disk , discussed in section [ sec - viscevol ] ,can be strongly affected by this photoevaporation process .typically , it significantly shortens the ` lifetime ' of a disk compared to pure viscous evolution .photoevaporation can also create inner holes or truncate the outer disk .this has relevance to observations of such disks , such as the percentage of young stars with infrared excess versus their age ( _ haisch et al ._ , ; _ carpenter et al ._ , ) , or the inferred ` large inner holes ' of some disks ( e.g. , _ calvet et al ._ , ; _ bouwman et al ._ , ; _ forrest et al ._ , ; _ dalessio et al ._ , ) .it has also far - reaching consequences for the formation of planets , as we will discuss below .photoevaporation has already been discussed in earlier reviews ( _ hollenbach et al ._ , ; _ hollenbach and adams _ , ; _ richling et al ._ , ) . 
however , these reviews mainly focused on the heating by a nearby massive star ( such as the famous case of the proplyds in orion ) .in contrast , in this section we will exclusively review recent results on photoevaporation by the central star , progress in this field since ppiv has been mostly theoretical , since observations of diagnostic gas spectral lines for the case of photoevaporation by the central , low mass star requires greater sensitivity , spectral resolution , and spatial resolution than currently available .we will , however , discuss the implications for the observed ` inner holes ' and disk lifetimes .photoevaporation results when stellar radiation heats the disk surface and resulting thermal pressure gradients drive an expanding hydrodynamical flow to space .as shown in section [ sec - surface ] the main heating photons lie in the fuv , euv and x - ray energy regimes .x - rays , however , were shown to be of lesser importance for photoevaporation ( _ alexander et al . _ , ) , and we will not consider them further .there are two main sources of the strong euv and fuv excesses observed in young low mass stars : accretion luminosity and prolonged excess chromospheric activity .recent work ( _ alexander et al . _ , ) has shown that euv photons do not penetrate accretion columns , so that accretion can not provide escaping euv photons to power photoevaporation . _alexander et al ._ ( ) present indirect observational evidence that an active chromosphere may persist in t tauri stars even without strong accretion , and that euv luminosities of photons / s may persist in low mass stars for extended ( yrs ) periods to illuminate their outer disks .fuv photons may penetrate accretion columns and also are produced in active chromospheres .they are measured in nearby , young , solar mass stars with little accretion and typically ( with great scatter ) have luminosity ratios or photons / s .euv photons ionize the hydrogen in the very upper layers of the disk and heat it to a temperature of k , independent of radius .fuv photons penetrate deeper into the disk and heat the gas to k , depending on the intensity of the fuv flux , the gas density and the chemistry ( as was discussed in section [ sec - surface ] ) . whether the euv or fuv heating is enough to drive an evaporative flow depends on how the resulting compares to the local escape speed from the gravitationally bound system .a characteristic radius for thermal evaporation is the `` gravitational radius '' , where the sound speed equals the escape speed : early analytic models made the simple assumption that photoevaporation occurred for , and that the warm surface was gravitationally bound for .however , a closer look at the gas dynamics shows that this division happens not at but at about ( _ liffman _, ; _ adams et al . _ , ; _ font et al . _ , ) , and that this division is not entirely sharp .in other words , photoevaporation happens _mostly _ outside of the `` critical radius '' , though a weak evaporation occurs inside of .since these are important new insights since ppiv , we devote a subsubsection on them below . with the critical radius for euv - induced photoevaporation is au . however , there is no fixed because the fuv - heated gas has temperatures that depend on fuv flux and gas density , i.e. 
, on and .therefore , depends on and , and may range from 3 - 150 au for solar mass stars .the evaporative mass flux depends not only on the temperature of the photon - heated gas , but also on the vertical penetration depth of the fuv / euv photons . for euv photons this is roughly set by the strmgren condition that recombinations in the ionized layer equal the incident ionizing flux . neglecting dust attenuation, this penetration column the penetration depth is an important quantity because it sets the density at the base of the photoevaporative flow : the deeper the penetration depth , the higher the density .the flux of outflowing matter is the product of local density and sound speed within this heated layer .this is why the complex surface structure models of section [ sec - surface ] are so important for fuv - driven photoevaporation .for euv - driven photoevaporation , on the other hand , the situation is less complicated , since the temperature in the ionized skin of the disk is independent of and , as long as , where is the bottom of the ionized layer , i.e. the base of the flow .for this simple case , the evaporative mass flux .although fuv - heated layers have lower temperatures than the euv - heated skin they are at higher densities and may equally well initiate the flow and determine the mass flux as euv photons ( see _ johnstone et al ._ , for a similar situation for externally illuminated disks ) .one way to understand why the disk can evaporate at radii as small as is to consider the evaporative flow as a bernoulli flow along streamlines ( _ liffman _, ; _ adams et al . _ , ) .these streamlines initially rise nearly vertically out of the disk and then bend over to become asymptotically radially outward streamlines .the gas at its base lies deep in the gravitational potential . as a simplificationlet us now treat these streamlines as if they are entirely radial streamlines ( ignoring their vertical rise out of a disk ) .then the standard atmospheric solution has a density that falls off from to roughly as exp .the gas flows subsonically and accelerates , as it slowly expands outward , until it passes through a sonic point at ( is the classic parker wind solution for zero rotation ) . for ,the mass flux is reduced considerably by the rapid fall - off of the density from to . for ,the mass flux is roughly given by the density at times the sound speed times the dilution factor that accounts for mass conservation between and : .assuming the same and at all , we see that and that .this demonstrates that for this simplified case , and that even for evaporation is weak , but not zero . in fig .[ 100auslice ] the base of the flow is marked with the large dot ( though that figure shows a static , non - evaporating model with only fuv heating ) . in that figure, is the temperature such that the sound speed equals the escape speed ; is roughly where the photoevaporation flow originates ( i.e. 
, where ) .although central star fuv models are not yet published , several central star euv models have appeared in the literature ._ hollenbach et al ._ ( ) first outlined the essential physics of euv - induced flows by the central star and presented an approximate analytic solution to the mass loss rate for a disk larger than .the basic physics is the strmgren relation , , where is the hydrogen recombination coefficient and is the electron density in the ionized surface gas .this sets the hydrogen nucleus ( proton ) number density at the base of the flow , and therefore an identical proportionality for the mass loss rate : in units of .radiation hydrodynamical simulations ( _ yorke and welz _ , ; _ richling and yorke _ , ) find a similar power - law index for the dependence of the mass - loss rate on the euv photon rate of the central star .this result applies for both high and low mass central stars , and is valid for a weak stellar wind .the effect of a strong stellar wind is such that the ram pressure reduces the scale height of the atmosphere above the disk and the euv photons are allowed to penetrate more easily to larger radii .this increases the mass - loss rate from the outer parts of the disk .it is noteworthy that the diffuse euv field , caused by recombining electrons and protons in the disk s ionized atmosphere inside , controls the euv - induced mass - loss rates ( _ hollenbach et al ._ , ) for disks .this effect negates any potential for self - shadowing of the euv by the disk .let us first assume a disk that does not viscously evolve : it just passively undergoes photoevaporation . for disks with size , the photoevaporation proceeds from outside in .the mass flux rate at is much higher than inside of , because the gas at is least bound .in addition , disk surface densities generally fall with ( see section [ sec - viscevol ] ) .therefore , the disk shrinks as it photoevaporates , and most of the mass flux comes from the outer disk radius .however , for disks with , two types of disk evolution may occur .for euv photons , _ hollenbach et al ._ ( ) showed that the mass flux beyond goes roughly as the timescale for complete evaporation at goes as .as long as does not drop faster than , the disk will evaporate first at , and , once a gap forms there , will then steadily erode the disk from this gap outwards .if , on the other hand , decreases with , then the disk shrinks from outside in as in the case .the photoevaporation by the fuv from the central star has not yet been fully explored , but preliminary work by gh06 suggests that the mass flux in the outer disks around solar mass stars _increases _ with . in this case, the disk evaporates from outside in for most generally assumed surface density laws , which decrease with .now let us consider a disk that is actively accreting onto the star ( see section [ sec - viscevol ] ) .in general , if the photoevaporation drills a hole somewhere in the disk or ` eats ' its way from outside in , the forces of viscous spreading tend to move matter toward these photoevaporation regions , which can accelerate the dissipation of the disk .if the disk has a steady accretion rate , then a gap forms once exceeds . 
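a rough numerical illustration of these scales may help ( the sketch below is not from the works cited above ; every number in it is a representative assumption , including the commonly quoted hollenbach - type normalization of the euv wind rate and the factor of roughly 0.15 relating the critical radius to the gravitational radius ) . it evaluates the gravitational radius for a solar - mass star , the euv mass - loss rate scaling as the square root of the ionizing photon luminosity , and the time at which a toy , viscously declining accretion rate first drops below that wind rate , i.e. the moment the gap criterion above is met :

```python
import numpy as np

# --- assumed, representative values (not taken from the text) ---
G      = 6.674e-8          # cgs
M_sun  = 1.989e33          # g
AU     = 1.496e13          # cm

M_star = 1.0 * M_sun       # stellar mass
c_s    = 1.0e6             # cm/s, ~10 km/s for ~1e4 K ionized gas
Phi    = 1.0e41            # EUV photons / s (assumed chromospheric value)

# gravitational radius: where the sound speed equals the escape speed
r_g = G * M_star / c_s**2
# the flow actually originates near a smaller critical radius, ~0.1-0.2 r_g (assumed 0.15 here)
r_cr = 0.15 * r_g

# commonly quoted EUV photoevaporation rate (Hollenbach-type scaling, Mdot ~ Phi^(1/2));
# the prefactor is an assumed round number, in M_sun / yr
Mdot_wind = 4e-10 * np.sqrt(Phi / 1e41) * np.sqrt(M_star / M_sun)

# toy viscous accretion rate declining as a power law in time (all parameters assumed)
def Mdot_acc(t_yr, Mdot0=1e-7, t0=1e5, p=1.5):
    return Mdot0 * (1.0 + t_yr / t0) ** (-p)      # M_sun / yr

# time at which accretion falls below the wind rate -> gap expected to open
t = np.logspace(4, 8, 400)
i = np.argmax(Mdot_acc(t) < Mdot_wind)

print(f"r_g ~ {r_g / AU:6.1f} AU,  r_cr ~ {r_cr / AU:5.1f} AU")
print(f"Mdot_wind ~ {Mdot_wind:.1e} M_sun/yr")
print(f"gap expected to open after ~ {t[i]:.1e} yr (toy accretion law)")
```

with these assumed numbers the gravitational radius is of order 10 au and the wind rate of order 1e-10 solar masses per year , so the gap criterion is met only after the accretion rate has decayed by a few orders of magnitude , i.e. after a few million years .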
since , the gap first forms at the minimum radius ( ) and then works its way outward ._ clarke et al ._ ( ) presented time - dependent computations of the evolution of disks around low mass stars with photons s .their model combines euv photoevaporation with a viscous evolution code .after to 10 years of viscous evolution relatively unperturbed by photoevaporation , the viscous accretion inflow rates fall below the photoevaporation rates at . at this point ,a gap opens up at and the inner disk rapidly ( on an inner disk viscous timescale of yr ) drains onto the central star or spreads to where it evaporates . in this fashion , an inner hole is rapidly produced extending to .evolution of the surface density of a euv - photoevaporating disk ( figure adapted from _ alexander et al . , _this simulation starts from a given disk structure of about 0.05 ( marked with ` start ' in the figure ) .initially the disk accretes and viscously spreads ( solid lines ) . at photoevaporation starts affecting the disk .once the euv - photoevaporation has drilled a gap in the disk at au , the destruction of the disk goes very rapidly ( dashed lines).the inner disk quickly accretes onto the star , followed by a rapid erosion of the outer disk from inside out .in this model the disk viscosity spreads to au ; however , fuv - photoevaporation ( not included ) will likely truncate the outer disk.,width=302 ] _ alexander et al ._ ( , ) extended the work of _ clarke et al ._ they show that once the inner hole is produced , the diffuse flux from the atmosphere of the inner disk is removed and the attenuation of the direct flux by this same atmosphere is also removed .this enhances the euv photoevaporation rate by the direct euv flux from the star , and the effect magnifies as the inner hole grows as , again derivable from a simple strmgren criterion .the conclusion is that the outer disk is very rapidly cleared once the inner hole forms . the rapid formation of a cleared out inner hole almost instantly changes the nature and appearance of the disk .the above authors near- to mid - infrared color ( in magnitudes ) versus 850 m flux for photoevaporation / viscous evolution models .the data are taken from _hartmann et al . _ ( ) and _ andrews and williams _ ( ) : 850 m detections ( circles ) and upper limits ( triangles ) are plotted for both cttss ( filled symbols ) and wttss ( open symbols ) .evolutionary tracks are shown for models with stellar masses 0.5 ( dashed ) , 1.0 ( solid ) , and 2.0 (dotted ) , at a disk inclination of to the line of sight .the thick tracks to the right and left show the 1 model at and , respectively .crosses are added every 1 myr to show the temporal evolution .initially the ( optically thin ) 850 m flux declines slowly at constant ( optically thick ) infrared color .however , once the viscous accretion rate falls below the photoevaporation rate , the disk is rapidly cleared from the inside - out .( figure adapted from _ alexander et al . _ . ) , width=264 ] _ matsuyama et al ._ ( ) pointed out that if the euv luminosity is created by accretion onto the star , then , as the accretion rate diminishes , the euv luminosity drops and the timescale to create a gap greatly increases .even worse , as discussed above , the euv photons are unlikely to escape the accretion column .only if the euv luminosity remains high due to chromospheric activity does euv photoevaporation play an important role in the evolution of disks around isolated low mass stars . 
_alexander et al ._ ( ) argue this is the case ._ ruden _ ( ) provides a detailed analytic analysis which describes the evolution of disks in the presence of viscous accretion and photoevaporation and compares his results favorably with these two groups . the processes which disperse the gas influence the formation of planets .small dust particles follow the gas flow .if the gas is dispersed before the dust can grow , all the dust will be lost in the gas dispersal and planetesimals and planets will not form . even if there is time for particles to coagulate and build sufficiently large rocky cores that can accrete gas ( _ pollack et al ._ , ; _ hubickyj et al ._ , ) , the formation of gas giant planets like jupiter and saturn will be suppressed if the gas is dispersed before the accretion can occur .furthermore , gas dispersal helps trigger gravitational instabilities that may lead to planetesimal formation ( _ goldreich and ward _, ; _ youdin and shu _ , ; _ throop et al . _ , ) , affects planet migration ( e.g. , _ ward _ , ) and influences the orbital parameters of planetesimals and planets ( _ kominami and ida _ , ) .( ) showed that with photons s , the early sun could have photoevaporated the gas beyond saturn before the cores of neptune and uranus formed , leaving them gas poor . however , this model ignored photoevaporation inside of . the current work by _ adams et al ._ ( ) would suggest rather rapid photoevaporation inside of 10 au , and make the timing of this scenario less plausible .fuv photoevaporation ( either from external sources or from the central star ) may provide a better explanation .preliminary results from gh06 suggest that the early sun did not produce enough fuv generally to rapidly remove the gas in the outer giant planet regions . _ adams et al . _ and _hollenbach and adams _ , ( ) discuss the external illumination case , . a number of observations point to the truncation of kuiper belt objects ( kbos ) beyond about 50 au ( e.g. , _allen , bernstein , and malhotra _, ; _ trujillo and brown _ , ) ._ adams _ _ et al . _ ( ) and _ hollenbach and adams _ ( , )show that photoevaporation by a nearby massive star could cause truncation of kbos at about 100 au , but probably not 50 au .the truncation is caused by the gas dispersal before the dust can coagulate to sizes which survive the gas dispersal , and which can then later form kbos .models of fuv photoevaporation by the early sun are needed . in young disks, dust settles toward the midplane under the influence of the stellar gravity and coagulates .once coagulated dust has concentrated in the midplane , the roughly centimeter - sized particles can grow further by collisions or by local gravitational instability ( _ goldreich and ward _ , ; _ youdin and shu _ , ) .a numerical model by _ throop and bally _ ( ) follows the evolution of gas and dust independently and considers the effects of vertical sedimentation and external photoevaporation .the surface layer of the disk becomes dust - depleted which leads to dust - depleted evaporating flows . 
because of the combined effects of the dust settling and the gas evaporating , the dust - to - gas ratio in the disk midplane is so high that it meets the gravitational instability criteria of _ youdin and shu _ ( ) , indicating that kilometer - sized planetesimals could spontaneously form .these results imply that photoevaporation may even trigger the formation of planetesimals .presumably , photoevaporation by the central star may also produce this effect .in this chapter we have given a brief outline of how disks form and viscously evolve , what their structure is , what their spectra look like in dust continuum and in gas lines , and how they might come to their end by photoevaporation and viscous dispersion .the disk structure in dust and gas is summarized in fig .[ fig - picto ] . evidently , due to the broadness of the topic we had to omit many important issues .for instance the formation of disks is presumably much more chaotic than the simple picture we have discussed . in recent years there is a trend to outfit even the observation - oriented workhorse models with ever more detailed physics .this is not a luxury , since the volume of observational data ( both spectral and spatial ) is increasing dramatically , as shown by various other chapters in this book .for instance , with observational information about dust growth and sedimentation in disks , it will be necessary to include realistic dust evolution models into the disk models . additionally , with clear evidence for non - axial symmetry in many disks ( e.g. , _ fukagawa et al ._ , ) modelers may be forced to abandon the assumption of axial symmetry .the thermal - chemical study of the gas in the disk surface layers is a rather new field , and more major developements are expected in the next few years , both in theory and in the comparison to observations .these new insights will also influence the models of fuv - photoevaporation , and thereby the expected disk lifetime . a crucial step to aim for in the futureis the unification of the various aspects discussed here .they are all intimitely connected together and mutually influence each other .such a unification opens up the perspective of connecting seemingly unrelated observations and thereby improving our understanding of the bigger picture .hollenbach d. and adams f. ( 2005 ) in _ star formation in the interstellar medium _( d. lin , d.johnstone , f. adams , d. neufeld , and e. ostriker , eds . ) , asp conference series , pp .3 , provo : pub . astr . | = 11pt = 0.65 in = 0.65 in |
on a complete probability space , we consider the following stochastic process in defined by where is a standard brownian motion , is a poisson process with intensity independent of , and we denote by the compensated poisson process .the parameters are unknown and and are closed intervals of and , where .let denote the natural filtration generated by and .we denote by the probability law induced by the -adapted cdlg stochastic process starting at , and by the expectation with respect to .let and denote the convergence in -probability and in -law , respectively . for , we consider an equidistant discrete observation of the process which is denoted by , where for , and .we assume that the high - frequency observation condition holds .that is , let denote the density of the random vector under the parameter . for ,set .the aim of this paper is to prove the following lan property .[ theorem ] assume condition .then , the lan property holds for all with rate of convergence and asymptotic fisher information matrix .that is , for all , as , where is a centered -valued gaussian vector with covariance matrix theorem [ theorem ] extends in the linear case and in the presence of jumps the results of gobet in and for multidimensional continuous elliptic diffusions .the main idea of these papers is to use the malliavin calculus in order to obtain an expression for the derivative of the log - likelihood function in terms of a conditional expectation .some extensions of gobet s work with the presence of jumps are given for e.g. in , , and .however , in the present note , we estimate the coefficients and jump intensity parameters at the same time . the main motivation for this paper is to show some of the important properties and arguments in order to prove the lamn property in the non - linear casewhose proof is non - trivial . in particular , we present four important lemmas of independent interest which will be key elements in dealing with the non - linear case . the key argument consists in conditioning on the number of jumps within the conditional expectation which expresses the transition density and outside it .when these two conditionings relate to different jumps one may use a large deviation principle in the estimate .when they are equal one uses the complementary set .within all these arguments the gaussian type upper and lower bounds of the density conditioned on the jumps is again strongly used .this idea seems to have many other uses in the set - up of stochastic differential equations driven by a brownian motion and a jump process .we remark here that a plain it - taylor expansion would not solve the problem as higher moments of the poisson process do not become smaller as the expansion order increases .in this section we introduce the preliminary results needed for the proof of theorem [ theorem ] . 
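before turning to the preliminary results , a small simulation sketch of the observation scheme defined above may help fix ideas . it is not part of the original argument ; the parameter values , the initial value and the sampling regime delta_n = n^(-0.8) are assumptions made only for illustration . it generates the equidistant high - frequency observations of the linear model with jumps and the increments on which the likelihood expansion operates :

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed parameter values (illustration only)
theta, sigma, lam = 1.0, 0.5, 2.0
x0      = 0.0
n       = 10_000
delta_n = n ** (-0.8)   # delta_n -> 0 while n*delta_n grows; the exact high-frequency
                        # condition of the text is not recoverable, this is one common choice

# exact simulation of the increments of X_t = x0 + theta*t + sigma*B_t + (N_t - lam*t):
# over a step of length delta_n the three contributions are independent
drift = theta * delta_n
brown = sigma * np.sqrt(delta_n) * rng.standard_normal(n)
Npois = rng.poisson(lam * delta_n, size=n)
jumps = Npois - lam * delta_n                     # compensated Poisson increment

increments = drift + brown + jumps
X = x0 + np.concatenate(([0.0], np.cumsum(increments)))   # X_{t_0}, ..., X_{t_n}

# naive moment summaries, just to see the information carried by the sample
print("mean increment / delta_n      :", increments.mean() / delta_n, " (close to theta)")
print("fraction of steps with a jump :", np.mean(Npois > 0))
```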
in order to deal with the likelihood ratio in theorem [ theorem ], we will use the following decomposition for each of the above terms we will use a mean value theorem and then analyze each term .we start as in gobet applying the integration by parts formula of the malliavin calculus on each interval ] , and \overset{{\mathrm{p}}}{\longrightarrow } 0 ] , set .applying the markov property and proposition [ prop1 ] to each term in ( [ eq : dec ] ) , we obtain that \ell , \end{split}\ ] ] -\frac{1}{\sigma(\ell ) } \right)d\ell , \end{split}\ ] ] and \ell .\end{split}\ ] ] now using equation , we obtain the following expansion of the log - likelihood ratio where \ell\right ) , \\ \eta_{k , n}&:=\dfrac{v}{\sqrt{n}}\int_0 ^ 1\dfrac{1}{\delta_n}\left(\dfrac{\sigma^2}{\sigma(\ell)^3}\left(b_{t_{k+1}}-b_{t_{k}}\right)^2-\dfrac{\delta_n}{\sigma(\ell)}\right)d\ell,\\ m_{k , n}&:=\dfrac{v}{\sqrt{n}}\int_0 ^ 1\dfrac{1}{\delta_n}\dfrac{1}{\sigma(\ell)^3}\bigg\{\left(\theta\delta_n+\widetilde{n}_{t_{k+1}}^{\lambda}-\widetilde{n}_{t_{k}}^{\lambda}\right)^2 + 2\sigma\left(b_{t_{k+1}}-b_{t_{k}}\right)\left(\theta\delta_n+\widetilde{n}_{t_{k+1}}^{\lambda}-\widetilde{n}_{t_{k}}^{\lambda}\right)\\ & \qquad\qquad-{\mathrm{e}}_{x_{t_k}}^{\theta_n,\sigma(\ell),\lambda_n } \bigg[\left(\theta_n\delta_n+\widetilde{n}_{t_{k+1}}^{\lambda_n}-\widetilde{n}_{t_{k}}^{\lambda_n}\right)^2\\ & \qquad\qquad+2\sigma(\ell)\left(b_{t_{k+1}}-b_{t_{k}}\right)\left(\theta_n\delta_n+\widetilde{n}_{t_{k+1}}^{\lambda_n}-\widetilde{n}_{t_{k}}^{\lambda_n}\right ) \bigg\vert x_{t_{k+1}}^{\theta_n,\sigma(\ell),\lambda_n}=x_{t_{k+1}}\bigg]\bigg\}d\ell , \\ \beta_{k , n}&:=-\dfrac{w}{\sqrt{n\delta_n}}\dfrac{1}{\sigma^2}\left(\sigma\left(b_{t_{k+1}}-b_{t_{k}}\right)+\dfrac{w\delta_n}{2\sqrt{n\delta_n}}-\dfrac{u\delta_n}{\sqrt{n\delta_n}}\right)\\ & \qquad+\dfrac{w}{\sqrt{n\delta_n}}\int_0 ^ 1{\mathrm{e}}_{x_{t_k}}^{\theta_n,\sigma,\lambda(\ell)}\left[\dfrac{\widetilde{n}_{t_{k+1}}^{\lambda(\ell)}-\widetilde{n}_{t_k}^{\lambda(\ell)}}{\lambda(\ell)}\bigg\vert x_{t_{k+1}}^{\theta_n,\sigma,\lambda(\ell)}=x_{t_{k+1}}\right]d\ell,\\ r_{k , n}&:=\dfrac{w}{\sqrt{n\delta_n}}\dfrac{1}{\sigma^2}\int_0 ^ 1\left(\widetilde{n}_{t_{k+1}}^{\lambda(\ell)}-\widetilde{n}_{t_k}^{\lambda(\ell ) } -{\mathrm{e}}_{x_{t_k}}^{\theta_n,\sigma,\lambda(\ell)}\left [ \widetilde{n}_{t_{k+1}}^{\lambda(\ell)}-\widetilde{n}_{t_k}^{\lambda(\ell ) } \bigg\vert x^{\theta_n,\sigma,\lambda(\ell)}_{t_{k+1}}=x_{t_{k+1}}\right]\right)d\ell . \end{split}\ ] ] we next show that the random variables are the terms that contribute to the limit in theorem [ theorem ] , and and are the negligible contributions . 
indeed , using girsanov s theorem and lemma [ lemma7 ], we can show that the conditions of lemma [ zero ] under hold for each term and .that is , [ lemma1 ] assume condition .then , as , \overset{{\mathrm{p}}_x^{\theta,\sigma,\lambda}}{\longrightarrow}-\dfrac{u^2}{2\sigma^2}-\dfrac{v^2}{2}\dfrac{2}{\sigma^2}-\dfrac{w^2}{2\sigma^2}\left(1+\dfrac{\sigma^2}{\lambda}\right)+\dfrac{uw}{\sigma^2}\\ & \sum_{k=0}^{n-1}\left({\mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}^2+\eta_{k , n}^2+\beta_{k , n}^2\vert \mathcal{f}_{t_k}\right]-{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}\vert\mathcal{f}_{t_k}\right]^2-{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\eta_{k , n}\vert \mathcal{f}_{t_k}\right]^2-{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\beta_{k , n}\vert \mathcal{f}_{t_k}\right]^2\right ) \\ & \qquad \qquad\qquad \qquad\overset{{\mathrm{p}}_x^{\theta,\sigma,\lambda}}{\longrightarrow}\dfrac{u^2}{\sigma^2}+2\dfrac{v^2}{\sigma^2}+\dfrac{w^2}{\sigma^2}\left(1+\dfrac{\sigma^2}{\lambda}\right)\\ & \sum_{k=0}^{n-1 } { \mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}^4+\eta_{k , n}^4+\beta_{k , n}^4\vert \mathcal{f}_{t_k}\right]\overset{{\mathrm{p}}_x^{\theta,\sigma,\lambda}}{\longrightarrow}0 \\ & \sum_{k=0}^{n-1}\left({\mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}\eta_{k , n}\vert \mathcal{f}_{t_k}\right]-{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}\vert \mathcal{f}_{t_k}\right]{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\eta_{k , n}\vert \mathcal{f}_{t_k}\right]\right)\overset{{\mathrm{p}}_x^{\theta,\sigma,\lambda}}{\longrightarrow}0\\ & \sum_{k=0}^{n-1}\left({\mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}\beta_{k , n}\vert \mathcal{f}_{t_k}\right]-{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\xi_{k , n}\vert \mathcal{f}_{t_k}\right]{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\beta_{k , n}\vert \mathcal{f}_{t_k}\right]\right)\overset{{\mathrm{p}}_x^{\theta,\sigma,\lambda}}{\longrightarrow}-\dfrac{uw}{\sigma^2}\\ & \sum_{k=0}^{n-1}\left({\mathrm{e}}^{\theta,\sigma,\lambda}\left[\eta_{k , n}\beta_{k , n}\vert \mathcal{f}_{t_k}\right]-{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\eta_{k , n}\vert \mathcal{f}_{t_k}\right]{\mathrm{e}}^{\theta,\sigma,\lambda}\left[\beta_{k , n}\vert \mathcal{f}_{t_k}\right]\right)\overset{{\mathrm{p}}_x^{\theta,\sigma,\lambda}}{\longrightarrow}0 .\end{aligned}\ ] ] finally , lemma [ clt ] applied to concludes the proof of theorem [ theorem ] .second author acknowledges support from the european union programme fp7-people-2012-cig under grant agreement 333938 .third author acknowledges support from the lia cnrs formath vietnam and the program arcus mae / idf vietnam . | in this paper , we consider a linear model with jumps driven by a brownian motion and a compensated poisson process , whose drift and diffusion coefficients as well as its intensity are unknown parameters . supposing that the process is observed discretely at high frequency we derive the local asymptotic normality ( lan ) property . in order to obtain this result , malliavin calculus and girsanov s theorem are applied in order to write the log - likelihood ratio in terms of sums of conditional expectations , for which a central limit theorem for triangular arrays can be applied . 0.5 * rsum * 0.5*la proprit lan pour un modle linaire avec sauts . * dans cet article , nous considrons un modle linaire avec sauts dirig par un mouvement brownien et un processus de poisson compens do nt les coefficients et lintensit dpendent de paramtres inconnus . 
supposant que le processus est observé à haute fréquence , nous obtenons la propriété de normalité asymptotique locale . pour cela , le calcul de malliavin et le théorème de girsanov sont appliqués afin d'écrire le logarithme du rapport de vraisemblances comme une somme d'espérances conditionnelles , pour laquelle un théorème central limite pour des suites triangulaires peut être appliqué . |
the study of difference equations ( des ) is an interesting and useful branch of discrete dynamical systems due to their variety of behaviors and ability to modelling phenomena of applied sciences ( see and the references therein ) .the standard framework for this study is to consider iteration functions and sets of initial conditions in such a way that the values of the iterates belong to the domain of definition of the iteration function and therefore the solutions are always well defined .for example , in rational difference equations ( rdes ) a common hypothesis is to consider positive coefficients and initial conditions also positive , see .+ such kind of restrictions are also motivated by the use of de as applied models , where negative initial conditions and/or parameters are usually meaningless , . + but there is a recent interest to extend the known results to a new framework where initial conditions can be taken to be arbitrary real numbers and no restrictions are imposed to iteration functions .it is in this setting where appears the forbidden set of a de , the set of initial conditions for which after a finite number of iterates we reach a value outside the domain of definition of the iteration function .indeed , the central problem of the theory of des is reformulated in the following way : + _ given a de , to determine the good ( ) and forbidden ( ) sets of initial conditions . for points in the good set , to describe the dynamical properties of the solutions generated by them : boundedness , stability , periodicity , asymptotic behavior , etc . _+ in this paper we are interested in the first part of the former problem : how can the forbidden set of a given de of order be determined ? in the previous literature to describe such sets , when it is achieved , is usually interpreted as to be able to write a general term of a sequence of hypersurfaces in . but those cases are precisely the corresponding to de where it is also possible to give a general term for the solutions .unfortunately there are a little number of des with explicitly defined solutions .hence we claim that new qualitative perspectives must be assumed to deal with the above problem .therefore , the goals of this paper are the following : to organize several techniques used in the literature for the explicit determination of the forbidden set , revealing their resemblance in some cases and giving some hints about how they can be generalized .thus we get a long list of des with known forbidden set that can be used as a frame to deal with the more ambitious problem of describe the forbidden set of a general de .we review and introduce some methods to work also in that case . and finally we propose some future directions of research . + the paper is organized as follows : after some preliminaries , we review the riccati de , which is one of the few examples of de where the former explicit description is possible . as far as we know ,almost all the literature where the forbidden set is described using a general term includes some kind of semiconjugacy with a riccati de .des obtained via a change of variables or topological semiconjugacy are the topic of the rest of the section . 
in the followingwe will discuss how algebraic invariants can be used to transform a given equation into a riccati or linear one depending upon a parameter , and therefore determining its forbidden set .after we will deal with an example , found in , of description where the elements of the forbidden set are given recurrently but in explicit form .we introduce a symbolic description of complex and real points of in section [ sec : symbolic ] , whereupon in section [ sec : qualitative ] we study some additional ways to deal with the forbidden set without an explicit formula .we finalize with a list of open problems and conjectures .+ to avoid an overly exposition , we have omitted some topics as the study of systems of difference equations ( sdes ) , ( see ) , the use of normal forms ( see ) , the systematic study of forbidden sets in the globally periodic case and the importance of forbidden sets in lorenz maps .in this paper we deal with difference equations ( des ) and systems of difference equations ( sdes ) .general definitions of these concepts are the following .+ let be a nonempty set and a nonempty subset of .let be a map .a de of order is the formal expression ( [ eq : deorder1 ] ) represents a set of finite or infinite sequences of , as given , is constructed by recurrence using ( [ eq : deorder1 ] ) , that is , and if then .when this process can be repeated indefinitely then and is named the solution of ( [ eq : deorder1 ] ) generated by the initial condition .+ we call the _ forbidden set ( fs ) _ of ( [ eq : deorder1 ] ) , and _ the good set ( gs ) _ of ( [ eq : deorder1 ] ) . +a de of order is where now is the iteration map , and the fs is defined as in a similar way , we define a sde of order as provided , . + system ( [ eq : sdeorder1 ] ) can be expressed as de of order of type ( [ eq : deorder1 ] ) using the vectorial notation and considering the map whose components are . + finally , a sde of order is defined as a set of equations using maps for . + in vectorial formthe set of equations ( [ eq : sdeorderk ] ) can be rewritten as in ( [ eq : deorderk ] ) with .+ the former definitions depend on how is given the domain of definition of the iteration map . to remark that point ,let s consider and the real de which forbidden set is when is defined as meanwhile is empty when is the natural domain of the constant function . +to avoid degenerated or trival cases , some further restrictions must be imposed to .natural and common restrictions consist usually in regarding which is the domain of definition of the iteration map .therefore , in equation we say that as every solution with is 2-periodic and is not defined when .it is implicitly assumed that .+ we remark that ( [ eq:1overx ] ) is also a de over , the projective line , and then being every solution 2-periodic if using the rules , .+ analogously , the de has in , when ( [ eq:1overx^2 + 1 ] ) is taken over the complex field and is again empty in the projective plane . + in practical applications an undefined zero division means that the denominator belongs to certain neighbourhood of zero .therefore in de as where , are real polynomials , the forbidden set problem could be studied using for a machine - value .+ in applied models where only positive values of the variables have practical meaning , could be .+ as a final example of different ways to consider the set , let s recall the de associated to a lorenz map .let be the unit interval ] b. and c. is topologically expansive , i.e. 
, there exists such that for any two solutions and of ( [ eq : deorder1 ] ) not containing the point , there is some with , and if some of the former solutions contains the inequality remains valid taking or . condition ( iii ) is equivalent to say that the preimages of the point are dense in , or , in the forbidden set notation , to say that .+ it is obvious that can be arbitrarily defined in ( without bilateral continuity ) and in that case . in the standard definitionit is assumed that .+ lorenz maps are an important tool in the study of the lorenz differential equations and the lorenz attractor , and also in the computation of topological entropy in real discontinuous maps .see and the references therein .+ in the following we deal mostly with des where iteration functions are quotients of polynomials , known as rational difference equations ( rdes ) .let s briefly recall some well known results about riccati des .let such that . a riccati de of order is special cases occur for particular values of the parameters . if , or the equation is linear , constant or globally of period respectively ( see ) .if none of those conditions stand , an affine change of variables transforms ( [ eq : riccatiorder1 ] ) into being the riccati number .finally leads to the linear de .the closed form solution of the linear de ables to write a closed form solution for ( [ eq : riccatiorder1_sf ] ) .moreover , there is a correspondence between the solutions of the linear de containing the zero element and the finite or forbidden set generated solutions of ( [ eq : riccatiorder1_sf ] ) .this is the idea behind the characterization of the fs of ( [ eq : riccatiorder1 ] ) that can be found , for example , in . from a topological point of view, that characterization shows that is a convergent sequence or a dense set in or a finite set .moreover , in the last case equation ( [ eq : riccatiorder1_sf ] ) is globally periodic ( see ) .+ analogously , for the second order riccati de we can transform it into a linear one using the change of variables .the forbidden set is then described as ( see ) where coefficients depend on the roots of the characteristic equation . roughly speaking is a countable union of plane hyperbolas convergent to a limit curve in some cases , dense in open sets in other cases and even a finite collection when the de is globally periodic .+ our first proposal ( open problem [ op : qualitativedescriptionriccati ] ) is to clarify which kind of topological objects can be obtained in the riccati de of second order and to generalize to the riccati de of order we claim also ( conjecture [ conj : riccatigloballyperiodic ] ) that de ( [ eq : riccatiorderk ] ) is globally periodic if and only if is a finite collection of hypersurfaces in , generalizating what happens in the case of order 1 ( see ) . to study the higher order riccati de see also . + in the former discussion , it was important to determine the set of initial conditions whose solutions in the linear equation included the zero element .this is called the zero set of the linear equation , .for example , in the riccati de of order , is defined as the set of points such that the solution of the linear de includes the zero element . 
but and depend on the initial condition of ( [ eq : riccatiorder1_sf ] ) via the change of variables .we can write the dependence as , .therefore , can be expressed as generalizing to the riccati de of order ( [ eq : riccatiorderk ] ) , we get where are the terms of the associated linear equation of order .+ note that the semiconjugacy formula has zero as a pole .the forbidden can be regarded as the transformation of certain special set of the linear equation corresponding to the singularities of the semiconjugacy .+ there is an increasing number of works in the recent literature that use the closed form solution of the riccati equation to describe the forbidden set of other rational difference equations .let s review and complete some of them .+ the order de can be transformed into the linear form using the semiconjugacy .the forbidden set is a sequence of planes in .the case and in ( [ eq : abozeid_1 ] ) was developed in , while , and is in .see also for and . + a similar change of variables , , gives the former linear equation when it is applied to the following de of order , see . we propose a generalization of this problem remarking that the following de becomes with ( open problem [ op : abozeid_generalized ] ) .+ in , the change gives the riccati equation when we apply it to the authors explicitly describe as a sequence of plane hyperbolas .+ equation ( [ eq : bajoliz2011 ] ) admits several generalizations .if we use the change of variables applied to the riccati de , we obtain a rde of order whose forbidden set can be described using that of the riccati de . in this casewe have a rde of third degree .a more interesting family of equations , where the degree remains equal to two , is where is a natural number .remark that for we get equation ( [ eq : bajoliz2011 ] ) . using the change of variables , ( [ eq : bajoliz2011_generalized2 ] ) leads to that it is not a riccati difference equation , but can still be reduced to a linear form .indeed , if with an affine change we transform it into we can introduce to get a linear de of order ( see open problem [ op : bajoliz2011_generalized ] ) . + in , it is shown that is a family of straight lines in in the case of equation here we get the riccati form via the change of variables .+ given a possible generalization is since reduce ( [ eq : mcgrath2006_generalized ] ) to that can be linearized to an equation of order as we do for ( [ eq : bajoliz2011_generalized2 ] ) ( open problem [ op : mcgrath2006_generalized ] ) .+ the difference equation of reference becomes with the change .generalizing , in the rde of order is reduced to the riccati form when we use the multiplicative change of variables .+ let s generalize the fs characterization of to the case where are arbitrary complex numbers .if one of them is zero , then equation ( [ eq : shojaei2011 ] ) becomes trivial or globally periodic and is empty or the set of points with at least one zero component .+ if , let .therefore ( [ eq : riccati_shojaei ] ) transforms into the former equation admits the following closed form solution and from here it is easy to give the forbidden set expression . if , then and .if is another root of the unity , then the equation is globally periodic and the forbidden set of the riccati equation is finite : supposing to be the smallest positive integer such that .+ when we apply these results to ( [ eq : shojaei2011 ] ) , we get that that is , a countable union of generalized hyperbolas when is not a root of the unity ( ) or a finite union if . 
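to make the zero - set construction described above concrete , the following sketch ( illustrative only ; the coefficients a , b , c , d are arbitrary choices , and the substitution written here is one standard linearization , possibly differing in detail from the change of variables used in the paper ) computes the first points of the forbidden set of the first - order riccati equation x_{n+1} = ( a x_n + b ) / ( c x_n + d ) . writing x_n = ( w_{n+1} / w_n - d ) / c with w_0 = 1 and w_1 = c x_0 + d , the sequence w solves the linear recurrence w_{n+2} = ( a + d ) w_{n+1} - ( a d - b c ) w_n and is affine in x_0 , so the forbidden initial conditions are exactly the points where some w_k vanishes :

```python
from fractions import Fraction as F

# illustrative coefficients (arbitrary choice); the map is x -> (a*x + b) / (c*x + d)
a, b, c, d = F(1), F(1), F(1), F(2)     # x_{n+1} = (x_n + 1) / (x_n + 2)

def forbidden_points(n_max=15):
    """first points of the forbidden set via the associated linear recurrence.

    with w_0 = 1 and w_1 = c*x0 + d the sequence w_k = A_k*x0 + B_k is affine in x0,
    and x0 is forbidden exactly when some w_k (k >= 1) vanishes."""
    A, B = [F(0), c], [F(1), d]
    pts = []
    for k in range(1, n_max + 1):
        if A[k] != 0:
            pts.append(-B[k] / A[k])
        A.append((a + d) * A[k] - (a * d - b * c) * A[k - 1])
        B.append((a + d) * B[k] - (a * d - b * c) * B[k - 1])
    return pts

def hits_pole(x0, n_max=20):
    """forward check: iterating from x0 eventually reaches the pole -d/c (zero denominator)."""
    x = F(x0)
    for _ in range(n_max):
        if c * x + d == 0:
            return True
        x = (a * x + b) / (c * x + d)
    return False

pts = forbidden_points()
print([str(p) for p in pts[:6]])
print(all(hits_pole(p) for p in pts))   # every listed point indeed hits the pole
```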
+there are a number of rdes for which the closed form solution and fs is given in .those equations can be grouped in three categories : * those obtained from the multiplicative de when we do change of variables as we deal then with rde of order and degree .it is not difficult to obtain the closed form of ( [ eq : rhouma_multiplicative ] ) from which the fs expression is constructed .* rde of order and degree resulting of the introduction of the variable in the general linear de * and another family of rde of order and degree , given in this case by the change applied to the riccati de of order in each of the former cases we propose some generalizations . given a mbius transformation ,it must be possible to explicitly determine the fs of the rde constructed with the change applied to equation ( [ eq : rhouma_multiplicative ] ) or applied to equation ( [ eq : rhouma_linear ] ) .these are the claims of open problems [ op : rhouma_multiplicative ] and [ op : rhouma_linear ] respectively . in the third case, we have that the closed form solution of ( [ eq : rhouma_riccati ] ) and the change of variables able to write moreover , the relationship can be interpreted as a nonautonomous linear de when solved for , that is therefore a general expression for and for can be computed , and of course , the same idea must work for every change of variables where the explicit solution of the nonautonomous equation is known , and for every equation whose closed form is also known ( open problem [ op : rhouma_generalized ] ) .an interesting modification of the former ideas is the use of invariants to describe the fs of some rdes . consider the following example from where .this equation has the following invariant that is , for every solution of ( [ eq : palladino_1 ] ) there exist such that ( [ eq : palladino_1_invariant ] ) holds for every .+ the presence of an invariant ables to write an alternative form of the de . indeed , solving ( [ eq : palladino_1_invariant ] ) for and changing the indices , we get the following riccati de where recall that constant depend on the values of and .this is the basic idea to describe the fs of ( [ eq : palladino_1 ] ) .+ note that formula ( [ eq : palladino_1_invariant ] ) implies the identity from where ( [ eq : palladino_1 ] ) is deduced .+ a generalization of the former remark is the following .let and be two mbius transformations , , , and consider the invariant therefore and from here we get a rde of order whose fs could be described with the former methodology as the invariant ( [ eq : palladino_generalized_invariant ] ) implies also that which is a riccati de of order ( open problem [ op : palladino_generalized ] ) .for example , given and non zero complex numbers , the invariant leads to de and the invariant to de both of them not included in reference .+ also , given , the invariant produces the de that can be studied in the same way because the dependent de is reduced to a linear equation of order using the same changes as in equation ( [ eq : bajoliz2011_generalized2 ] ) . +another example of use of invariants is in .de verifyes that is constant over each solution of ( [ eq : aghajanishouli ] ) .therefore , from the equality the following linear relation is deduced and from here it is easy to give the closed and forbidden set of ( [ eq : aghajanishouli ] ) . 
in particular , the fs is a possible generalization is to consider invariants of the form as every solution will be associated to a linear recurrence ( open problem [ op : aghajanishouli ] ) .+ finally , a general question is to determine which relationship exists between the poles and zeros of an algebraic invariant , from one side , and the elements of its associated de , from other .one of the oldest examples in the literature concerning the forbidden set problem is in .let be a real number , and consider the following rde of order map is the unfolding of ( [ eq : camouzisdevault ] ) as we have that , .let be its inverse map , and .therefore this is an obvious characterization of the fs as the set of inverse orbits of poles of the iteration map that can be improved as follows .let s consider the subset and let be the sequence of functions defined inductively using the operator in the following way therefore we have [ th : theforbiddenset_cdv ] let be the subset of the forbidden set of ( [ eq : camouzisdevault ] ) defined by .then in general , given a rde we can describe its fs as a set of implicitly defined hypersurfaces . in the second order case we get a set of implicitly defined curves .for example , in the case of pielou s equation we can consider the unfolding map such that . by iterating map we get from where the following first forbidden curves are deduced ( see figure [ fg : pielou ] ) : {pielour1.eps } & \includegraphics[width=0.45\textwidth]{pielour2.eps}\\ \includegraphics[width=0.45\textwidth]{pielourm1.eps } & \includegraphics[width=0.45\textwidth]{pielourm2.eps } \end{array} ] , and converges either to or to a -cycle .b. suppose that is an increasing map , , and .then is contained in and converges monotonically to .[ th : sedaghat_monotonicmapwithpole ] remark that there are some cases not included in this result .when is increasing and , cobweb diagrams show that is a subset of ] theorem [ th : sedaghat_monotonicmapwithpole ] applies not only to equation ( [ eq : sedaghat_qualitative_1 ] ) , but also to equations as : + and , in general to des of the form where and is a bijection such that .+ the fs of ( [ eq : sedaghat_qualitative_2 ] ) is given in in terms of the values of and .we propose to make a similar study for the family of de where is an odd rational and is the mbius transformation with ( open problem [ op : sedaghat_qualitative_1_generalized ] ) .+ in the case of equation ( [ eq : sedaghat_qualitative_2 ] ) , in there is a description of the fs curves in an explicit form using a recurrent algorithm , in a similar way as it was made in section [ sec : fs_explicitform ] .+ additional examples of estimative approaches where the fs is graphically represented can be found in for the following des in a similar way , we can estimate the fs of for differents values of parameters and .see figure [ fg : devaultgalminasjanowskiladas ] and open problem [ op : devaultgalminasjanowskiladas ] .{fsdevaultetala1b5.eps } & \includegraphics[width=0.45\textwidth]{fscurvesdevaultetala1b5.eps}\\ \includegraphics[width=0.45\textwidth]{fsdevaultetala1b1.eps } & \includegraphics[width=0.45\textwidth]{fscurvesdevaultetala1b1.eps}\\ \includegraphics[width=0.45\textwidth]{fsdevaultetalam1bm1.eps } & \includegraphics[width=0.45\textwidth]{fscurvesdevaultetalam1bm1.eps } \end{array}$ ]clarify which kind of topological objects can be obtained in the second order riccati de and generalize to the riccati de of order ( [ eq : riccatiorderk ] ) .[ op : qualitativedescriptionriccati ] de ( [ eq : 
riccatiorderk ] ) is globally periodic if and only if is a finite collection of hypersurfaces in .[ conj : riccatigloballyperiodic ] describe the fs of rde ( [ eq : abozeid_generalized ] ) .[ op : abozeid_generalized ] describe the fs of rdes ( [ eq : bajoliz2011_generalized1 ] ) and ( [ eq : bajoliz2011_generalized2 ] ) .[ op : bajoliz2011_generalized ] describe the fs of rdes ( [ eq : mcgrath2006_generalized ] ) .[ op : mcgrath2006_generalized ] let .describe the fs of the rdes obtained by applying the change to de ( [ op : rhouma_multiplicative ] ) .[ op : rhouma_multiplicative ] let .describe the fs of the rdes obtained by applying the change to de ( [ op : rhouma_linear ] ) .[ op : rhouma_linear ] let a de of order such that its closed form is known .let a change of variables that can be rewriten as a nonautonomous de with coefficients depending of and such that its closed form solution is also known .determine the forbidden set of the de obtained by applying the change of variables to .[ op : rhouma_generalized ] describe the fss of des ( [ eq : palladino_generalized_equation ] ) and ( [ eq : palladino_generalized_equation_order_k ] ) having algebraic invariants .[ op : palladino_generalized ] let and be distint natural numbers greater than . determine the closed form solution and the forbidden set of des admiting an invariant of the form ( [ eq : aghajanishouli_invariant ] ) .[ op : aghajanishouli ] for every rde of order , determine a region such that the forbidden curves in can be given in explicit form , and determine them .[ op : explicitcurves ] find the symbolic description of and for the families of rdes ( [ eq : inverseparabola ] ) and ( [ eq : inverselogistics ] ) .[ op : symbolicdescription ] let an almost local diffeomorphism such that has lebesgue measure zero .suppose that equation ( [ eq : rubiomassegu_map ] ) is globally periodic .obtain sufficient conditions in order that is closed .[ op : rubiomassegu ] let be a rational function and its natural domain .assume that depends effectively on its last variable .then the good set of ( [ eq : rubiomassegu_de ] ) is open if , and only if , the equation is globally periodic .[ conj : rubiomassegu ] to obtain necessary conditions in order to have that an equilibrium point of an autonomous difference equation belongs to the interior of the good set .[ op : rubiomassegu_equilibriumintheinterior ] to obtain necessary conditions in order to have that the good set of a difference equation is open .[ op : rubiomassegu_opengoodset ] let be a monotonic map with pole . complete the results of theorem [ th : sedaghat_monotonicmapwithpole ] , in the sense of describing qualitatively the fs of de when is an increasing map with no fixed point .in particular , answer to the following questions : 1 . in which cases is finite ? 
2 . can be dense outside a compact set ?( see ) . [ op : sedaghat_densefs ] determine the fs of de ( [ eq : devaultgalminasjanowskiladas ] ) . [ op : devaultgalminasjanowskiladas ] determine the fs of de ( [ eq : sedaghat_qualitative_1_generalized ] ) by applying theorem [ th : sedaghat_monotonicmapwithpole ] in terms of , , and . [ op : sedaghat_qualitative_1_generalized ] as we have seen throughout the paper , there are many des with closed - form solutions and explicit forbidden sets . there are many ways to generalize them , and therefore the literature on this subject is growing with more and more works devoted to that kind of de . although those works have the value of enriching the set of known examples , maybe it is time to put some order in the field . we propose to elaborate a database of des and sdes including , among others , the following topics : the forbidden and good sets , the asymptotic behavior , the boundedness of solutions , the relationships with other des or sdes , and the applied models related with them . this paper has been partially supported by grant mtm2008 - 03679 from ministerio de ciencia e innovación ( spain ) , project 08667/pi-08 fundación séneca de la comunidad autónoma de la región de murcia ( spain ) . + we thank the anonymous referees for their suggestions and comments . | in recent literature there is an increasing number of papers in which the forbidden sets of difference equations are computed . we review and complete different attempts to describe the forbidden set and propose new perspectives for further research and a list of open problems in this field . |
diverse fields of research , including neurophysiology , sociology , geophysics , and economics , consider heavy tails and scale - free distributions as a signature of complexity . the brain , as a complex dynamic system , and brain activities including neuronal membrane potentials , noninvasive electroencephalography ( eeg ) , magnetoencephalography ( meg ) , and functional magnetic resonance imaging ( fmri ) signals observed over many spatiotemporal patterns exhibit power - law behavior . synchronized large - amplitude bursts , or neural avalanches , are power - law distributed , and the widely varying profiles of neural avalanche distributions in size and duration are described by a single universal scaling exponent , , in . a power - law distribution may suggest that neural networks operate near a non - equilibrium critical point where a phase transition occurs . however , power laws alone are not sufficient to establish criticality , and non - critical systems may also produce power laws , implying that many possible mechanisms generate power - law distributions . in , the authors argue that the power laws observed in avalanche size distributions are actually artifacts , and in an _ in vivo _ study , no clear evidence of power - law scaling or self - organized critical states was found in the awake and sleeping brain of mammals ( monkey , cat and human ) . according to the popular view of per bak , criticality is realized spontaneously by complex systems rather than requiring the fine tuning of a control parameter to a critical value . they proposed the term self - organized criticality ( soc ) for an operation of the system at criticality that generates the power - law behavior observed in natural phenomena . the critical point sets a boundary between an ordered and a less ordered state with different scaling behaviors . evidence for criticality in the brain has been found in _ in vitro _ experiments , animals , and humans . in our previous study , using the leaky integrate - and - fire model ( lifm ) , we have shown that _ temporal complexity _ is a robust indicator of criticality with which a critical point is introduced . we found that temporal complexity uncontaminated by periodicity occurs in a narrow range of values of the control parameter , in correspondence of which the information transfer from one network to another becomes maximal . we argued that if this enhancement of information transport is interpreted as a signature of criticality , then power - law avalanches are a manifestation of super - criticality rather than criticality . this led us to conclude that if a power - law distribution of avalanches is not found in the brain of mammals , that would not conflict with the hypothesis that the brain works at criticality . so how is temporal complexity connected to power laws ? if we consider an event in a system , then the waiting time distribution densities , , have the time - asymptotic form it was proposed that the cumulative distribution function ( cdf ) is much more accurate for fitting the power - law exponents , as well as for identifying whether the system obeys a power law . hence the corresponding cumulative distributions , or survival probabilities , are defined as thus the corresponding cdf also behaves as a power law , but with a smaller exponent .
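as a small numerical illustration of these definitions ( not taken from the paper ; the waiting - time sample is synthetic and the exponent is an assumed value ) , the sketch below draws waiting times with density exponent mu , builds the empirical survival probability , and recovers the exponent mu - 1 from the slope of its tail on log - log scales :

```python
import numpy as np

rng = np.random.default_rng(1)

mu, T0, n = 2.2, 1.0, 200_000          # assumed density exponent psi(tau) ~ tau^(-mu)

# Pareto-type waiting times: survival P(tau > t) = (T0 / t)^(mu - 1) for t > T0
u   = rng.random(n)
tau = T0 * u ** (-1.0 / (mu - 1.0))

# empirical survival probability Psi(t) = P(tau > t)
t_sorted = np.sort(tau)
Psi = 1.0 - np.arange(1, n + 1) / n

# fit the tail slope on log-log scales (excluding the noisy extreme tail)
mask = (t_sorted > 10 * T0) & (Psi > 1e-4)
slope, _ = np.polyfit(np.log(t_sorted[mask]), np.log(Psi[mask]), 1)

print(f"density exponent mu      : {mu}")
print(f"fitted survival exponent : {-slope:.2f}   (expected mu - 1 = {mu - 1:.2f})")
```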
in this work , we show that by replacing the discrete noise with a continuous gaussian noise in the model , avalanche probability distributions display power - law behavior at the criticality indicated by temporal complexity and phase transition . the model used in the previous work and here is the popular leaky integrate - and - fire model . where is the membrane potential , is the membrane time constant of the neuron , is proportional to a constant input current , is the control parameter and describes the adjacency matrix . note that where is the total number of neurons , is the time of firing of neuron , and is the time lag from the firing of neuron to the time at which neuron receives the cooperation . each neuron starts from a random value or zero and fires when it reaches the threshold . when a neuron fires , it forces all the neurons linked to it to make a step ahead or backward by the quantity according to whether ( excitatory ) or ( inhibitory ) . the parameter plays the all - important role of control parameter and is expected to generate criticality when the special value , to be determined using a suitable criticality indicator , is adopted . after firing , each neuron jumps back to the rest state . when , the vanishing noise condition yields the following expression for the time distance between two consecutive firings of the same neuron . if , after a few time steps , all the neurons fire at the same time , thereby generating a sequence of quakes of intensity with the time period given by equation . the parameter is the standard deviation of the noise , which can also be considered as the noise intensity . in the previous solution of equation , we adopted the integration time step and treated as a discontinuous random fluctuation taking , with equal probability , either the value or . here , however , we treat time continuously and consider to be a continuous gaussian white noise with zero mean and unit variance , defined by to reduce the number of parameters , we define the dimensionless time variable . in this way , setting the last term aside , equation can be rewritten in the following form in which is the dimensionless gaussian noise with zero mean and unit variance . to numerically integrate the stochastic differential equation , we use the itô interpretation , which is $ x_i({\cal t}+d{\cal t } ) = x_i({\cal t } ) + [ \,\cdots\ , ] \ , d{\cal t}+ \frac{\sigma}{\sqrt\gamma } \eta_i ( { \cal t } ) \sqrt{d{\cal t } } $ , where the bracketed term denotes the deterministic drift of the dimensionless equation above . here , we choose , , , and . we assume that the neurons reside on the nodes of a two - dimensional square lattice with periodic boundary conditions and size , where is the linear size of the lattice . the adoption of periodic boundary conditions ensures the total equivalence of the cooperating units , so as to avoid the doubt that the onset of firing bursts may be triggered by units with a favorable topology . however , numerical calculations not reported here show that the adoption of periodic boundary conditions is not crucial for the results of this paper .
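a minimal numerical sketch of this integration scheme is given below . it is not the authors ' code : several parameter values are not recoverable from the text , so the threshold , drift , noise amplitude , coupling , lattice size and time step used here are assumptions , and the specific drift ( drive - x ) is the standard leaky form assumed for the stripped drift term . the sketch performs euler - itô updates in the dimensionless time , delivers excitatory kicks to the four lattice neighbours of every firing neuron , and resets fired neurons to the rest state :

```python
import numpy as np

rng = np.random.default_rng(2)

# --- assumed parameters (illustrative; not the values of the paper) ---
L      = 20          # linear lattice size, N = L*L neurons
drive  = 1.05        # dimensionless drive toward (and past) the threshold
sigma  = 0.05        # effective noise amplitude sigma/sqrt(gamma)
K      = 0.02        # cooperation strength (excitatory kick per received firing)
thr    = 1.0         # firing threshold
dT     = 1e-3        # dimensionless integration step
steps  = 200_000

N = L * L
x = rng.random(N)                       # random initial membrane potentials in [0, 1)

# nearest neighbours on the periodic square lattice
idx = np.arange(N).reshape(L, L)
neigh = np.stack([np.roll(idx, 1, 0), np.roll(idx, -1, 0),
                  np.roll(idx, 1, 1), np.roll(idx, -1, 1)]).reshape(4, N)

firing_count = np.zeros(steps, dtype=int)    # number of neurons firing per step

for t in range(steps):
    # Euler-Ito step of the leaky dynamics: dx = (drive - x) dT + sigma sqrt(dT) eta
    x += (drive - x) * dT + sigma * np.sqrt(dT) * rng.standard_normal(N)

    # threshold crossing, cooperative kicks to the neighbours, and reset;
    # neurons pushed over threshold by a kick fire on the next step
    fired = x >= thr
    if fired.any():
        kicks = np.zeros(N)
        np.add.at(kicks, neigh[:, fired].ravel(), K)
        x += kicks
        x[fired] = 0.0                   # reset to the rest state
        firing_count[t] = fired.sum()

np.save("firing_count.npy", firing_count)   # saved for a later avalanche analysis
print("mean firing rate per neuron per unit dimensionless time:",
      firing_count.sum() / (N * steps * dT))
```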
here , we show that temporal complexity detects a critical point where it is poised between a random and a regular state . we then examine whether the avalanche distributions in size and duration follow a power - law exponent as found in . using scaling theory that expresses a relation between the exponents , we confirm that the neural avalanche data collapse onto the critical exponent suggested by the temporal complexity approach . ( caption of fig . [ fig : f1 ] : ( a ) versus the dimensionless time , for three different cooperation parameters with , , in a lattice ; ( b ) variation of the power index , and ( c ) scale of the stretched exponential , , defined by eq . , with the cooperation parameter ; notice the abrupt change in . ) in this section , we explore the temporal complexity of the lifm , given by , by calculating the cumulative survival probability , , which indicates the probability that no firing occurs up to the time from an earlier firing . in the absence of cooperation , the dynamics is a poissonian process defined by an exponential according to . cooperation generates scale invariance ; hence , increasing leads to a transition from the exponential form of equation to at criticality , with so small as to make equation virtually equivalent to the inverse power law of equation over the available time scale . the time dependence of for different values of the cooperation parameter is plotted in fig . [ fig : f1].(a ) for a lattice of linear size . as shown by fig . [ fig : f1].(a ) , we find instead that for small values of , the survival probability is identical to the mittag - leffler ( ml ) function . the ml function settles a bridge between the stretched exponential and inverse power - law survival probabilities , two important signs of complexity . in fact , in the time region the ml survival probability is described by the stretched exponential function , and in the large time region by the power law of eq . . the index of the stretched exponential function of equation is identical to the power index of the inverse power law of equation . we show that the ml function of equation is the visible manifestation of a hidden survival probability with the inverse power law form of equation , which is thought to be a signature of complexity . ( caption of fig . [ fig : f2 ] : versus the noise intensity , , for , and . ) fig . [ fig : f1].(a ) shows how the neural network deviates from an exponential to a regular behavior as the control parameter , , changes . in order to find the critical point , we fit the survival probability functions with the ml function . the corresponding fractal exponents of each curve , , and , are plotted in figs . [ fig : f1].(b ) and ( c ) , respectively .
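the interpolation property of the ml survival function described above can be checked numerically . in the sketch below ( illustrative ; the index alpha and the time scale are assumed values , and the truncated power series is only reliable for moderate arguments ) , the ml survival function is compared with its stretched - exponential and inverse - power - law asymptotic forms :

```python
import numpy as np
from math import gamma

def mittag_leffler(z, alpha, n_terms=120):
    """truncated series E_alpha(z) = sum z^k / Gamma(1 + alpha*k);
    adequate for moderate |z|, it loses accuracy for large negative arguments."""
    return sum(z ** k / gamma(1.0 + alpha * k) for k in range(n_terms))

alpha, T = 0.85, 1.0                     # assumed complexity index and time scale
t = np.array([0.05, 0.2, 1.0, 3.0, 8.0])

ml        = np.array([mittag_leffler(-(ti / T) ** alpha, alpha) for ti in t])
stretched = np.exp(-(t / T) ** alpha / gamma(1.0 + alpha))   # small-t behaviour
powerlaw  = (T / t) ** alpha / gamma(1.0 - alpha)             # large-t behaviour

for row in zip(t, ml, stretched, powerlaw):
    print("t = {:5.2f}   ML = {:7.4f}   stretched-exp = {:7.4f}   power-law = {:7.4f}".format(*row))
```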
as can be seen from the figure, a region represents a phase transitionary behavior that indicate criticality around a critical point of .the model of the current paper is a generalization of the model proposed by mirollo and strogatz in which in the absence of noise , full synchronization is achieved after few steps in the absence of noise .however , here , adding a noise to the model in order to generate temporal complexity along with an adjustable coupling parameter , , a competition is set between these two parameters .we have explored how coupling has to be adjusted to the noise to maintain the criticality and phase transition .hence , we run the model for different values of and arrive at fig . [ fig : f2 ] . as can be inferred from this figure , the critical coupling , approximately has a linear dependence on the noise intensity , and as we increase the noise in the system , cooperation has to level up in order to maintain the system at criticality . in the previous work, we confirmed the criticality by aging experiment and information transfer . here , however , we encourage readers to see for further reading , and focus on neural avalanches . hence , we rely on previous results and take as indicated by fig.[fig : f2 ] ( b and c ) , and ask if avalanche data including avalanche size , duration and temporal profile fall within soc predictions according to ., ( b ) avalanche duration .( c ) scaling of the conditional average of avalanches with duration .the results are obtained using at and the lattices with the linear size ., title="fig:",height=207 ] , ( b ) avalanche duration .( c ) scaling of the conditional average of avalanches with duration .the results are obtained using at and the lattices with the linear size ., title="fig:",height=207 ] , ( b ) avalanche duration .( c ) scaling of the conditional average of avalanches with duration .the results are obtained using at and the lattices with the linear size ., title="fig : " ] in our previous study , we did not observed the coincidence of power - law distribution of neural avalanches at criticality indicated by temporal complexity .this finding may cast doubt on the operation of the brain near criticality ; however , we hypothesized that temporal complexity as a promising detector of criticality may suffice . yet , considering many inconsistent results reported from recordings in awake animals and humans , we did not cease our attempt to find the source of presumptive conflict . in this section , we calculate the size and duration of the neuronal avalanches creating as the results of our lifm . for this purpose , we count the number of firing neurons in time bins of simulation steps ( ) .an avalanche is recorded whenever a burst of neurons is followed by quiescent duration of minimum steps .the number of neurons fired during this active region is called the avalanche size , , and the duration of this activity is called the avalanche duration . as predicted by renormalization group theory ,avalanche data collapses onto universal scaling functions near a critical point and follow : where and are the probability density function of the avalanche size and duration , respectively . is the average of avalanche size conditioned on a given duration . , and are the critical exponents of the system and are independent of the details of the model or system .the scaling theory requires the following relation between the exponents mean field theory predicts and based on soc . 
the results of our simulation for three lattice sizes , and at are illustrated in fig.[fig : f3 ] . in this figure , panels ( a ) and ( b ) show the scaling of the cumulative probabilities of the avalanche size ( ) and avalanche duration ( ) , respectively , and panel ( c ) is devoted to the scaling of conditional average avalanche size , in terms of the duration . according to equation , and scales as for the critical point suggested by temporal complexity ( for the noise intensity ) ,we found the following exponents : , , and which satisfy the scaling relation , equation .these exponents are in good agreement with the experimental values obtained recently .this is another proof that the critical point suggested by temporal complexity is in fact the system s critical point .it is notable that we have considered throughout the text , and as illustrated in fig .[ fig : f2 ] , increasing noise , , comply increasing , in order to keep these results consistence .here , by applying two modifications to our previous model , we explored if at the critical point indicated by temporal complexity , avalanche data collapses onto universal scaling function as predicted by the theory of dynamic critical phenomena .our results provide compelling evidence that temporal complexity and neural avalanches both are consistent and robust indicators of criticality at which information processing , information storage , dynamic response , and computation is maximized .we emphasize that the exponents of avalanche data collapses on scaling exponents that are model independent and identical for all systems in the same university class . on the other hand ,the results of the paper indicate that a simple model using a regular lattice successfully sheds light into cooperation - induced criticality in neural system and establishes a connection between criticality and neural avalanches .therefore , we conclude that neural avalanches are indicators of criticality .the authors gratefully acknowledge financial support from iranian cognitive science and technologies council through grant no.2960 .j. laherr , d. sornette , eur .j. b * 2 * , 525 ( 1998 ) .t. petermann , t. thiagarajan , m. lebedev , m. nicolelis , d. chialvo , and d. plenz , proc.natl.acad.sci.u.s.a .* 106 * , 15921 ( 2009 ) . c. meisel , a. storch , s. hallmeyer - elgner , e. bullmore , and t. gross ( 2012 ) . ploscomput.biol . * 8 * , e1002312 ( 2012 ) .a. politi a , and s. luccioli , `` _ dynamics of networks of leaky - integrate - and - fire neurons network science _ '' ed e estrada , mfox , dj higham and g - l oppo ( berlin : springer , 2010 ) .note that the ml function is intimately related to the fractional calculus , widely used in different disciplines , ranging from physics to biology and to economics , and consequently is thought to be a important theoretical tool for the field of complexity . | neural avalanches in size and duration exhibit a power law distribution illustrating as a straight line when plotted on the logarithmic scales . the power - law exponent is interpreted as the signature of criticality and it is assumed that the resting brain operates near criticality . however , there is no clear evidence that supports this assumption , and even there are extensive research studies conflicting one another . 
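because the exponent symbols and the scaling relation are not reproduced in the extracted text, the following check assumes the standard crackling-noise convention, p(s) ~ s^-tau, p(t) ~ t^-alpha and <s|t> ~ t^gamma with (alpha - 1)/(tau - 1) = gamma at criticality; it uses a crude log-log least-squares fit purely for illustration (maximum-likelihood fitting is preferable for real avalanche data), and all names are ours:

import numpy as np

def powerlaw_slope(x, y):
    # crude log-log least-squares slope; clauset-style maximum-likelihood fitting
    # is preferable for real avalanche data, this is only for illustration
    m = (x > 0) & (y > 0)
    return np.polyfit(np.log(x[m]), np.log(y[m]), 1)[0]

def check_scaling_relation(sizes, durations, nbins=30):
    # assumed convention: P(S) ~ S^-tau, P(T) ~ T^-alpha, <S|T> ~ T^gamma,
    # with the crackling-noise relation (alpha - 1)/(tau - 1) = gamma at criticality
    s_edges = np.logspace(0, np.log10(sizes.max()), nbins)
    p_s, _ = np.histogram(sizes, bins=s_edges, density=True)
    tau = -powerlaw_slope(s_edges[:-1], p_s)
    t_edges = np.logspace(0, np.log10(durations.max()), nbins)
    p_t, _ = np.histogram(durations, bins=t_edges, density=True)
    alpha = -powerlaw_slope(t_edges[:-1], p_t)
    t_vals = np.unique(durations)
    mean_s = np.array([sizes[durations == t].mean() for t in t_vals])
    gamma = powerlaw_slope(t_vals.astype(float), mean_s)
    return {"tau": tau, "alpha": alpha, "gamma": gamma,
            "(alpha-1)/(tau-1)": (alpha - 1.0) / (tau - 1.0)}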
the model of the current paper is an extension of a previous publication wherein we used an integrate-and-fire model on a regular lattice with periodic boundary conditions and introduced temporal complexity as a genuine signature of criticality . however , in that model the power-law distribution of neural avalanches was a manifestation of super-criticality rather than criticality . here , in contrast , we show that replacing the discrete noise in the model with a gaussian noise and adopting a continuous-time solution of the equation leads to the coincidence of temporal complexity and the spatiotemporal patterns of neural avalanches at a control parameter which is assumed to be the critical value of the model . |
here we present an algorithm to compute projections of channels onto exponential families of fixed interactions .the decomposition is geometrical , and it is based on the idea that , rather than joint distributions , the quantities we work with are channels , or conditionals ( or markov kernels , stochastic kernels , transition kernels , stochastic maps ) .our algorithm can be considered a channel version of the iterative scaling of ( joint ) probability distributions , presented in .exponential and mixture families ( of joints and of channels ) have a duality property , shown in section [ fam ] . by fixing some marginals, one determines a mixture family . by fixing ( boltzmann - type ) interactions, one determines an exponential family .these two families intersect in a single point , which means that ( theorem [ dualmk ] ) _ there exists a unique element which has the desired marginals and the desired interactions_. as a consequence , theorem [ dualmk ] translates projections onto exponential families ( which are generally hard to compute ) to projections onto fixed - marginals mixture families ( which can be approximated by an iterative procedure ) .section [ algo ] explains how this is done .projections onto exponential families are becoming more and more important in the definition of measures of statistical interaction , complexity , synergy , and related quantities .in particular , the algorithm can be used to compute decompositions of mutual information , as for example the ones defined in and , and it was indeed used to compute all the numerical examples in .another application of the algorithm is explicit computations of complexity measure as treated in , , and .examples of both applications can be found in section [ applic ] .for all the technical details about the iterative scaling algorithm in its traditional version , we refer the interested reader to chapters 3 and 5 of .all proofs can be found in the appendix .we take the same definitions and notations as in , except that we let the output be multiple .more precisely , we consider a set of input nodes , taking values in the sets , and a set of output nodes , taking values in the sets .we write the input globally as , and the output globally as .we denote by the set of real functions on , and by the set of probability measures on .let and .we call the space of functions which only depend on and : we can model the channel from to as a markov kernel , that assigns to each a probability measure on ( for a detailed treatment , see ) . herewe will consider only finite systems , so we can think of a channel simply as a transition matrix ( or stochastic matrix ) , whose rows sum to one . the space of channels from to will be denoted by .we will denote by and also the corresponding random variables , whenever this does not lead to confusion .conditional probabilities define channels : if and the marginal is strictly positive , then is a well - defined channel .viceversa , if , given we can form a well - defined joint probability : to extend the notion of divergence from probability distributions to channels , we need an `` input distribution '' : let , let . then : let be joint probability distributions on , and let be the kl - divergence . 
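in the finite setting used throughout, a channel is just a row-stochastic matrix, and the two conversions recalled above (conditioning a joint with strictly positive input marginal, and multiplying a channel by an input distribution) take one line each; the helper names below are ours, written as a minimal numpy sketch:

import numpy as np

def channel_from_joint(pxy):
    # k(y|x) = p(x, y) / p(x), defined where the input marginal is strictly positive
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1, keepdims=True)
    return np.divide(pxy, px, out=np.zeros_like(pxy), where=px > 0)

def joint_from_channel(px, k):
    # p(x, y) = p(x) * k(y|x); rows of k sum to one
    return np.asarray(px, dtype=float)[:, None] * np.asarray(k, dtype=float)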
then the following `` chain rule '' holds : we have a family of channels , and a channel that may not be in .then we can define the `` distance '' between and in terms of .let be an input distribution .the divergence between a channel and a family of channels is given by : if the minimum is uniquely realized , we call the channel the _ ri - projection _ of on ( and simply `` an '' ri - projection if it is not unique ). the families considered here are of two types , dual to each other : linear and exponential .for both cases , we take the closures , so that the minima defined above always exist .a _ mixture family _ of is a subset of defined by one or several affine equations , i.e. , the locus of the which satisfy a ( finite ) system of equations in the form : for some functions , and some constants .[ [ example . ] ] example .+ + + + + + + + consider a channel .we can form the marginal : the channels such that form a mixture family , defined by the system of equations ( for all , ) : where the function is equal to 1 for , and zero for any other case .more in general , let be a ( finite - dimensional ) linear subspace of , and let . then : is a mixture family , which we call _ generated by and . a ( closed ) _ exponential family _ of is ( the closure of ) a subset of of channels in the form : where satisfies affine constraints , is fixed , and : so that the channel is correctly normalized .this is a sort of multiplicative equivalent of mixture families , as the exponent satisfies constraints similar to .[ [ example.-1 ] ] example .+ + + + + + + + let be a ( finite - dimensional ) linear subspace of , and let .then the closure : is an exponential family , which again we call _ generated by and . this family is in some sense `` dual '' to the family in . the duality is expressed more precisely by the following result . [ dualmk ] let be a subspace of .let be strictly positive .let be a strictly positive `` reference '' channel .let and . for ,the following conditions are equivalent : 1 .2 . , and .3 . , and . in particular , is unique , and it is exactly .geometrically , we are saying that , the ri - projection of on .we call the mapping the _ ri - projection operator _ , and the mapping the _ i - projection operator _ these are the channel equivalent of the i - projections introduced in and generalized in .the result is illustrated in figure [ fig : dualmk ] . .the point at the intersection minimizes on the distance from , and minimizes on the distance from . ] as suggested by figure [ fig : dualmk ] , i- and ri - projections on exponential families satisfy a pythagoras - type equality .for any , with exponential family : this statement follows directly from the analogous statement for probability distribution found in , after applying the chain rule .the algorithm can be considered as a channel equivalent of the iterative scaling procedure for joint distributions , which can be found in chapter 5 of . translated into our language , that theorem says the following : [ jointit ] let be mixture families of joint distributions with nonempty intersection .denote by the -projection of a joint onto the family .consider the sequence that starts at and is defined iteratively by : then converges , and the limit point is the -projection of onto , i.e. 
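a minimal numpy sketch (our own naming) of the divergence between channels driven by an input distribution, with the joint-level divergence alongside so that the chain rule quoted above can be checked numerically:

import numpy as np

def channel_divergence(p, k, k_ref):
    # D_p(k || k_ref) = sum_x p(x) sum_y k(y|x) log( k(y|x) / k_ref(y|x) )
    # p: input distribution, shape (X,); k, k_ref: row-stochastic arrays, shape (X, Y)
    k, k_ref = np.asarray(k, dtype=float), np.asarray(k_ref, dtype=float)
    mask = k > 0
    terms = np.zeros_like(k)
    terms[mask] = k[mask] * np.log(k[mask] / k_ref[mask])
    return float(np.asarray(p, dtype=float) @ terms.sum(axis=1))

def kl(p, q):
    # kullback-leibler divergence between two (possibly joint) distributions
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

# chain rule with a common input marginal p:
#   kl(p[:, None] * k, p[:, None] * k_ref) == channel_divergence(p, k, k_ref)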
if we call : then , and for any : our result depends on the theorem above , in the following way .we define a marginal procedure for channels , which in general depends on the choice of an input distribution .we define mixture families of channels with fixed marginals in a way compatible with the equivalent for joints .we then define scalings of channels , and prove that they give the desired result at the joint level .this makes it possible to translate the statement of theorem [ jointit ] to an analogous statement for channels , theorem [ convergence ] .unless otherwise stated , all the input distributions here will be assumed strictly positive .all our proofs can be found in the appendix .consider an input distribution .let ,j\subseteq [ m ] , j\ne\emptyset ] , .we define the mixture families as : where the are prescribed channel marginals .analogously , let be a probability distribution in .we define the mixture families : proposition [ jeq ] says that , for any , for any ( strictly positive ) , and for any , j\subseteq [ m] ] .now is an element of , but still _ not _ of .however , if we iterate the operator , the resulting sequence will converge to the projector on .more in general , the following result holds : [ convergence ] for , let ] and ] .take an input distribution and a channel .define the mixture families of prescribed marginals : and their intersection , which is also a mixture family ( nonempty , as it contains at least ) : choose a ( different ) channel and consider the sequence of normalized scalings starting at and defined iteratively by : then : * converges to a limit channel : * the limit is the -projection of on , i.e. and : the proof can be found in the appendix . to apply the theorem [ convergence ] in our algorithm , we choose as initial channel exactly the reference channel of theorem [ dualmk ] , usually the uniform channel . as we take exactly the `` prescription channel '' of theorem [ dualmk ] , i.e. the channel which has the desired marginals .the result of the iterative scaling will be the ri - projection of on the desired exponential family .the algorithm presented here permits to compute the decompositions of mutual information between inputs and outputs in and .we give here examples of computations of _ pairwise synergy _ as an ri - projection for channels , as described in .it is not within the scope of this article to motivate this measure , we rather want to show how it can be computed . let be a channel from to .let be a strictly positive input distribution .we define in the synergy of as : where is the ( closure of the ) family of channels in the form : where : and : according to theorem [ dualmk ] , the ri - projection of on is the unique point of which has all the prescribed marginals : and can therefore be computed by iterative scaling , either of the joint distribution ( as it is traditionally done , see ) , or of the channels ( our algorithm ) . 
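theorem [jointit] is the classical iterative proportional fitting result for joint distributions, and since the channel algorithm reduces to it through the marginal operators introduced above, a compact joint-level sketch conveys the whole iteration. the function below is our own rendering; the stopping rule and the zero-handling convention (0/0 = 0) are our choices:

import numpy as np

def ipf_project(q0, axis_groups, targets, n_sweeps=500, tol=1e-12):
    # iterative proportional fitting: starting from q0 (one array axis per variable),
    # cyclically rescale q so that its marginal over each axis set in `axis_groups`
    # matches the prescribed target; the limit is the I-projection of q0 onto the
    # intersection of the corresponding mixture families.
    # targets[i] must be given in keepdims shape, i.e. with singleton summed axes.
    q = np.array(q0, dtype=float)
    for _ in range(n_sweeps):
        prev = q.copy()
        for axes, target in zip(axis_groups, targets):
            other = tuple(a for a in range(q.ndim) if a not in axes)
            marg = q.sum(axis=other, keepdims=True)
            ratio = np.divide(target, marg, out=np.zeros_like(marg), where=marg > 0)
            q = q * ratio                     # convention 0/0 = 0
        if np.abs(q - prev).max() < tol:
            break
    return q

# usage: project the uniform joint onto the distributions sharing the (x1,x2) and
# (x2,x3) marginals of a prescription joint p_hat of shape (a, b, c):
#   groups  = [(0, 1), (1, 2)]
#   targets = [p_hat.sum(axis=2, keepdims=True), p_hat.sum(axis=0, keepdims=True)]
#   q_star  = ipf_project(np.full(p_hat.shape, 1.0 / p_hat.size), groups, targets)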
herewe present a comparison of the two algorithms , implemented similarly and in the same language ( mathematica ) .the red dots represent our ( channel ) algorithm , and the blue dots represent the joint rescaling algorithm .for the easiest channels ( see figure [ fig : xor ] ) , both algorithm converge instantly .a more interesting example is a randomly generated channel ( figure [ fig : rand ] ) , in which both method need 5 - 10 iterations to get to the desired value .however , the channel method is slightly faster .the most interesting example is the synergy of the and gate , which should be zero according to the procedure . in that article, we mistakenly wrote a different value , that here we would like to correct ( it is zero ) .the convergence to zero is very slow , of the order of ( figure [ fig : and ] ) .it is clearly again slightly faster for the channel method in terms of iterations . , the joint method ( blue ) proportionally to . ]it has to be noted , however , that rescaling a channel requires more elementary operations than rescaling a joint distribution .because of this , one single iteration with our method takes longer than with the joint method .( as explained in section [ algo ] , a scaling for the channel corresponds to two scalings for the joint . ) in the end , despite the need of fewer iterations , the total computation time of a projection with our algorithm can be longer ( depending on the particular problem ) .for example , again for the synergy of the and gate , we can plot the computation time as a function of the accuracy ( distance to actual value ) , down to .the results are shown in figure [ fig : comp ] . to get to the same accuracy , though, the channel approach used less iterations . in summary ,our algorithm is better in terms of iteration complexity , but generally worse in terms of computing time .iterative scaling can also be used to compute measures of complexity , as defined in , , and in section 6.9 of . for simplicity ,consider two inputs , two outputs and a generic channel between them .in general , any sort of interaction is possible , which in terms of graphical models ( see ) can be represented by diagrams such as those in figure [ fig : graphs1 ] ..5 and are indeed correlated , but only indirectly , via the inputs .b ) the graphical model corresponding to a non - complex system ., title="fig : " ] [ fig : graph2 ] .45 and are indeed correlated , but only indirectly , via the inputs .b ) the graphical model corresponding to a non - complex system ., title="fig : " ] [ fig : gp1 ] any line in the graph indicates an interaction between the nodes .in the outputs are assumed to be conditionally independent , i.e. they do not directly interact ( or , their interaction can be _ explained away _ by conditioning on the inputs ) .in this case the graph looks like figure [ fig : graphs1]a , and the maginals to preserve are those of the family of pairs , with : , , , .suppose now that correspond to at a later time .in this case it is natural to assume that the system is not complex if does not depend ( directly ) on , and does not depend ( directly ) on .intuitively , in this case `` the whole is exactly the sum of its parts '' . 
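as an illustration of the joint-level computation that the channel scaling parallels, the pairwise synergy of the and gate can be approximated with the ipf sketch above. we stress that the choice of prescribed marginals below, namely the three pairwise ones (x1,x2), (x1,y) and (x2,y), is our reading of the definition, since the marginal list is not reproduced in the extracted text; with this reading the divergence should drift toward zero at the slow rate noted above:

import numpy as np

# and gate with uniform inputs: p(x1, x2, y), shape (2, 2, 2)
p_and = np.zeros((2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        p_and[x1, x2, x1 & x2] = 0.25

groups = [(0, 1), (0, 2), (1, 2)]            # assumed marginals (x1,x2), (x1,y), (x2,y)
targets = [p_and.sum(axis=2, keepdims=True),
           p_and.sum(axis=1, keepdims=True),
           p_and.sum(axis=0, keepdims=True)]
q_star = ipf_project(np.full(p_and.shape, 1.0 / p_and.size), groups, targets,
                     n_sweeps=20000)

mask = p_and > 0
synergy = float(np.sum(p_and[mask] * np.log(p_and[mask] / q_star[mask])))
# synergy creeps toward zero at the slow rate reported in the text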
in terms of graphical models, this means that our system is represented by figure [ fig : graphs1]b , meaning that the subsets of nodes in question are now only the ones given by , , , .these channels ( or joints ) form an exponential family ( see ) which we call ..5 a , with correlation between the outputs .b ) the non - complex model of figure [ fig : graphs1]b , with correlation between the outputs . , title="fig : " ] [ fig : graph32 ] .45 a , with correlation between the outputs .b ) the non - complex model of figure [ fig : graphs1]b , with correlation between the outputs . , title="fig : " ] [ fig : gp2 ] suppose now , though , that the outputs are not conditionally independent anymore , because of some `` noise '' ( see and ) .this way the interaction structure would look like figure [ fig : graphs3 ] , i.e. the `` complete '' subset given by with and .in particular , a non - complex but `` noisy '' system would be represented by figure [ fig : graphs3]b , and have subsets of nodes given by the pairs , with : , , , , , .such channels form again an exponential family , which we call .we would like now to have a measure of complexity for a channel ( or joint ) . in , the measure of complexityis defined as the divergence from the family represented in figure [ fig : graphs1]b .we will call such a measure . in case of noise , however , it is argued in and that the divergence should be computed from the family represented in [ fig : graphs3]b ( for example , as written in the cited papers , because such a complexity measure should be required to be upper bounded by the mutual information between and ) .we will call such a measure .both divergences can be computed with our algorithm . as an example, we have considered the following channel : with : here represents a node of `` unknown input noise '' that adds correlation between the outputs ( of unknown form ) when if it is not observed .we have chosen and , and a uniform input probability .after marginalizing out ( obtaining then an element of the type of figure [ fig : graphs3]a ) , we can compute the two divergences : * . * .this could indicate that is incorporating part the correlation of the output nodes due to the `` noise '' , and therefore probably overestimating the complexity , at least in this case .one could nevertheless also argue that can underestimate complexity , as we can see in the following `` control '' example .consider the channel : with : which is represented by the graph in figure [ fig : graphs1]a . if the difference between and were just due to the noise , then for our new channel and should be equal .this is not the case : * . * .the divergences are still different .this means that there is an element in , which does _ not _ lie in , for which : the difference is this time smaller , which could mean that noise still does play a role , but in rigor it is hard to say , since none of these quantities is linear , and divergences do not satisfy a triangular inequality .we do not want to argue here in favor or against any of these measures .we would rather like to point out that such considerations can be done mostly after explicit computations , which can be done with iterative scaling .5 csiszr , i. and shields , p. c. . , 1(4):417528 , 2004 goodman , j. . : 916 , 2002 .olbrich , e. , bertschinger , n. , and rauh , j. . , 17(5):35013517 , 2015 perrone , p. and ay , n. ., 35(2 ) , 2016 . ay , n. . ,17 , 24322458 , 2015 .oizumi , m. , tsuchiya , n. , and amari , s. , 2015 .amari , s. springer , 2016 .kakihara , y. 
.world scientific , 1999 .csiszr , i. ., 3:146158 , 1975 .csiszr , i. and mat , f. ., 49:14741490 , 2003 .amari , s. ., 47(5):17011709 , 2001 .lauritzen , s. l. .oxford , 1996 .williams , p. l. and beer , r. d. . , 2010 .amari , s. and nagaoka , h. . , 1982 .amari , s. and nagaoka , h. .oxford , 1993 . : choose a basis of .define the map , with : and : then : + \operatorname{\mathbb{e}}_p[\log z_\theta]\;.\ ] ] deriving ( where is w.r.t . ) : + \operatorname{\mathbb{e}}_p\left [ \dfrac{\operatorname{\partial}_j z_\theta}{z_\theta } \right]\;.\ ] ] the term in the last brackets is equal to : so that now reads : + \operatorname{\mathbb{e}}_{pk_\theta}[f_j]\;.\ ] ] this quantity is equal to zero for every if and only if . now if is a minimizer , it satisfies , and so .viceversa , suppose , so that it satisfies for every . to prove that it is a global minimizer , we look at the hessian : this is precisely the covariance matrix of the joint probability measure , which is positive definite . : for every , we have : \;. \end{aligned}\ ] ] if , then : = d_p(m||k ' ) + \operatorname{\mathbb{e}}_{pm}\left [ \log\dfrac{k'}{k_0 } \right]\;.\ ] ] by definition of , the logarithm in the last brackets belongs to , and since : = \operatorname{\mathbb{e}}_{pk}\left [ \log\dfrac{k'}{k_0 } \right ] = \operatorname{\mathbb{e}}_{pk'}\left [ \log\dfrac{k'}{k_0 } \right]\;.\ ] ] inserting in : = d_p(m||k ' ) + d_p(k'||k_0)\;.\ ] ] since , shows that is a minimizer . since is strictly convex in the first argument, its minimizer is unique . for in : = \sum_{x , y } p(x)\,k(x;y)\,f(x , y ) = \sum_{x_i , y_j } p(x_i)\,k(x_i;y_j)\,f(x_i;y_j)\;,\ ] ] and just as well : = \sum_{x , y } p(x)\,\bar{k}(x;y)\,f(x , y ) = \sum_{x_i , y_j } p(x_i)\,\bar{k}(x_i;y_j)\,f(x_i;y_j)\;.\ ] ] the definition in ( with strict positivity of ) requires exactly that : =\operatorname{\mathbb{e}}_{p\bar{k}}[f]\ ] ] for every . using and, the equality becomes : for every in , which means that .take as initial distribution and form as in theorem [ jointit ] the sequence of -projections . according to theorem [ jointit] , this sequence converges to the -projection of on . since }(p)$ ] , this projection will have input marginal equal to , and so we can write it as for some uniquely defined channel .we have , for : so in particular , for the subsequence of even - numbered terms also : this subsequence is defined iteratively by : }^{p } \operatorname{\sigma}_{i_jj_j}^{p \bar k } q^{2(j-1)}\;.\ ] ] propositions [ jlevel ] and [ inscale ] imply then that : for every , where is the sequence defined in the statement of theorem [ convergence ]. therefore this sequence converges : since for all , for all because of , which by definition means that .moreover , is the -projection of on , which means that : using the chain rule of the kl - divergence , we get : which means that is the -projection of on . | here we define a procedure for evaluating kl - projections ( i- and ri - projections ) of channels . these can be useful in the decomposition of mutual information between input and outputs , e.g. to quantify synergies and interactions of different orders , as well as information integration and other related measures of complexity . the algorithm is a generalization of the standard iterative scaling algorithm , which we here extend from probability distributions to channels ( also known as transition kernels ) . * keywords : * markov kernels , hierarchy , i - projections , divergences , interactions , iterative scaling , information geometry . |
for the solution of large stiff problems of the type arises from the discretization of unbounded sectorial operators and is a nonlinear function , in recent years much work has been done on the construction of exponential integrators that might represent a promising alternative to classical solvers ( see e.g. or for a comprehensive survey ) . as well known the computation of the matrix exponential or related functions of matrices is at the core of this kind of integrators .the main idea is to damp the stiffness of the problem ( assumed to be contained in ) on these computations so that the integrator can be explicit . under the hypothesisthat the functions of matrices involved are exactly evaluated , the linear stability can be trivially achieved for both runge - kutta and multistep based exponential integrators and hence highly accurate and stable integrators can be constructed . on the other hand ,the main problem with this class of integrators is just the efficient computation of such functions of matrices , so that , very few reliable codes have been written ( we remember the rosenbrock type exponential integrators presented in , , ) .for this reason many authors are still doubtful about the potential of exponential integrators with respect to classical implicit solvers even for semilinear problem of type ( [ pr ] ) .an exponential integrator requires at each time step the evaluation of a certain number ( depending on the accuracy ) of functions of matrices of the type , where being the time step .actually this represents the general situation for the exponential time differencing methods , that is , the methods based on the variation - of - constants formula ; for lawson s type method ( also called integrating factor methods ) only the matrix exponential is involved .we refer again to and the reference therein for a background . among the existing techniques for the computation of functions of matrices( we quote here the recent book of higham for a survey ) , in this context the restricted - denominator ( rd ) rational arnoldi algorithm introduced independently in and for the computation of the matrix exponential seems to be an reliable approach .it is based on the use of the so called rd rational forms , studied in for the exponential function , is a polynomial of degree we refer again to morno for the basic references about the properties and the use of such rational forms . while in the matrix case , the use of these approximants requires the solution of linear systems with the matrix , as shown in in the context of the solution of ( [ pr ] ) when is sectorial so typically sparse and well structured this linear algebra drawback can be almost completely overtaken organizing suitably the step - size control strategy and exploiting the properties of the rd arnoldi method concerning the choice of the parameter . in other words the number of linear systems to be solved can be drastically reduced with respect to the total number of computations of functions of matrices required by the integrator .therefore the mesh independence property of the method , that leads to a very fast convergence with respect to a standard polynomial approach ( see again ) , can be fully exploited for the construction of competitive integrators .a problem still open is that inside the integrator the rational arnoldi algorithm ( responsible for most of the computational cost ) have to be supported by a robust and sharp error estimator . 
in the self - adjoint casethe problem has been treated in where the author presents effective a - posteriori error estimates , even in absence of information on the location of the spectrum of .anyway , in the general case , when ( pr ) arises for instance from the discretization of parabolic problems with advection terms and/or non - zero boundary conditions the numerical range of , that we denote by , may not reduce to a line segment . in this sensethe basic aim of this paper is to fill this gap providing error estimates for the non - symmetric case using as few as possible information about the location of .it is necessary to keep in mind that a competitive code for ( [ pr ] ) should also be able to update ( interpreted as the jacobian of , , ) so that is may be not fixed during the integration , and so it is important to reduce as much as possible any pre - processing technique to estimate .in particular assuming that shall provide a - posteriori error estimates for the rd arnoldi process using only information about the angle of the sector containing , angle that is typically independent of the sharpness of the discretization and hence computable working in small dimension .the paper is organized as follows . in section 2we present the basic idea of the rd rational arnoldi method and in section 3 we derive some first general error bounds based on the standard approaches . in section 4 , exploiting the relation between the derivatives of the function and the laguerre polynomials extended to the complex plane , we derive some a - posteriori error bounds .the problem of defining reliable a - priori bounds is investigated in section 5 .section 6 is devoted to the analysis of the generalized residual as error estimator , that can be used to obtain information about the choice of the parameter for the rational approximation . in section 7we present some numerical examples arising from the discretization of a one - dimensional advection - diffusion model . in section 8we provide some hints about the use of the rd rational arnoldi method inside an exponential integrator with the aim of reducing as much as possible the number of implicit computations of .finally , in section 9 we furnish a deeper analysis concerning the fast rate of convergence of the method , that will provide further information about the optimal choice of the parameter .in what follows we denote by the euclidean vector norm and its induced matrix norm . as already mentioned , the notation indicates the _ numerical range _ of , that is , the spectrum of is denoted by . the notation indicates the space of the algebraic polynomials of degree . given ,let the unbounded sector of the left half complex plane , symmetric with respect to the real axis with vertex in and semiangle .let moreover be the boundary of . throughout the paperwe assume that , the interior of .accordingly , is a so - called sectorial operator ( see e.g. chap .v , for a background ) . 
given a vector , with ,consider the problem of computing is defined by ( [ defi ] ) .the rd rational approach seeks for approximations to of the type is a suitable parameter .turning to the matrix case , is approximated by elements of the krylov subspaces with respect to and the matrix defined by the transform in this sense the idea is to use a polynomial method to compute , where singular at .for the construction of the subspaces we employ the classical arnoldi method .as is well known it generates an orthonormal sequence , with , such that .moreover , for every , ] .formula ( [ topt ] ) obviously requires to know the number of iterations that are necessary to achieve a certain accuracy . in this sensewe need to bound in some way . by ( [ try ] ) and since each monic polynomial of exact degree ( see p. 269 ) , a bound for can be stated using faber polynomials as explained in , that leads to is the logarithmic capacity of a compact and where is analytic .[ pap]let and assume that , with . then for since by proposition [ pint ] , let us consider the compact subset .the associated conformal mapping given by .the coefficient of the leading term of the laurent expansion ( [ le ] ) is the logarithmic capacity , so that by ( be ) we have this bound in ( [ fe2 ] ) we easily obtain for the second inequality arises from the choice .now , by the definition ( [ cj ] ) , it is rather easy to show that that ^{m}. \label{app}\]]since for , and since for each proof is complete. proposition [ pap ] shows the mesh - independence of the method for since the bound ( [ ap ] ) is independent of the discretization of the underlying sectorial operator . in section 9 this considarationis extended to . by ( [ app ] ) and ( [ rot ] ) , in the self - adjoint case ( ) the bound ( [ ap ] ) reads it is worth noting that by ( [ fe ] ) for every we have that we assume that is compact , connected , with associated conformal mapping , and such that .therefore , in principle , one could try to derive a - priori error bounds choosing suitably the polynomial sequence .anyway , the classical results in complex polynomial approximation state that even taking as a sequence of polynomials that asymptotically behaves as the sequence of polynomial of best uniform approximation of on ( see e.g for a theoretical background and examples ) we have ^{1/m}\rightarrow \frac{1}{r}\text{\quad as } m\rightarrow \infty , \]]where is such that , since is singular at 0 ( _ maximal convergence _ property , see e.g chapter iv ) .the main problem is that assuming to be unbounded , and consequently . for this reasons , in our opinionthe only reasonable approach to derive a - priori error bounds , is to define as a sequence of polynomials interpolating at point belonging to , and then to use the hermite - genocchi formula to bound the divided differences . using this formula and taking for instance as the sequence of interpolants at the zeros of faber polynomialswe just obtain the error bound given in proposition [ pap ] ( see ) .by the integral representation of function of matrices and ( [ kn ] ) , we know that the error can be written as. 
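a compact sketch of the procedure just outlined may help fix ideas. the routine below is ours (names, the breakdown tolerance and the restriction to the plain matrix exponential are our choices; the phi-functions needed by exponential integrators are evaluated analogously on the small matrix): it builds the krylov space of the resolvent with a single sparse lu factorization, back-transforms the small hessenberg matrix, and evaluates the exponential there:

import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import splu
from scipy.linalg import expm

def rd_arnoldi_exp(A, v, h, gamma, m):
    # restricted-denominator (shift-and-invert) arnoldi sketch for exp(h*A) v.
    # A: scipy sparse matrix (the stiff discretized operator), v: vector,
    # gamma: free parameter of the rational form, m: krylov dimension.
    n = A.shape[0]
    lu = splu((identity(n, format="csc") - gamma * A).tocsc())  # one factorization only
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = lu.solve(V[:, j])                 # w = (I - gamma*A)^{-1} v_j
        for i in range(j + 1):                # modified gram-schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:               # breakdown: the krylov space is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # undo the resolvent transform on the small matrix: A_m = (I - H_m^{-1}) / gamma
    Am = (np.eye(m) - np.linalg.inv(Hm)) / gamma
    return beta * (V[:, :m] @ expm(h * Am)[:, 0])

the single lu factorization of the shifted matrix computed here is precisely the object one tries to keep fixed across several evaluations and time steps, which is what motivates the later discussion on the window of admissible values of the parameter.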
\label{fot1}\]]in order to monitor the approximations during the computation we can consider the so - called generalized residual , defined as is obtained from ( [ fot1 ] ) by replacing the error the corresponding residual the fundamental relation ( [ i1 ] ) we have immediately inserting this relation in ( [ res ] ) we obtain that we may assume in order to show the reliability of this approximation let us consider the operator discretized with central differences in ] such that for the number of iterations necessary to achieve the same tolerance is at most ( ) . using ( [ xp ] ) and the approximation ( ) that is obtained forcing the equal sign in the a - priori bound ( [ bh ] ) , in figure 4 we can observe the result for . for each corresponding extremal points and of the intervals and are plotted .these points are obtained solving with respect to the equation ( cf .( [ xp])) .we point out that the results are even a bit conservative with respect to what happens in practice , and this is due to the approximation .indeed larger intervals would be obtained taking as it occurs in practice . inorder prove the effectiveness of the above considerations let us consider again the operator ( [ lu ] ) with the usual discretization .we consider the case , for . to define consider again the discretization with interior points observing the generalized residual .this leads us to define with . in figure 5we consider the behavior of the method for , and .the robustness of the method with respect to the choice of is maybe the most important aspect concerning its use inside an exponential integrator .we want to give here some practical suggestions assuming to use a sparse factorization technique to solve the linear systems with , that , computationally , has to be considered the heaviest part of the method . 1 .working in much smaller dimension compute and use the generalized residual to estimate the initial .2 . for nonlinear problems , interpreting as the jacobian of the system ( , ) , it is necessary to introduce some strategies in order to reduce as much as possible the number of updates of during the integration , since each update would also require to update the factorization . as for exponential w - method ( see , ) , we suggest , whenever it is possible , to work with a time - lagged jacobian and hence to introduce the necessary order conditions in order to preserve the theoretical order .3 . using a quasi - constant step - size strategy ( without jacobian update ) allows to keep the factorization of constant for a certain number of steps . whenever it is necessary to update the stepsize without changing the jacobian , if we want to keep the previous factorization of we just need to consider the ratio . if ( indicatively ) it is bigger than or smaller than ( cf . figure 4 and 5 ) , where arises from a previous analysis of the generalized residual , then we need to update the factorization ( cf . again ) , otherwise we can keep the previous one . 
in this phase , however , one can even considers other strategies to define suitably the window of admissible values of around , taking into account of the local accuracy required by the integrator , the norm of , etc .looking carefully at figure 5 we notice that while the analysis in smaller dimension suggested to take for reaching the desired tolerance in exactly iterations the method is unexpectedly a bit faster taking ( second picture ) .the analysis was correct because in larger dimension the method actually achieves the tolerance in iterations ( first picture ) . in order to understand the reason of this behavior , we need to remember that the definition of given at the end of section 4 was based on the assumption that is independent of but this is not true . in what follows we try to provide a more accurate analysis studying the decay of .we denote by , , the singular values of .moreover we denote by , the eigenvalues of and assume that for .we have the following result ( cf .ne theorem 5.8.10 ) .[ nev]assume that and .then as already shown in section 4 each monic polynomial of exact degree ( see p. 269 ) , so that theorem [ nev ] reveals that the rate of decay of is superlinear and depends on the -summability of the singular values of .we remark moreover that an almost equal bound has been obtained in studying the convergence of the smallest ritz value of the lanczos process for self - adjoint compact operators . in practice, the use of ( [ bn ] ) requires the knowledge of and a bound for , that is , information about the singular values of the operator . as a model problemwe consider again the operator defined by ( lu ) with , whose eigenvalues are , , so that the eigenvalues of are given by . in this case( [ psum ] ) holds for so that can be referred to as a _ trace class _ operator ( see again ) .hence , taking for instance we have so bound ( [ pmh ] ) reveals that the rate of decay depends on the choice of and then on . for large values of ,say , the bound ( [ tb ] ) can be heavily improved exploiting the properties of the function and the convergence is extremely fast .the following proposition states a general superlinear bound that can be used when is an elliptic differential operator of the second order , so with singular values growing like .the proof is straightforward since we just require to bound , and apply ( [ bn ] ) with .let be an elliptic differential operator of the second order .then there exists a constant such that this proposition can easily be generalized to operator of order , exploiting corollary 5.8.12 in which the author extends theorem [ nev ] for . anyway , this is beyond the purpose of this section . from a practical point of view ,formula ( [ hh ] ) is almost useless since too much information on would be required . on the other side ,it is fundamental to understand the dependence on .setting as usual and putting the corresponding bound ( [ hh ] ) in theorem [ pro1 ] ( formula ( [ fe2 ] ) ) , we easily find that the theoretical optimal value for is obtained seeking for the minimum of respect to , that is, new value , less than , explains our considerations about figure 5 given at the beginning of this section .we need to point out that since the choice of is independent of and , formula ( [ hh ] ) is quite coarse for small values of and not able to catch the fast decay of . 
in any case , if an estimate of is available an a - priori bound for the rd arnoldi method can be obtained taking(cf .( [ bh ] ) ) .consequently we argue that close to for large and to for small .in this paper we have tried to provide all the necessary information to employ the rd arnoldi method as a tool for solving parabolic problems with exponential integrators . the little number of codes available in literature , and consequently , the little number of comparisons with classical solvers is a source of skepticism about the practical usefulness of this kind of integrators . indeed , with respect to the most powerful classical methods for stiff problems , the computation of a large number of matrix functions , generally performed with a polynomial method , is still representing a drawback because of the computational cost .the use of polynomial methods for these computations may even be considered inadequate whenever we assume to work with an arbitrarily sharp discretization of the operator , since this would result in a problem of polynomial approximation in arbitrarily large domains . for these reasons , the use of rational approximations as the one here presented ,should be considered a valid alternative because of the fast rate of convergence and the mesh independence property , provided that we are able to exploit suitably the robustness of the method with respect to the choice of the poles , as explained in section 8 for our case | in this paper we investigate some practical aspects concerning the use of the restricted - denominator ( rd ) rational arnoldi method for the computation of the core functions of exponential integrators for parabolic problems . we derive some useful a - posteriori bounds together with some hints for a suitable implementation inside the integrators . numerical experiments arising from the discretization of sectorial operators are presented . |
web based deals offering deep discounts to a group of online buyers on products and services is a fast growing market .group - buying deals attract new customers as well as guarantee customer traffic within a stipulated expiry date for local businesses like restaurants and tour operators .most of these group - buying deals are sold by intermediaries like gropon , groupbuy and many other daily deal providers .though these intermediaries depended on email based marketing models in the past , banner ads in social networking and other sites are increasingly used to attract deal customers . unlike the traditional ads ,group - buying intermediaries receive their payment only upon satisfying the minimum number of conversions before the deal expiry ( i.e. if the deal tips ) .this implies that if the deal does not tip , advertiser loses the amount used to buy impressions , and receives no payment .if the advertiser fulfills or exceeds the guarantee , he receives a payment equal to the product of number of conversions and pay per conversions similar to the traditional ads .this model is used by popular group - buying advertisers like groupon , and groupbuy among many other deal providers . though most of these deals tips for sites like groupon in current email based marketing , tipping the deals will get harder with increasing competition to attract business owners and shift to the display - ad based marketing .the proposed strategy enables the deal advertisers to offer more aggressive tipping points , hence more volume of sales to merchants .further , this model is easy scale to other forms of group - buying campaigns like penalties for not meeting tipping similar to guaranteed display ads . to maximize the profits while bidding for group - buy ads , bidders have to minimize cost by bidding low , but still have to win sufficient number of conversions to satisfy guarantees before the deal expiry .bidding high increases the probability of winning impressions thereby improves the chance of the deal tipping . on the contrary , higher bids increase the payment to the exchange thereby reducing the profit .hence bids need to be optimized considering these two conflicting pulls .this maximal profit bidding necessitates dynamic bid optimization based on the time to expiry and the number of received conversions .we address this problem of maximizing deal bidder profits , by real - time optimization of bids to minimize the cost of impressions while satisfying the deal tipping guarantees . for group - buying deals , the traditional static bidding strategies based on optimization of expected profits of a single impression is far from optimal .a significant difference from the traditional ads is that the optimal bid value depends on the time to expiry and number of more conversions required to satisfy the guarantees .for example , consider a deal requiring just a few more conversions to fulfill the guarantee .if the deal is about to expire the advertiser would have to bid higher amounts to increase the probability of winning more impressions . on the other hand , if the time to expiry is long for the same deal , he would better off bidding smaller amounts winning fewer fraction of impressions to minimize the payment to the exchange ( since there would be higher number of user visits in larger time intervals ) . evidently , the optimum bid amount is a function of the time dependent parameters like the time to expiry and the additional number of conversions needed to satisfy guarantees . 
due to this time dependence of optimal bids ,any static bidding strategy will be sub - optimal , necessitating real - time bidding .fortunately , this dynamic bid optimization is made possible by the advent of ad exchanges offering real - time auctions ( e.g. realmedia , doubleclick , adecn ) . since the revenue is conditional upon tipping the deal, the bidding strategies are significantly harder than the traditional non - guaranteed bidding .in addition to the dynamic quantities mentioned above , deal profit depends on a number of static quantities : pay per event , number and bid distributions of other bidders , conversion rates , and the auction mechanism .consequently , formulating and maximizing expected profits which is a function of all these static and dynamic quantities is significantly harder . adding to the complexity , the optimization is online necessitating low computation timings . our method of optimizing profit for guaranteed deals has two steps : ( i ) formulating the expected profit ( ii ) maximizing the profit against the bid .for the first step , we derive the expected profit as a function of the bid value , time to expiry , fulfilled conversions , amount spent to buy impressions , auction mechanism , click through rate and the number and distribution of the other bidders . since many of these parameters are dynamic as described above , the objective function value changes as the bidding progresses .among all these parameters , the only parameter the bidder can change is his own bid amount .hence we optimize the expected profit against the bid amount in the second step .when the profits are optimum , the deal bids are in a symmetric bayesian nash equilibrium similar to the traditional ads . considering the complexity of the optimization , a closed form solution is unlikely .though the optimization is against a single variable ( i.e. bid amount ) , our analysis shows that the objective function is neither convex nor quasi - convex ( unimodal ) .consequently , an optimization method guaranteed to converge to optimal bids on every instance is unlikely .further , the derivative of the objective function is harder to solve than the objective itself .considering these factors , we resort to direct numerical optimization ( without using gradients ) starting from multiple points . *running time minimization : * since the optimization is online , computation time needs to be minimized .therefore we explore running time optimization in multiple levels .firstly , we use a fast converging brent s optimizer .secondly , we reformulate the objective for faster computation for typical parameter values .further , we approximate large binomial cumulative probability expressions with a single term normal approximation .since the changes in the optimal bids for subsequent impressions are incremental , we reuse optimal bid values of previous impressions whenever changes are likely to be negligible .* extensions : * interestingly , the solutions of many related problems can be directly derived from the proposed objective function .we describe the four proposed extensions below : ( i ) deal selection : : deal selection chooses the best deals to bid to maximize the bidder profits . 
combining optimal bidding and selection, we derive the bidders private value and the marginal profit increase for the impression for each deal .the deal with the highest marginal profit increase is the greedy optimal selection .( ii ) deal admissibility : : admissibility is the problem of predicting whether bidding for a group - buying deal is likely to be profitable based on its attributes . the intermediary or the advertiser may decide to accept or reject a deal campaign based on admissibility criterion . we show that a special case of our objective function combined with the bid optimization provides effective admission control . (iii ) non - bidding selection : : for non - bidding scenarios like the publisher directly selecting the deals to display , the proposed formulation suggests optimal selection among the inventory of deals .( iv ) non - guaranteed ads : : we show that the real time bid optimization of traditional non - guaranteed ads is a special case of the proposed optimization .when there are no guarantees , the proposed objective function reduces to expected profits of traditional ads , yielding known optimal static bid formulations .thus the method serves as a unified real time bidding strategy for both guaranteed and non - guaranteed ads .* evaluations and results : * we evaluate the proposed methods and the extensions in a query log of size 9.3 million impressions of 935 ads . in our first set of experiments ,we compare our profits of the proposed real time strategy with the optimal static and base adaptive baselines .the results show that the proposed strategy improves the profits over the baselines significantly .subsequently , experiments showing improved profits in spite of violated assumptions of the competitor bids demonstrate robustness of the strategy .further , our running time evaluations demonstrate acceptable optimization timings .finally , our evaluations of the ad selection and admissibility demonstrate that the extensions improve profits significantly over the baselines .rest of the paper is organized as follows .the next section discusses related work , followed by section on notations and the formal problem definition .section [ sec - maximizingprofit ] derives the expected profits and proposes the optimization method .subsequently , we discuss running time minimizations .next section presents extensions of the problem to deal selection , admissibility , and bidding of traditional ads . section [ sec - evaluations ] present the experimental evaluations and results .finally we present our conclusions in section [ sec - conclusions ] .grabchak _ et al . _ addressed the problem of optimal selection of guaranteed ( group buying ) ads .our work is different , since we deal with optimal bidding , whereas grabchak __ does not consider the bidding and consider offline selection of deals .further , even the non - bidding selection sub - problem discussed in this paper is different since we consider a minimum number of conversions like deals , whereas grabchak __ consider an exact number of required conversions . different models of group - buying auctions and bidding mechanisms has been studied .but our problem of bidding to sell deals online mostly made popular after the emergence of dail - deal sites has not been studied for any of the group - buying auction models .considering related problems of allocation and bidding of display ads , ghosh _ et al . 
_ considered allocating guaranteed display impressions matching a quality distribution representative of the market .vee _ et al . _ analyzed the problem of optimal online matching with access to random future samples .boutilier _ et al . _ introduced an auction mechanism for real time bidding of display ads .there are a number of papers on optimal ranking of textual ads in presence of budget limits .mehta _ et al . _ deal with the problem of optimal allocation of textual ads considering budget limits of the advertisers .buchbindar _ et al . _ provided a simpler primal - dual based analysis achieving the same competitive ratio .these papers consider ranking / allocation of textual ads than deals .further these problems have an upper limit on number of impressions , rather than a lower limit as in our problem .hence , unlike these problems , ours is not a generalized online bipartite matching . with the increase of ad exchanges offering real - time bidding, there are a few papers on related problems .chen _ et al . _ formulated the problem of supply side allocation of traditional ads with upper bounds on budgets as an online constrained optimization matching problem .chakraborty _ et al . _ considered the problem of ad exchanges calling out a subset of ad - networks without exceeding capacity of individual networks for real time bidding . to the best of our knowledge ,the optimal bidding problem of group - buy deals and the extensions have not been addressed .every group - buy deal has a required minimum clicks , an expiry time , a cost per click ( cpc ) , and a click through rate ( ctr ) .thus a deal may be represented as , for the rest of the paper our discussions are based on guaranteed number of clicks for the ease of description .the discussions and results are equally applicable for guaranteed conversions ( refer to section [ sec - gnerelizations ] for the details ) ( by substituting conversion rates ( cvr ) for click through rates and click per action ( cpa ) for cpc ) and guaranteed displays ( by setting click through rate to one and substituting cost per impression ( cpi ) for ctr ) .let be a binary indicator variable , with if the advertiser s bid is successful at time .e .he wins the bid for impressions and pays the content owner and zero otherwise .let be the number of clicks at time . for our discussions ,the time denotes user visits ( impression opportunities ) rather than wall clock time . for a deal the profit at time is , where is a mapping from bids to the payment whose closed form depends on the auction model , the number of other bidders and the bid distributions . forthe commonly used first price auction for display ads . for other auctions like second price auction, closed forms can be derived based on order statistics . after fulfilling guarantees( i.e. ) the expected profit function for the guaranteed deals are the same as that of the traditional non - guaranteed ads .hence , the period of interest for our analysis and experiments is the time before guarantees are fulfilled . to maximize the profit in equation [ eqn - profitforumuation ] ,the only parameter decided by the bidder is the bid amount .hence we may state the profit maximization problem as , * bidding problem : * _ given a guaranteed ad , and number of received conversions , find the bid amount such that the expected profit from user visits is maximal , where is the expected number of user visits before the ad expiry time . 
_to explain the nature of the problem , we start by finding the optimal bid based on the expected values of parameters at .this is the best possible estimate at that point of time .as time progresses , we will get better estimates of parameters based on the actual values of number of conversions , and user visits so far .hence we keep updating the optimal bid based on the current state and expected numbers in the future .we assume that is known , as it can be generally estimated from the traffic statistics .we derive the expected profits of group - buy deal campaigns based on the current state of the deal .subsequently we analyze the nature of the the profit - function , and present a method to maximize the profits in real - time by bid adjustments .the click probability of a deal is , the first factor is equal to the ctr of the deal is a constant for static auctions considered here . the second factor probability of winning impression an increasing function of the bid amount .this implies that the probability of satisfying click guarantees , and consequently the expected profit increase with the bid amount . on the contrary ,the amount paid by the bidder to the publisher ( ) is an increasing function of the bid amount .hence the profit tends to decrease with increasing bid amount .the bids need to be optimized considering these two conflicting effects on the profit . for real - time bidding , different advertisers or intermediaries place bids for a given ad impression .generally the highest bidder wins , and will display his ad . in general bid values of a bidder varies , either due to the bidder s private value distribution , or due to a deliberate randomization done by the bidder to avoid giving advantage to the competition .hence , the event of winning is probabilistic , with a binary outcome .further , winning in consecutive bids can be assumed to be independent of each other . hence bidding to win impressions are bernoulli trials with success probability increasing with the bid amount .the users click with probabilities equal to the estimated ctr of the winning ad .this is again a bernoulli trial with success probability equal to the ctr .hence these two trials bidding and getting conversions may be combined as a single bernoulli trial of bidding to win clicks .the probability of success for this composite trial is equal to the product of ctr and probability of winning an impression . for composite bernoulli trial described above, the number of successes follows a binomial distribution .to facilitate representing such a binomial distribution , we introduce the following two functions , where is the ctr of the ad , and is the additional number of clicks required to satisfy the guarantees . function is a mapping from the bid value to the probability of winning the impression . for a sealed bid auctionin which the highest bid wins ( e.g. first or second price auctions ) , this probability is , where is the cumulative probability distribution of the bids of other bidders , and is the total number of bidders . to get a closed form of we need to assume a distribution function of bids .for example , if the bids are uniformly distributed between and , .similar closed forms can be derived for other distributions , and even for cases where different competitors following different distributions . 
at optimal profits ,the bids are the best responses to competitors and hence are in a symmetric bayesian - nash equilibrium .consequently , we may limit our analysis to truthful bidding without the loss of generality as stated by the revelation principle .hence the assumptions on bid distributions above are equivalent to the same assumptions on private value distributions of bidders at the optimal profit outcomes . now the net expected profits is given by the objective function , please refer to appendix [ appendix - secexpectedprofitderivation ] for the derivation of the expected profits .the expected profit in equation [ eqn - expectedprofitsingledeal ] has to be optimized with respect to the bid amount .an option is to differentiate the function with respect to and solve the derivative for zero .but this is hard since the derivative may have large number of terms , and solving the derivative will be harder than a direct approach .hence a direct optimization of the objective function as we do below is faster .an example curve of variation of the objective function with bids is shown in figure [ fig - objectivenonconvex ] .two observations significant to the numerical optimization are ( i ) the optimization is non - convex .( ii ) the function is not even quasi - convex ( unimodal ) .this implies that a bisection or gradient descent method may get trapped in a local optimum , and hence the convergence to the global optima is not guaranteed .consequently , we need to start the optimization from multiple points making the problem harder . for the bidding process, the winning probability is one if the bid is greater than the maximum bid of the competitors bid distribution ; and zero for bids less than the minimum bids . hence the optimal values will always be between the maximum and minimum even without imposing external constraints .this allows a simpler unconstrained optimization .the optimizer restarts the search from multiple random starting points to avoid local minima traps ( the details of the restarts are discussed in the section [ subsec - minimizingrunning ] ) .further since the optimization is online , fast - convergence is highly desirable . considering these factors , we adapt brent s optimization method .brent s optimization combines parabolic interpolation with golden ratio search for faster convergence .if the parabolic interpolation fails , the search falls back to the golden ratio search . , , , , and single competitor with uniform random bid in ] .pay per click ( ) is set to ] by the rt and static bidders ., title="fig:",width=291,height=226 ] the next sets of experiment in figure [ fig - normal - robust ] further relax assumptions on competing bidders . the group ( g ) has competing bidders having gaussian bid distributions instead of the uniform random distribution in the previous experiments ; and group ( r ) evaluates the robustness of rt bidding against violation of assumptions . like the uniform distribution , for the gaussian experiments in group ( g ) as well , the rt bidder outperforms the competitors .the profits are higher than the uniform distributions in figure [ fig - biddingprofitbasic - a ] group ( 10 ) , since the lower entropy of gaussian distribution is easier to optimize against . 
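A sketch of the multi-start optimization described above: a coarse scan of the bid range followed by refinement with SciPy's bounded Brent routine around the most promising candidates, mirroring the restart strategy used to escape local optima of the non-convex objective. The `expected_profit` function below is only a qualitative stand-in that exhibits the two competing effects discussed earlier (higher bids win and convert more often but pay more per impression); it is not the paper's closed-form objective.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

def expected_profit(bid, ctr=0.02, cpc=0.50, visits=10_000, needed=100,
                    penalty=50.0, n_competitors=3):
    """Stand-in objective with the qualitative shape described in the text.
    NOT the paper's expected-profit expression."""
    p_win = np.clip(bid, 0.0, 1.0) ** n_competitors   # uniform competitor bids on [0, 1]
    p_click = ctr * p_win
    margin = visits * p_win * (ctr * cpc - bid)       # expected revenue minus payments
    p_miss = binom.cdf(needed - 1, visits, p_click)   # probability the guarantee fails
    return margin - penalty * p_miss

def optimize_bid(objective, b_min=0.0, b_max=1.0, n_grid=50, n_refine=5):
    """Coarse grid scan, then bounded-Brent refinement around the best cells,
    mimicking a multi-start strategy that avoids local-optimum traps."""
    grid = np.linspace(b_min, b_max, n_grid)
    vals = np.array([objective(b) for b in grid])
    best_bid, best_val = None, -np.inf
    for i in np.argsort(vals)[-n_refine:]:            # refine the top candidates
        lo = grid[max(i - 1, 0)]
        hi = grid[min(i + 1, n_grid - 1)]
        res = minimize_scalar(lambda b: -objective(b), bounds=(lo, hi),
                              method="bounded")
        if -res.fun > best_val:
            best_bid, best_val = res.x, -res.fun
    return best_bid, best_val

print(optimize_bid(expected_profit))
```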
for the robustness experimentsif group ( r ) , the rt and static bidders assume uniform distribution in ] .for every deal , the require clicks is set uniform random between zero and the maximum value shown in the -axis to have different required clicks values for different deals . for selecting the best ad in the group for an impression, first the bids are optimized using the real time bidder .these optimal bids are used in equation [ eqn - expectedtruevalue ] for real time selection , and to compute private value ( i.e. ) of the static selection . to separate improvement in profit by selection from improvement by bidding, the proposed rt bidder is used for bidding after both the static and rt selections .the mean realized profits are shown in figure [ fig - extensions - a ] . when the required clicks is zero , the real time bidding gives the same optimal profit as the static bidding ( keep in mind that the static selection is optimal when require clicks is zero ) . for higher values of required clicks ,the real time bidder gives considerably higher profits , with percentage of increase in profit increasing with required clicks .the profit swings at larger values of required clicks is due to the random factors in assignment of required clicks .the admission control proposed in section [ subsec - admssibility ] is evaluated by comparing mean deal profits against the profits without admission control .similar to selection , bids are optimized and substituted in equation [ eqn - admissibility ] .the deals giving positive expected profits are passed to the bidder , and mean profits are plotted .figure [ fig - extensions - b ] shows that admission control improves profits by more than six times for some values of required clicks .the profit increase for both static and rt bidders , showing effectiveness of admission control independent of the bidding method ( like the previous experiments rt bidder performs considerably better than static ) . at the ads in the click log have positive expected profits .the mean profit no longer decreases monotonically , as the admission control eliminate more low profit ads with increased .further the total profits of rt bidder with and without admissibility is almost exactly the same .this shows that there are no false negatives removed .the admission control does not increase the total profits , because for the ads with negative expected profits , the real time bidder will bid zero making the losses to zero .with admissibility , the bidder incur the same profit from much lesser number of ads and user visits , hence he can use the remaining user visits to sell other ads .an emerging category of the online ads are the group - buy deals requiring minimum number of purchases . for an advertiser or intermediary selling these deals ,optimizing bids is necessary for maximal profits .existing bidding strategies are sub - optimal for these deals , as they do not consider event minimum group - size guarantees and expiry timings . to this end, we propose a real time bidding strategy for guaranteed deals .we derive the expected profits as a function of the dynamic and static parameters of the deals .these expected profits are shown to be non - convex , and numerically optimized against the bid values . to satisfy the stringent time constraints of online bidding, we use several approximations and running time optimizations . 
exploiting the generality of the proposed formulation ,we extend the solution to related problems of deal selection for bidding , admissibility , selection for non - bidding scenarios and real time bidding of non - guaranteed ads . our empirical comparisons with base adaptive and the existing static strategies on a multi - million click log show significant profit improvements .further our evaluations show acceptable running time and robustness against the violation of assumptions .evaluations of extensions show considerable profit improvement by the proposed deal selection and admissibility .10 daily deals rescue local - ad market , wall street journal , june 14 , 2011 .k. anand and r. aron .group buying on the web : a comparison of price - discovery mechanisms ., pages 15461562 , 2003 . c. boutilier , d. parkes , t. sandholm , and w. walsh .expressive banner ad auctions and model - based online optimization for clearing . in _ proceedings of aaai _ , 2008 .n. buchbinder , k. jain , and j. naor .online primal - dual algorithms for maximizing ad - auctions revenue . , pages 253264 , 2007 .t. chakraborty , e. even - dar , s. guha , y. mansour , and s. muthukrishnan .selective call out and real time bidding . , 2010 .j. chen , x. chen , and x. song .bidder s strategy under group - buying auction on the internet ., 32(6):680690 , 2002 .y. chen , p. berkhin , b. anderson , and n. devanur .real - time bidding algorithms for performance - based display ad allocation . in _ proceedings of kdd _ , 2011 .p. dasgupta , p. hammond , and e. maskin .the implementation of social choice rules : some general results on incentive compatibility ., 46(2):185216 , 1979 .d. easley and j. kleinberg . .cambridge univ pr , 2010 .a. ghosh , p. mcafee , k. papineni , and s. vassilvitskii .bidding for representative allocations for display advertising ., pages 208219 , 2009 .m. grabchak , n. bhamidipati , r. bhatt , and d. garg .adaptive policies for selecting groupon style chunked reward ads in a stochastic knapsack framework . in_ proceedings of www _ , pages 167176 .acm , 2011 .v. krishna . . academic press , 2009a. mehta , a. saberi , u. vazirani , and v. vazirani .adwords and generalized online matching ., 54(5):22es , 2007 .m. richardson , e. dominowska , and r. ragno .predicting clicks : estimating the click - through rate for new ads . in _ proceedings of www _, pages 521530 , 2007 .e. vee , s. vassilvitskii , and j. shanmugasundaram .optimal online assignment with forecasts . in _ proceedings of ec _ , pages 109118 .acm , 2010 .* cost : * at time an amount equal to is paid for the impressions .the future expected cost is the expected payment till .let denotes the total number of displays till , these conditional expectations can be expanded as , \nonumber \\ & = & \sum_{j=1}^{u_t } j \left[\frac{p((d = j)\wedge g)}{p(g ) } p(g ) + \right .\nonumber \\ & & \left .p ( \neg g ) \frac{p((d = j)\wedge \neg g)}{p(\neg g)}\right ] \nonumber \\ & = & \sum_{j=1}^{u_t } j \left[p\left((d = j)\wedge g \right ) + p\left ( ( d = j)\wedge \neg g \right)\right ] \nonumber \\ & = & \sum_{j=1}^{u_t } j p(d = j ) \nonumber\end{aligned}\ ] ] since number of impressions follows a binomial distribution with success probability equal to probability of display , .hence the expected cost is * revenue : * revenue is conditional on g , as revue in the event of is zero . 
at time , total expected revenue is the sum of revenues of already realized clicks and the revenue of the expected clcisk till .let denotes the future expected revenue till , let denotes the number of clicks till , total expected revenue is . as the experiments are bernaulli trials with success ( conversion ) probability of , on displaying the ad , the ad may get conversioned with a probability equal to , and will not be conversioned with a probability equal to .hence the expected change in profit given a display is ( we ignore the minute possible change in optimal bid in a single display ) is , + ( 1-\mu)\rho_i \left[c_t \phi ( r_t , u_t-1,b_t,\mu)+ \right . \nonumber \\ & & \!\!\!\!\!\!\!\!\!\!\left .\theta ( r_t , u_t-1,b_t,\mu ) \right]- \left(\sum_{j=1}^{t-1}\psi_jb_j + b_t+ p_d ( u_t-1)b_t \right ) \nonumber\end{aligned}\ ] ] substituting these values in equation [ eqn - expectedvalueincrease ] we get as , \nonumber \\ & = & \mu \rho_i \left[c_t \left(\begin{array}{c } u_t-1\\ r_t -1 \end{array}\right ) ( \mu p_d)^{r_t-1 } ( 1-\mu p_d)^{(u_t - r_t)}+ \right .\nonumber \\ & & \phi ( r_t-1 , u_t-1,b_t,\mu ) + \nonumber \\ & & \left .( r_t -1 ) \left(\begin{array}{c } u_t-1 \\ r_t -1 \end{array}\right ) ( \mu p_d)^{r_t-1 } ( 1-\mu p_d)^{(u_t - r_t ) } \right ] \nonumber \\ & = & \mu \rho_i \left[(c_t+ r_t -1)\left(\begin{array}{c } u_t-1 \\ r_t -1 \end{array}\right ) ( \mu p_d)^{r_t-1 } ( 1-\mu p_d)^{(u_t - r_t ) } \right .\nonumber \\ & & \left .+ \phi ( r_t-1 , u_t-1,b_t,\mu ) \right ] \nonumber \end{aligned}\ ] ] | group - buying ads seeking a minimum number of customers before the deal expiry are increasingly used by the daily - deal providers . unlike the traditional web ads , the advertiser s profits for group - buying ads depends on the time to expiry and additional customers needed to satisfy the minimum group size . since both these quantities are time - dependent , optimal bid amounts to maximize profits change with every impression . consequently , traditional static bidding strategies are far from optimal . instead , bid values need to be optimized in real - time to maximize expected bidder profits . this online optimization of deal profits is made possible by the advent of ad exchanges offering real - time ( spot ) bidding . to this end , we propose a real - time bidding strategy for group - buying deals based on the online optimization of bid values . we derive the expected bidder profit of deals as a function of the bid amounts , and dynamically vary bids to maximize profits . further , to satisfy time constraints of the online bidding , we present methods of minimizing computation timings . subsequently , we derive the real time ad selection , admissibility , and real time bidding of the traditional ads as the special cases of the proposed method . we evaluate the proposed bidding , selection and admission strategies on a multi - million click stream of 935 ads . the proposed real - time bidding , selection and admissibility show significant profit increases over the existing strategies . further the experiments illustrate the robustness of the bidding and acceptable computation timings . |
over the past 30 years , much effort has been devoted to calculations of the renormalised stress - energy tensor in ground states of quantum fields on stationary background spacetimes .many analogous calculations have been made in flat spacetime equipped with reflecting boundaries , in connection with the casimir effect .however , it would be fair to say that only limited qualititative insight has been gained .for example , the energy density is sometimes positive , and sometimes negative and there is no known way of predicting the sign in any general situations without performing the full calculations ( see , however , for a situation where the sign can be predicted ) .at least analytically , these calculations are restricted to cases exhibiting a high degree of symmetry .the aim of this paper , and a companion paper , is to point out that there are situations in which one may gain some qualitative insight into the possible magnitude of the stress - energy tensor based on simple geometric considerations .the situation we study in this paper arises when a spacetime contains a subspacetime which is isometric to ( a subspacetime of ) another spacetime , which will usually have nontrivial symmetries . by using quantum energy inequalities ( qeis ) together with the locality properties of quantum field theory, we are then able to use information about the second ( symmetric ) spacetime to yield information about the stress - energy tensor of states on the first spacetime ( which need have no global symmetries ) in the region where the isometry holds .we will work on globally hyperbolic spacetimes in this paper , deferring the issue of spacetimes with boundary to a companion paper .as well as setting out the theory behind the method , we will demonstrate it in several locally minkowskian spacetimes .marecki has also illustrated our approach , by considering the case of spacetimes locally isometric to portions of exterior schwarzschild .also begun here for the free massless scalar field is a similar discussion for conformally related regions of two - dimensional spacetimes . in a separate paperwe will extend this to the generalised maxwell field in higher dimensional manifolds related by conformal diffeomorphisms . to be more specific , consider a globally hyperbolic spacetime , consisting of a manifold of dimension , a lorentzian metric with signature , and choices of orientation and time - orientation ( which , together , are required to fulfill the demands of global hyperbolicity ) .is globally hyperbolic if it contains a cauchy surface , i.e. , a subset intersected exactly once by every inextendible timelike curve .the globally hyperbolic spacetimes are the most general class of spacetimes on which quantum fields are typically formulated , but one should be aware that manifolds with boundary are not included . ]suppose an open subset of , when equipped with the metric and ( time-)orientation inherited from , is a globally hyperbolic spacetime in its own right .if , moreover , any causal curve in whose endpoints lie in is contained completely in , then we will call a _ causally embedded globally hyperbolic subspacetime _( c.e.g.h.s . ) of .our main interest will be in the situation where a c.e.g.h.s . of is isometric to a c.e.g.h.s . of a second globally hyperbolic spacetime , with the isometry also respecting the ( time-)orientation .( we speak of a _ causal isometry _ in this case . 
) by the principle of locality , we expect that any experiment conducted within should have the same results as the same experiment [ i.e. , its isometric image ] conducted in .no observer in should be able to discern , by such local experiments that she does not , in fact , inhabit ; in particular , energy densities in should be subject to the same qeis as those in .we will demonstrate explicitly that these expectations are met by the qeis we employ .( -15,1)(15,15 ) ( -8.5,2) ( -8.5,8.5) ( 8,2) ( 10.5,8.5) ( 8,5)(7,8)(9,10)(8,12.5 ) ( 7,8.75) ( -5,9)(-1,10)(0,10)(4.6,9 ) ( -0.2,11) among our results are the following , which we state for the case of a klein gordon field of mass in four dimensions : _ example 1 : _ suppose a timelike _ geodesic _ segment of proper duration in a globally hyperbolic spacetime can be enclosed in a c.e.g.h.s. which is causally isometric to a c.e.g.h.s . of four - dimensional minkowski space as shown in fig .[ fig : mink_embed ] .then any state of the klein gordon field ( of mass ) on obeys where the constant ( if , one may obtain even more rapid decay ) ._ example 2 : _ suppose a globally hyperbolic spacetime is stationary with respect to a timelike killing field and admits the smooth foliation into constant time surfaces .suppose the metric takes the minkowski form ( w.r.t . some coordinates ) on for some subset of with nonempty interior .( we may suppose that has been taken to be maximal . ) for any in the interior of , let be the radius of the largest euclidean -ball which can be isometrically embedded in , centred on , as in fig .[ fig : example2 ] .then any stationary hadamard state -point functionsare invariant under translations along the killing flow : , where is the group of isometries associated with the killing field . ] on obeys the bound for any , where is the unit vector along .( -7,0)(20,11 ) ( 0,0)(0,1.6 ) ( 0,1.6)(0,8 ) ( 0,8)(0,11 ) ( 0,11.5) ( 0,3)(4,3.9)(8,3.5)(10,4)(8.75,5)(13,6.5)(11,7)(8.5,6.2)(6,7)(4.5,6.7)(2,7.25)(0,5.5)(-2,4)(-1.75,3)(0,3 ) ( 4,5)(3.3,1.1 ) ( 4,5)(.2,.066 ) ( 4,5)(1.5,4.4 ) ( 4.5,5) ( 2.7,5.2) ( 3,1) ( -.5,3.8) _ example 3 :_ suppose is a uniformly accelerated trajectory ( parametrised by proper time ) with proper acceleration , and suppose can be enclosed within a c.e.g.h.s . of which is causally isometric to a c.e.g.h.s . of four - dimensional minkowski space .then , for any hadamard state on , and any smooth compactly supported real - valued , with , note the remarkable fact that the right - hand side is precisely the expected energy density in the rindler vacuum state along the trajectory with constant proper accleration .in particular , if the energy density in some state is constant along , it must exceed or equal that of the rindler vacuum .we emphasize that our derivation does not involve the rindler vacuum , but only the minkowski vacuum state two - point function and the qeis .variants of these results hold in other dimensions , and also for other linear field equations such as the maxwell and proca fields ( which we will treat elsewhere ) . 
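Example 2 turns a purely geometric quantity, the radius of the largest Euclidean ball that fits inside the locally Minkowskian region, into an a priori bound on the stationary energy density. The sketch below evaluates such a bound for a toy box-shaped region; since the numerical coefficient in the bound is not recoverable from this text, it is left as an explicitly named placeholder rather than guessed.

```python
import numpy as np

# Placeholder: the dimension-dependent coefficient in the bound
# rho >= -C4 / r(x)**4 is stated in the text but lost in this extraction,
# so it is kept as a named parameter instead of an invented value.
C4_PLACEHOLDER = 1.0

def largest_inscribed_ball_radius(x, box_min, box_max):
    """Radius of the largest Euclidean ball centred at x that fits inside an
    axis-aligned box D = [box_min, box_max] (a toy stand-in for the maximal
    locally Minkowskian region on a constant-time slice)."""
    x, box_min, box_max = map(np.asarray, (x, box_min, box_max))
    return float(min(np.min(x - box_min), np.min(box_max - x)))

def energy_density_lower_bound(x, box_min, box_max, c4=C4_PLACEHOLDER):
    """A priori bound on the stationary energy density at x: -c4 / r(x)**4."""
    r = largest_inscribed_ball_radius(x, box_min, box_max)
    if r <= 0.0:
        return -np.inf        # x on or outside the boundary: no constraint
    return -c4 / r ** 4

# Deep inside the region the bound is restrictive; near the boundary it weakens.
print(energy_density_lower_bound([0.5, 0.5, 0.5], [0, 0, 0], [1, 1, 1]))
print(energy_density_lower_bound([0.05, 0.5, 0.5], [0, 0, 0], [1, 1, 1]))
```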
to prepare for our main discussion, it will be useful to make a few general remarks about quantum energy inequalities ( qeis ) , also often called simply quantum inequalities ( qis ) .qeis have been quite intensively developed over the past decade , following ford s much earlier insight that quantum field theory might act to limit the magnitude and duration of negative energy densities and/or fluxes , thereby preventing macroscopic violations of the second law of thermodynamics ( see for rigorous links between qeis and thermodynamical stability ) .detailed reviews of qeis may be found in .qeis take various forms , but we will distinguish two basic types : absolute qeis and difference qeis .an absolute qei bound consists of a set of _ sampling tensors _ ,i.e. , second rank contravariant tensor fields against which the renormalised stress - energy tensor will be averaged , a class of states of the theory ( which may be chosen to have nice properties ) and a map such that for all states be convex ( i.e. , if and are in then so is for all ] .but this vanishes for hypersurface orthogonal ; see , e.g. , appendix c.3 in . ] ) then , as shown in , the qei eq. becomes ^au^bg(\tau)^2\,d\tau \ge -\int_{-\infty}^{\infty } du\ , \left| \widehat{g}(u )\right|^2 q_{\gamma,\omega_0}(u)\ , , \label{eq : qwei_withq}\ ] ] where is a positive polynomially bounded function defined by additionally , if is a ground state ( as was the case in ) one may show that for , and so the function is supported on the positive half - line only .more generally , it is always the case that decays rapidly as , so is always well - defined .technically , is a measure , and may have -function spikes which would exhibit themselves as discontinuities in .since we define as an integral over the open interval , it is continuous from the left .a similar analysis holds for the qnei eq ., provided that the null vector field has vanishing lie derivative along , , because we have for some distribution . to conclude this section , we mention that more general qei bounds may be constructed along similar lines , based on other decompositions of the contracted stress energy tensor as a sum of squares .this includes bounds averaged over spacetime volumes , see , e.g. .however we will not need this generality here , and observe only that one would need to ensure that such decompositions are made in a canonical fashion to obtain a locally covariant bound .in this section we develop some simple consequences of the qeis described in secs . [sect : lcaqeis_examples ] and [ sect : lcdqeis_examples ] , specialised to minkowski space .these will then be utilised in more general spacetimes using the local covariance properties of these bounds .our results are obtained by converting qei bounds into eigenvalue problems which can then be solved .for the most part , we will consider the scalar field of mass on -dimensional globally hyperbolic spacetimes for ; special features of massless fields in two dimensions will be treated in sec . [sect : gemassless2d ] .accordingly , let be a -dimensional globally hyperbolic spacetime , and let denote -dimensional minkowski space . as illustrated in fig .[ fig : mink_embed ] , let be a smooth , future - directed timelike curve , parametrised by proper time , and assume may be enclosed in a c.e.g.h.s . of so that is the image of a c.e.g.h.s . 
of under a causal isometric embedding .thus the curve is the image of a curve in ; because is an isometry , is also a proper time parametrisation , and has the same proper acceleration as for each . given any , define a sampling tensor on minkowski space by on smooth covariant rank - two tensor fields on , where is the velocity of .[ recall that means that is a real - valued smooth function whose support is compact , connected and contained in , and that has no zeros of infinite order in the interior of its support .] under the isometry , is mapped to , with action where is now any smooth covariant rank- tensor field on .applied to the stress - energy tensor , therefore provides a weighted average of the energy density along .our aim is to place constraints on these averages using the locally covariant difference qwei given in theorem [ thm:4d_qwei ] . by local covariance ,[ cor : diffmink ] guarantees that where is the minkowski vacuum state. we will be particularly interested in the least upper bound of the energy density along , since the energy density is smooth , this value must be the maximum value taken by the field on the closure of the track of . using the trivial estimate for each , we have and , putting this together with eq ., we obtain the inequality which holds , in the first place , for all . in the next two subsections we will analyse this in two special cases : namely , inertial motion and uniform acceleration .when is inertial the qwei of theorem [ thm:4d_qwei ] takes the simpler form described in eqs . and above : where and the constant is , where is the area of the unit -sphere .( notation varies slightly from that used in . ) for all , it is clear that for all , while one may show that on the same domain and that the maximum of this expression on occurs at the ( unique ) solution to , which is numerically .now , so we find that on as claimed . ] .using these results , we may estimate eq . rather crudely by with for and . note that we have made two changes here :( a ) has been replaced by unity ; ( b ) the lower integration limit has been replaced by zero . we now specialise to even dimensions , .because is real - valued , is even and we may write where is the differential operator and we have used parseval s theorem , and the fact that vanishes outside . inserting the above in eq . , we have shown that obeys the inequality for all .the class is inconvenient to work with directly ; fortunately , the same inequality holds for general , as we now show .first , any is the limit of a sequence of for which and in ( see appendix [ appx : smooth ] ) . applying the above inequality to each , we may take the limit to conclude that it holds for as well .having established the result for arbitrary real - valued , we extend to general complex - valued by applying it to real and imaginary parts separately , and then adding . accordingly the inequality eq .holds for all . integrating by parts times , and noting that no boundary terms arise because vanishes near the boundary of , eq . may be rearranged to give where denotes the usual -inner product on , and the operator on .our aim is now to minimise the right - hand side over the class of at our disposal ( excluding the identically zero function ) . nowthe operator is symmetric is symmetric on a domain if for all , which shows that the adjoint agrees with on , but does not exclude the possibility that has a strictly larger domain of definition than . ] and positive , i.e. , for all . 
by theorem x.23 in ) , the solution to our minimisation problem is the lowest element of the spectrum of , the so - called _ friedrichs extension _ of .this is a self - adjoint operator with the same action as on , but which is defined on a larger domain in . in particular, every function in the domain of obeys the boundary condition at .( see , where the technique of reformulating quantum energy inequalities as eigenvalue problems was first introduced , and which contains a self - contained exposition of the necessary operator theory . )one may think of this as a precise version of the rayleigh ritz principle .once we have determined , we then have the bound so the problem of determining the lower bound is reduced to the analysis of a schrdinger - like equation , subject to the boundary conditions mentioned above .the two examples of greatest interest to us are and , representing two- and four - dimensional spacetimes . starting with ,let us suppose that is the interval for some .we therefore solve subject to dirichlet boundary conditions at ; as is well known , the lowest eigenvalue is and corresponds to the eigenfunction .[ a possible point of confusion is that , if is extended so as to vanish outside , it will not be smooth .however there is no contradiction here : the point is that the infimum is not attained on . ]thus we have because [ by convention , the zero - sphere has area .we may infer , without further calculation , that the bound must be zero if , because ( returning to the ritz quotient eq . ) , the infimum over all functions in must be less than or equal to the infimum over all functions in for any bounded ( a similar argument applies to the semi - infinite case ) .thus can be no greater than zero ; on the other hand , the minimum can not be negative either , because the original functional is nonnegative . accordingly eq .holds in all cases , with equal to the length of the interval . in the four - dimensional case , we proceed in a similar way , solving subject to at . in the case where is bounded , [ without loss of generality ], the spectrum consists only of positive eigenvalues .it is easy to see that the solutions to the eigenvalue equation are linear combinations of trigonometric and hyperbolic functions .the lowest eigenfunction solution which obeys the boundary conditions is where is the minimum positive solution to since , we obtain if is semi - infinite or infinite , we may argue exactly as in the two - dimensional case that the bound vanishes , in agreement with the formal limit .clearly this approach will give similar results in any even dimension , with a consequent increase in complexity in solving the eigenvalue problem .nonetheless , it is clear that the resulting bound will always scale as . in fact , this is even true in odd spacetime dimensions , where the eigenvalue problem would involve a nonlocal operator and is not easily tractable .we summarise what has been proved so far in the following way .[ prop : emaxmassive ] let be a globally hyperbolic spacetime of dimension and suppose that a timelike geodesic segment of proper duration may be enclosed in a c.e.g.h.s . of which is causally isometric to a c.e.g.h.s . of minkowski space ,then for all hadamard states of the klein gordon field of mass on .the constants depend only on .in particular , , while .* remark : * when the field has nonzero mass , we can expect rather more rapid decay than given by this estimate . to see why , return to the argument leading to eq . 
.if we reinstate as the lower integration limit , we have suppose for simplicity that .if we write , for , a change of variables yields where the nonnegative quantity decays rapidly as , owing to the rapid decay of .thus the estimate eq .is quite crude when ; it is hoped to return to this elsewhere .equipped with prop .[ prop : emaxmassive ] , we may now address the first two examples presented in the introduction .first , the proposition asserts that no hadamard state can maintain an energy density lower than for proper time along an inertial curve in a minkowskian c.e.g.h.s . of .in particular , this justifies the claim made in example 1 in the introduction .our bounds clearly depend only on , which in turn is controlled by the size of the minkowskian region . by choosing the curve and in an appropriate way, fairly simple geometrical considerations can thus provide good _ a priori _ bounds on the magnitude and duration of negative energy density .a good illustration is the following ( which includes example 2 in the introduction ) .suppose that a -dimensional globally hyperbolic spacetime with metric is stationary with respect to timelike killing vector and admits the smooth foliation into constant time surfaces .suppose there is a ( maximal ) subset of , with nonempty interior , for which takes the minkowski form on .choose any point in , with and suppose that we may isometrically embed a euclidean -ball of radius in , centred at ( see fig .[ fig : example2 ] ) .then the interior of the double cone is a c.e.g.h.s . of which is isometric to a c.e.g.h.s .of minkowski space , and contains an inertial curve sgement parametrised by the interval of proper time .any hadamard state on therefore obeys along .writing for the minimum distance from to the boundary of , it is clear that this inequality holds for all and hence , by continuity , for . moreover , if the state is stationary [ for example , if it is the ground state ] , then the energy density takes a constant value along and we obtain for any , where is the unit vector along . in this waywe obtain a universal bound on the fall - off of negative energy densities in such spacetimes , which could be used to provide a quantitative check on exact calculations , if these are possible , or to provide some precise information in situations where they are not .the bound is of course very weak close to the boundary of : this does not imply that the energy density diverges as this boundary is approached , of course , but merely indicates that it would not be incompatible with the quantum inequalities for there to exist geometries on for which the stationary energy density just outside might be very negative . to conclude this subsection ,let us briefly discuss the null - contracted qei eq . in the present context . for simplicity , we restrict ourselves to four dimensions .suppose is a nonzero null vector field which is covariantly constant along , so , in particular , is also constant on .our sampling tensor is now defined to be with action on smooth covariant rank- tensor fields on . in exactly the same way as for the qwei discussed above , we may apply local covariance to the qnei of thm .[ thm:4d_qnei ] , so yielding where , as shown in , for the massless scalar field ( and in fact this bound also constrains the massive field too ) .this differs from the corresponding qwei by a factor of [ recall that ] , so we may immediately deduce the following result . 
[ prop : nullmax4d ] let be a four - dimensional globally hyperbolic spacetime and suppose that a timelike geodesic segment of proper duration may be enclosed in a c.e.g.h.s . of which is causally isometric to a c.e.g.h.s .of minkowski space. if is a covariantly constant null vector field on then we have for any hadamard state of the klein gordon field , where .this result justifies the claim made above eq .( 38 ) of , where an application is presented .we now turn to the case where has uniform constant proper acceleration . for simplicitywe consider only massless fields in four dimensions , but expect similar results in more general cases .we need to estimate where is supported on the uniformly accelerated worldline in .it will be convenient to drop the tilde from and the subscript from . without loss of generality, we may assume is parametrized so that where .the first step in our calculation is to set up an orthonormal tetrad field surrounding the worldline , which , satisfies the two properties required : namely , that agrees with the velocity on , and that the frame is invariant under fermi walker transport along .the required bound is then given by where we evaluate this quantity in stages , beginning by noting that w_{{\boldsymbol{m}}}^{(2)}(x , x')\ , , \label{eq : rindler_edensity}\end{aligned}\ ] ] where where is the limit ( in the distributional sense ) as of thus we are in the situation of eq ., and the bound becomes where to obtain the required fourier transform , we first use contour integration ] .the contour encloses a single pole , of fourth order , at and the contribution of the ` short ' sides vanishes as .one also exploits the fact that the contributions from the two ` long ' sides are equal up to a factor of . ] to find which decays exponentially as , provided . taking the limit it is easy to check that note that this fourier transform has support on the whole real line , not just the positive half line .thus our aim is now to estimate in order to obtain a bound which may be analysed by eigenvalue techniques as in the previous subsection .beginning in the half - line , we may estimate since is everywhere increasing . on the other hand , for , we may split the integral into and the contribution from ] . ] both the coefficient and the numerical evaluation of the lower bound are plotted in fig .[ fig : misner_bnd ] .it is obvious that , and thus the energy density obey the qei constraint for all values of .the bound eq .is still weaker .the rindler spacetime is the `` right wedge '' of minkowski space , i.e. 
, the region in inertial coordinates .we may also make the coordinate transformation x = \xi \cosh(\eta ) , & \qquad\qquad & z = z , \end{array}\ ] ] to obtain the metric in the form with coordinate ranges , .lines of constant , when mapped into minkowski space , are worldlines for observers undergoing constant proper acceleration .rindler spacetime is static with respect to ( corresponding to lorentz invariance in the plane ) and is invariant under euclidean transformations of the plane .clearly any line of constant meets the conditions of prop .[ prop : emax_4d_accel ] and we may immediately read off that any static hadamard state on must obey where is the unit vector parallel to .in particular , this provides a constraint on the energy density in the ground state ( which is hadamard ) .this may also be computed exactly : it was first computed for the conformally coupled scalar field by candelas and deutsch and one can easily generalize their results to the minimally coupled scalar field to obtain , replace the in the numerator with . ] which is exactly the lower bound given above .thus , remarkably , the rindler ground state saturates the qei constraints , which were obtained using local covariance and the minkowski vacuum , and nowhere involved .diagram showing rindler spacetime ( with the two perpendicular space dimensions suppressed ) embedded into minkowski spacetime .the dashed hyperbolic line , the worldline of a constantly accelerating observer , is the image of a constant observer s worldline in rindler coordinates .the grey diamond is a causal region that can be isometrically identified between the two `` different '' coordinate systems . ]let us also examine how an upper bound might be obtained .let in coordinates and set as usual .we consider sampling along , with sampling tensors of form for . since the energy density is constant along , the upper bound of cor .[ cor : diffmink ] gives the right - hand side can be read off from the difference qei derived by pfenning for the electromagnetic field , because the corresponding bound for the scalar field is exactly half of the electromagnetic expression : , rather than proper time : our is related to the of by . ] next consider scaling the test function , replacing by .we find , considering the scaling behavior of the above expression , .thus we find consistency with the known fact that the expectation value of the rindler ground state is bounded above by zero , i.e. .in this paper we have initiated the study of interrelations between quantum energy inequalities and local covariance .we have formulated definitions of locally covariant qeis , and shown that existing qeis obey them , modulo small additional restrictions ( sec .[ sec : qeis_and_lc ] ) . the main thrust of our work has been directed at providing _ a priori _ constraints on renormalised energy densities in locally minkowskian regions , accomplished in sec . [sec : general_apps ] .the simple geometric nature of these bounds makes them easy to apply in practice , and a number of future applications are envisaged . 
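As a numerical illustration of the eigenvalue reformulation used earlier, where the sharpest bound is the lowest point of the spectrum of a Schrödinger-like operator with Dirichlet boundary conditions, the following sketch recovers the textbook result that the lowest Dirichlet eigenvalue of -d^2/dx^2 on an interval of length tau0 is (pi/tau0)^2. This is the quantity that fixes the 1/tau0^2 scaling of the simplest two-dimensional bound; the finite-difference discretization is our own minimal check, not part of the paper's derivation.

```python
import numpy as np

def lowest_dirichlet_eigenvalue(tau0=1.0, n=800):
    """Smallest eigenvalue of -d^2/dx^2 on (0, tau0) with Dirichlet boundary
    conditions, via a second-order finite-difference discretization.
    The exact value is (pi / tau0)**2."""
    h = tau0 / (n + 1)
    main = 2.0 * np.ones(n) / h**2
    off = -1.0 * np.ones(n - 1) / h**2
    tridiag = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(tridiag)[0]

tau0 = 2.0
print(lowest_dirichlet_eigenvalue(tau0))   # numerical estimate
print((np.pi / tau0) ** 2)                 # exact lowest eigenvalue
```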
in particular , we will discuss applications to the casimir effect in a companion paper ; at the theoretical level , it is possible to place the present discussion in the categorical language of , and this will be done elsewhere .equally important are the specific calculations reported in sec .[ sec : specific_apps ] .here we saw that , in some situations , the qei bounds give best - possible constraints on the energy density , and that typical ground state energy densities are not over - estimated by the qei bound by more than a factor of about at worst ( in the examples so far studied ) . finally , although we confined our attention largely to locally minkowskian spacetimes in secs .[ sec : general_apps ] and [ sec : specific_apps ] , we emphasise again that other interesting cases may be studied using our general formalism , as , for example , in the work of marecki on spacetimes with locally schwarzschild subregions .in this appendix we describe the construction of the quantised klein gordon field within the algebraic approach to quantum field theory , and explain the construction of pulled back states used in sec .[ sect : localcovariance ] .the free scalar field of mass may be quantised on any globally hyperbolic spacetime in the sense that one may construct a complex unital -algebra whose elements may be interpreted as ` polynomials in smeared fields ' .a typical element of the algebra is a complex linear combination of the identity and a finite number of terms each of which is a finite product of a number of objects where is a test function ( i.e. , smooth and compactly supported ) on .the algebra also satisfies a number of relations : 1 . 2 . 3 . 4 .= ie_{{{\boldsymbol{m}}}}(f , g){\leavevmode\hbox{\rm{\small1\kern-3.8pt\normalsize1}}}$ ] for all test functions on and complex scalars , where is the advanced - minus - retarded fundamental solution to on .the first two axioms are necessary for compatibility with the idea of as a smeared hermitian field ; the third expresses the field equation in ` weak ' form ; the fourth expresses the commutation relations. now let be a causal isometric embedding of into .any test function on now corresponds to a test function on , defined by for and otherwise .we may use this to define a map between and such that 1 . 2 . for all test functions on 3 . extends to general elements of as a -homomorphism , i.e. , is linear and obeys and for all . in the body of the textwe have used the notation for , relying on the context for the appropriate meaning ; here , it is convenient to distinguish the two maps .one must check that the last statement is compatible with the axioms stated above the only nontrivial one is the commutation relation , where the causal nature of plays a key role and guarantees that is well - defined .what needs to be proved boils down to checking that for all test functions on .this equivalence is proved as follows .writing for the advanced ( ) and retarded green functions on , solves the inhomogeneous klein gordon equation on with source and support in .because is a causal isometry , the pull - back solves the inhomogeneous klein gordon equation on with source and support in ; by uniqueness of solution , we have . accordingly and the required result follows . 
in the algebraic approach we have been pursuing , a state of the quantum field on is a linear map from to the complex numbers , obeying and for any .one interprets as the expectation value of observable in state .in particular , each state yields a hierarchy of -point functions , i.e. , maps of the form we will restrict attention to those states whose corresponding -point functions are distributions .a state is hadamard if its two - point function has a particular singular structure which is determined by the local metric and causal properties of the spacetime .note that none of the structure introduced so far invokes any particular hilbert space representation of the theory .now suppose again that is a causal isometric embedding of into and let be a state on .we obtain a state on by for any ; that is , , where is the dual map to ( in the body of the text , we have written for ) .the -point functions are therefore related by it is useful to write this in ` unsmeared ' notation . let be the -point functions of . then the last equation becomes where the change of variables employed in the last step is justified by the fact that is an isometry .as this holds for all choices of , we may deduce that that is , the -point functions of are the pull - backs by of those of .it follows that if is hadamard then so too is , because the two - point function is simply pulled back under and the hadamard series is constructed from the local causal and metric structure which is preserved under .since the stress - energy tensor is renormalised by subtracting the first few terms of the hadamard series from the two point function , and then taking suitable derivatives before taking the coincidence limit ( making a further locally constructed correction to ensure conservation of the stress tensor ) , we have the following important consequence , which we isolate as a theorem .suppose is a causal isometric embedding of globally hyperbolic spacetime in a globally hyperbolic spacetime .any hadamard state of the massive klein gordon quantum field on induces a hadamard state of the same theory on , whose -point functions and renormalised expected stress - energy tensor are the pull - backs by of the corresponding quantities on .this fits in with the principle that one should not be able to tell , by local experiments , whether one is in or its image within the larger spacetime .it also justifies us in the abuse of notation perpetrated in sec .[ sect : localcovariance ] , where we wrote in place of , and ( dually ) in place of .let us conclude by briefly describing more of the structure set out by .the key is the observation that the globally hyperbolic spacetimes of given dimension form the objects of a category in which the morphisms are causal isometric embeddings .one may also consider a category of unital -algebras with injective unit - preserving -homomorphisms as morphisms .the association of a globally hyperbolic spacetime with the corresponding algebra is then shown to be a covariant functor between these categories and gives a precise meaning to the notion of ` the same field theory on different spacetimes ' ( and the same would be true even for theories not necessarily described in terms of a lagrangian ) .a similar functorial description may be given to the association of the state space of the theory , and quantum fields are reinterpreted as natural transformations between functors .we refer the reader to for full details .in this appendix , we calculate quantum weak energy inequalities for the massless 
, minimally coupled real scalar field in the two dimensional cylinder spacetime relative to the ground and thermal equilibrium states .we use the notation of sec .[ sec:2d_cyl ] .the kms state at inverse temperature has two - point function ( see , e.g. , eq . ( 2.43 ) of ) where and , and the sum converges in the distributional sense ( i.e. , after smearing each term with test functions , the resulting series converges and its sum depends continuously on the test functions ) .we exclude the zero mode as usual , regarding as a state on the derivative fields .the two - point function of the ground state is obtained as the zero temperature ( ) limit of this expression .we will be interested in the static curve , and employ the tetrad , , which is invariant under fermi walker transport along . following the procedure of sec .[ sect : lcdqeis_examples ] , we find and taking the fourier transform , we have and therefore obtain note that is supported on in the case , but otherwise on the whole of , albeit exponentially suppressed on the negative half line .to arrive at convenient qei bounds we estimate by where is the heaviside step function . to see that this estimate is valid , we note that is clearly increasing on the negative half - line , so it is valid to bound it by on ; the second term in the estimate arises by noting that for all . using this estimate again, we also have while , for where is the greatest integer _strictly _ less than .thus we have the estimate it then follows that , in the notation of sec .[ sec:2d_cyl ] , latexmath:[\[\begin{aligned } { { \mathcal q}}^{\rm weak}_{{\boldsymbol{c}}}({{\sf f}},\omega_{{{\boldsymbol{c}}},\beta } ) & = & \int_{-\infty}^\infty du \,|\widehat{g}(u)|^2 q_{\gamma,\omega_{{{\boldsymbol{c}}},\beta}}(u ) \nonumber\\ & \le & \frac{e^{\pi\beta / l}}{4l^2\sinh^3\pi\beta / l}\int_{-\infty}^0 |\widehat{g}(u)|^2\,du + \frac{1}{2\pi^2(1-e^{-2\pi\beta / l})}\int_0^\infty u^2 |\widehat{g}(u)|^2 \,du \\ & \le & \frac{\pi e^{\pi\beta / l}}{2l^2\sinh^3\pi\beta / l}\int_{-\infty}^\infty \frac{1}{2\pi(1-e^{-2\pi\beta / l})}\int_{-\infty}^\infty theorem and the fact that is even for real - valued to convert the integral over into one over . for the ground state, of course , this yields latexmath:[\[{{\mathcal q}}^{\rm weak}_{{\boldsymbol{c}}}({{\sf f}},\omega_{{{\boldsymbol{c } } } } ) \le \frac{1}{2\pi}\int_{-\infty}^\infty estimates are not very sharp , they have led to a very simple quantum inequality .in fact , for the ground state , this inequality is only times less restrictive than the optimal quantum inequality bound found in two dimensional minkowski spacetime .recall from sec . 
[sect : lcdqeis_examples ] that we define to be the set of smooth compactly supported real - valued functions on whose support is connected and which have no zeros of infinite order in the interior of that support ( equivalently , has no zeros in of infinite order ) .our aim is to prove the following result ._ proof : _ if is identically zero the result is trivial , so we assume henceforth that it is not , so is strictly positive .suppose the stated result is false , so there exists an such that for all .choose sufficiently large that exceeds the diameter of the support of .by hypothesis , for each the function has a zero of infinite order within its support ; since this zero must lie in the support of and is therefore a point at which vanishes , while takes the value .we may therefore choose such that for each and runs through the values ( not necessarily in order ) .using taylor s theorem with remainder at each , so which is a contradiction , since and must belong to the support of . this work was initiated at the erwin schrdinger institute for mathematical physics , vienna , during a program on quantum field theory in curved spacetime .we are grateful to the organizers of this program and to the institute for its hospitality and financial support .cjf further thanks the isaac newton institute , cambridge , for support during the final stages of the work , and the organisers of the programme on global problems in mathematical relativity .we also thank l.h .ford , a. higuchi , b.s .kay , j. loftin and t.a .roman for many illuminating discussions , and l. osterbrink and c.j .smith for comments on the manuscript .this work was partially supported by epsrc grant gr / r25019/01 to the university of york and by a grant from the us army research office through the usma photonics research center .mjp also thanks the university of york for a grant awarded under its `` research funding for staff on fixed term contracts '' scheme . | we begin a systematic study of quantum energy inequalities ( qeis ) in relation to local covariance . we define notions of locally covariant qeis of both ` absolute ' and ` difference ' types and show that existing qeis satisfy these conditions . local covariance permits us to place constraints on the renormalised stress - energy tensor in one spacetime using qeis derived in another , in subregions where the two spacetimes are isometric . this is of particular utility where one of the two spacetimes exhibits a high degree of symmetry and the qeis are available in simple closed form . various general applications are presented , including _ a priori _ constraints ( depending only on geometric quantities ) on the ground - state energy density in a static spacetime containing locally minkowskian regions . in addition , we present a number of concrete calculations in both two and four dimensions which demonstrate the consistency of our bounds with various known ground- and thermal state energy densities . examples considered include the rindler and misner spacetimes , and spacetimes with toroidal spatial sections . in this paper we confine the discussion to globally hyperbolic spacetimes ; subsequent papers will also discuss spacetimes with boundary and other related issues . |
for a long time quantum entanglement was only of philosophical interest and researchers were mainly focusing on addressing the questions that were related with the quantum mechanical understanding of various fundamental notions like reality and locality .however , for the last two decades world had seen that quantum entanglement is not only a philosophical riddle but also a reality as far as the laboratory preparation of entangled qubits are concerned .researches that were conducted during these decades were not all concerned about its existence but mostly about its usefulness as a resource to carry out information processing protocols like quantum teleportation , cryptography , superdense coding , and in many other tasks .it was subsequently evident from various followed up investigations that quantum entanglement plays a pivotal role in all these information processing protocols .therefore , understanding the precise nature of entanglement in bipartite and multiparty quantum systems has become the holy - grail of quantum information processing .however , the precise role of entanglement as a resource in quantum information processing is not fully understood and it was suggested that entanglement is not the only type of correlation present in quantum states . this is because lately some computational tasks were carried out even in the absence of entanglement .this provided the foundation to the belief that there may be correlation present in the system even in the absence of entanglement .hence , researchers redefined quantum correlation from the information theoretic perspective .this gave rise to various measures of quantum correlation , the predominant of them being quantum discord .though there are issues that need to be addressed , in much deeper level quantum discord temporarily satisfies certain relevant questions .subsequently , quantum discord has been given an operational interpretation in different contexts like quantum state merging and remote state preparation .in addition , extension of the notion of quantum discord to multi qubit cases has been proposed .many works were done in the recent past to investigate the dynamics of quantum correlation in open systems by comparing the evolution of different types of initial states in specific models .these states are typically two qubits coupled with two local baths or one common bath . in principle , there are several factors that can affect the evolution , namely , the initial state for the system and environment , the type of system - environment interaction and the structure of the reservoir. a more relevant question will be how robust are these measures when they are subjected to the noise in quantum channels .it is mainly inspired by the studies of sudden death of entanglement for two qubits , having no direct interaction .entanglement sudden death ( esd ) is said to occur when the initial entanglement falls and remains at zero after a finite period of evolution for some choices of the initial state .esd is a potential threat to quantum algorithms and quantum information protocols and thus the quantum systems should be well protected against noisy environments .another possible way to circumvent such resource vanishing is to make use of resources which do not suffer from sudden death . at this point, one can ask a similar question : _ does quantum discord present similar behavior ? 
_ in the first study addressing this question , researchers have compared the evolution of concurrence and discord for two qubits , each subject to independent markovian decoherence ( dephasing , depolarizing and amplitude damping ) .looking at initial states such as werner states and partially - entangled pure states , the authors find no sudden death of discord even when esd does occur ; quantum discord decays exponentially and vanishes asymptotically in all cases .however , not much is known about the effects on multipartite correlation with time when they are transferred through noisy quantum channels . in this work ,we study the dynamics of quantum dissension of three qubit states which happens to be a measure of multi party quantum correlation , under the effect of various quantum noisy channels .in addition , we also study the dynamics of monogamy score of these three qubit states in presence of channel noise . in section 2 ,we provide a detail descriptions of quantum dissension and monogamy score of quantum correlation . in section 3, we study the effect of various noisy channels on quantum correlation and monogamy score when all the qubits are transferred through them .finally , we conclude in section 4 by discussing future directions of explorations .in classical information theory , the total correlation between two random variables is defined by their mutual information .if x and y are two random variables , the mutual information is obtained by subtracting the joint entropy of the system from the sum of the individual entropies .mathematically , this can be stated as : where defines shannon entropy function . another equivalent way of expressing mutual information is by taking into account the reduction in uncertainty associated with one random variable due to the introduction of another random variable .stated formally as , or where defines conditional entropy of given that has already occurred and vice versa .all these above expressions are equivalent in classical information theory .when we try to quantify correlation in quantum systems from an information theoretic perspective , natural extension of these quantities will be obtained by replacing random variables with density matrices , shannon entropy with von neumann entropy and apposite definition of the conditional entropies . stated mathematically , the quantum mutual information is given by , where is the composite density matrix , and are the local density matrices and defines von neumann entropy function .similarly , by applying the argument of reduction of uncertainty associated with one quantum system with introduction of another quantum system , one can have the alternative definition of mutual information as , and here is the average of conditional entropy and is obtained after carrying out a projective measurement on subsystem and vice versa .the projective measurement is done in the general basis = cos + sin , = sin - cos , where and have the range [ 0,2 .hence , the quantum conditional entropy can be expressed as , = where ] .it is important to note over here that is different from what will be the straightforward extension of classical conditional entropy . in quantum information ,the meaning of conditional entropy of the qubit given that has occurred is the amount of uncertainty in the qubit given that a measurement is carried out on the qubit .consequently , the expressions , and are not equivalent in the quantum domain .the differences between and are captured by quantum discord i.e. 
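A minimal numerical sketch of the two-qubit discord just described: the quantum mutual information minus the measurement-optimized classical correlation, with the single-qubit measurement basis parametrized as in the text (the relative phase, lost in this extraction, is reinstated here). The Werner-state input, the choice of optimizer, and its starting points are illustrative assumptions of ours.

```python
import numpy as np
from scipy.optimize import minimize

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, dims=(2, 2)):
    """Reduced state of subsystem `keep` (0 or 1) of a two-qubit state."""
    rho4 = rho.reshape(dims + dims)
    if keep == 0:
        return np.trace(rho4, axis1=1, axis2=3)
    return np.trace(rho4, axis1=0, axis2=2)

def conditional_entropy_after_measurement(rho, theta, phi):
    """Average entropy of qubit B after measuring qubit A in the basis
    {cos(t)|0> + e^{ip} sin(t)|1>,  sin(t)|0> - e^{ip} cos(t)|1>}."""
    u0 = np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])
    u1 = np.array([np.sin(theta), -np.exp(1j * phi) * np.cos(theta)])
    total = 0.0
    for u in (u0, u1):
        proj = np.kron(np.outer(u, u.conj()), np.eye(2))
        p = np.real(np.trace(proj @ rho))
        if p > 1e-12:
            rho_b = partial_trace(proj @ rho @ proj / p, keep=1)
            total += p * von_neumann_entropy(rho_b)
    return total

def quantum_discord(rho):
    """D(B|A) = I(A:B) - max over measurements of [S(rho_B) - S(B|measured A)]."""
    rho_a, rho_b = partial_trace(rho, 0), partial_trace(rho, 1)
    mutual_info = (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
                   - von_neumann_entropy(rho))
    best = min(minimize(lambda x: conditional_entropy_after_measurement(rho, *x),
                        x0, method="Nelder-Mead").fun
               for x0 in [(0.3, 0.3), (1.0, 2.0), (2.0, 5.0)])
    classical_corr = von_neumann_entropy(rho_b) - best
    return mutual_info - classical_corr

# Werner state p|Phi+><Phi+| + (1 - p) I/4 (an illustrative input state).
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = 0.7
rho = p * np.outer(phi_plus, phi_plus) + (1 - p) * np.eye(4) / 4
print(quantum_discord(rho))
```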
one variant of quantum discord is the geometric quantum discord which is defined as the distance between a quantum state and the nearest classical ( or separable ) state , .quantum discord has been established as a non - negative measure of correlation for any quantum states .subsequent researches were carried out to obtain an analytical closed form of quantum discord and was found for certain class of states . an unified geometric view of quantum correlations which includes discord , entanglement along with the introduction of the concepts like quantum dissonance was given in .one of the natural extension of quantum discord from two qubit to three qubit systems is quantum dissension .introduction of three qubits naturally brings in one and two - particle projective measurement into consideration .these measurements can be performed on different subsystems leading to multiple definitions of quantum dissension . in other wordsa single quantity is not sufficient enough to capture all aspects of correlation in multiparty systems .quantum dissension in this context can be interpreted as a vector quantity with values of correlation rising because of multiple definitions as various components . however , in principle when we define correlation in multi qubit situations , measurement in one subsystem can enhance the correlation in other two subsystems and thereby making quantum dissension to assume negative values .we emphasize on all possible one - particle projective measurements and two - particle projective measurements .the mutual information of three classical random variables in terms of entropies and joint entropies , are given by +h(x , y , z).\end{gathered}\ ] ] it is also possible to obtain an expression for mutual information that involves conditional entropy with respect to one random variable : one can define another equivalent expression for classical mutual information that includes conditional entropy with respect to two random variables : \\ -[h(x , y)+h(x , z)]+h(x|y , z).\end{gathered}\ ] ] these equivalent classical information - theoretic definitions forms our basis for defining quantum dissension in the next subsections .let us consider a three - qubit state where and refer to the first , second and the third qubit under consideration .the quantum version of obtained by replacing random variables with density matrices and shannon entropy with von neumann entropy reads , +s(\rho_{xyz}).\end{gathered}\ ] ] the quantum version of , obtained by appropriately defining conditional entropies , is given by where refer to a one particle projective measurement on the subsystem performed on the basis = cos + sin , = sin - cos where and lies in the range [ 0,2 .quantum dissension function for single particle projective measurement is given by the difference of and , i.e. quantum dissension is given by the quantity = min( ) , where the minimization is taken over the entire range of basis parameters in order for to reveal maximum possible quantum correlation .the natural extension of in the quantum domain is given by , \\-[s(\rho_{xy})+s(\rho_{xz})]+s(\rho_{x|{\pi_{jyz}}}).\end{gathered}\ ] ] the two - particle projective measurement is carried out in the most general basis : = cos + sin , = sin - cos , = cos + sin , = sin - cos , where , [ 0,2 . in this case , the average quantum conditional entropy is given as with ] . to define quantum dissension for two - particle projective measurement , we once again take the difference of the equivalent expressions of mutual information , i.e. 
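the purely entropic part of these definitions is easy to evaluate numerically. the fragment below computes the quantum analogue of the three-variable mutual information, S(x)+S(y)+S(z) - [S(xy)+S(xz)+S(yz)] + S(xyz), for a mixed ghz state; the measured variants that enter the dissension are obtained by replacing some of these entropies with measured conditional entropies along the lines of the two-qubit sketch above. the state and the mixing parameter are illustrative choices, not taken from the paper.

....
# entropic three-variable mutual information for a three-qubit state (illustrative).
import numpy as np
from itertools import combinations

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def reduce_to(rho, keep, n=3):
    """Partial trace of an n-qubit density matrix onto the qubits listed in keep."""
    r = rho.reshape([2] * (2 * n))
    cur = n
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        r = np.trace(r, axis1=q, axis2=q + cur)   # contract bra and ket index of qubit q
        cur -= 1
    d = 2 ** len(keep)
    return r.reshape(d, d)

def three_party_mutual_information(rho):
    singles = sum(S(reduce_to(rho, [q])) for q in range(3))
    pairs = sum(S(reduce_to(rho, list(p))) for p in combinations(range(3), 2))
    return singles - pairs + S(rho)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
for p in (1.0, 0.5, 0.0):                         # p plays the role of the classical randomness
    rho = p * np.outer(ghz, ghz) + (1 - p) * np.eye(8) / 8
    print(p, three_party_mutual_information(rho))
....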
the discord function is also interpreted as quantum discord with a bipartite split of the system .one can minimize over all two - particle measurement projectors to obtain dissension as =min( ) .this is the most generic expression since it includes all possible two - particle projective measurements .both and together form the components of correlation vector defined in context of projective measurement done on different subsystems .monogamy of quantum correlation is an unique phenomenon which addresses distributed correlation in a multiparty setting .it states that in a multipartite situation , the total amount of individual correlations of a single party with other parties is bounded by the amount of correlation shared by the same party with the rest of the system when the rest are considered as a single entity .mathematically , given a multipartite quantum state shared between n parties , the monogamy condition for a bipartite correlation measure should satisfy where = .it had been shown that certain entanglement measures satisfy the monogamy inequality .however , there are certain measures of quantum correlation , including quantum discord , which behave differently as far as the satisfying of monogamy inequality is concerned . by the term violation of monogamy inequality for certain measure, we actually refer to a situation where we can indeed find entangled states which violates the inequality for that measure . in case of quantum discord, it had been seen that w states violates the inequality and are polygamous in nature .more specifically , researchers considered the monogamy score = +- , ( where and are the traced out density matrices from and is quantum discord ) and checked whether three - qubit states violate or satisfy the inequality 0 .in this section , we investigate the dynamics of quantum dissension when three - qubit states are transferred through noisy quantum channels .moreover , we also study the change of the monogamy score for various initial states with time and purity of the state .we consider initial states to be mixed ghz , mixed w , classical mixture of two separable states , a mixed biseparable states and the quantum channels to be amplitude damping , phase damping and depolarizing . given an initial state for three qubits , its evolution in the presence of quantum noise can be compactly written as , where are the kraus operators satisfying =i for all . for independent channels , where describes one - qubit quantum channel effects .we analytically present the dynamics of each initial state with respect to the individual channels .in other words we present the dynamics of each of , and . in each case, we apply the channel for sufficient time i.e. t=10 seconds . in this subsection, we consider the effect of generalized amplitude damping channel on various three - qubit quantum states .the amplitude damping channel describes the process of energy dissipation in quantum processes such as spontaneous emission , spin relaxation , photon scattering and attenuation etc .it is described by single - qubit kraus operators = diag(1, ) , = ( )/2 , = diag(,1 ) , = ( )/2 , where defines the final probability distribution when ( q=1 corresponds to the usual amplitude damping channel ) . here=1- , representing the decay rate . __ + we consider the three - qubit mixed ghz state ( we universally take as the classical randomness ) as the initial state . 
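as a concrete illustration of how the channel action below is applied, the following sketch evolves a three-qubit mixed ghz state under independent generalized amplitude damping on each qubit, rho -> sum (K_i x K_j x K_k) rho (K_i x K_j x K_k)^dagger. the kraus operators follow the standard textbook convention in which q is the asymptotic excited-state population and gamma = 1 - exp(-t) is the decay parameter; the paper's exact parametrization of q and of the time dependence may differ.

....
# three-qubit state under independent generalized amplitude damping (sketch).
import numpy as np
from itertools import product

def gad_kraus(gamma, q=1.0):
    """Standard GAD Kraus set; q = 1 reduces to plain amplitude damping."""
    k0 = np.sqrt(q) * np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    k1 = np.sqrt(q) * np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    k2 = np.sqrt(1.0 - q) * np.array([[np.sqrt(1.0 - gamma), 0.0], [0.0, 1.0]])
    k3 = np.sqrt(1.0 - q) * np.array([[0.0, 0.0], [np.sqrt(gamma), 0.0]])
    return [k0, k1, k2, k3]

def apply_local_channel(rho, kraus, nqubits=3):
    """Apply the same single-qubit channel independently to every qubit."""
    out = np.zeros_like(rho, dtype=complex)
    for ks in product(kraus, repeat=nqubits):
        k = ks[0]
        for m in ks[1:]:
            k = np.kron(k, m)
        out += k @ rho @ k.conj().T
    return out

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
p = 0.8                                            # classical randomness of the mixed GHZ
rho0 = p * np.outer(ghz, ghz) + (1 - p) * np.eye(8) / 8
for t in (0.0, 0.5, 2.0):
    gamma = 1.0 - np.exp(-t)                       # illustrative decay profile
    rho_t = apply_local_channel(rho0, gad_kraus(gamma, q=1.0))
    print(t, np.real(rho_t[0, 7]))                 # GHZ coherence ~ (p/2)(1-gamma)^(3/2)
....

the dissension and monogamy score of the evolved state can then be evaluated on rho_t with the entropy-based routines sketched earlier.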
the matrix elements of the density operator for a certain time , or for a certain value of parameter are given by , ,{}\nonumber\\ & & \rho_{22}=\rho_{33}=\rho_{55}=\frac{1}{8}(1-\gamma)[(1+\gamma)^2-p(1-\gamma)(3\gamma+1)],{}\nonumber\\ & & \rho_{44}=\rho_{66}=\rho_{77}=\frac{1}{8}(1-\gamma)^2[(1+\gamma)+p(3\gamma-1)],{}\nonumber\\ & & \rho_{88}=\frac{1}{8}(1 + 3p)(1-\gamma)^3,\rho_{18}=\frac{p}{2}(1-\gamma)^\frac{3}{2}.\\end{aligned}\ ] ] it is evident from fig.1 , and attains values at and and decays asymptotically till each of them approaches .the amplitude damping channel leaves the final population state at ( diag)^{\otimes 3} ] resulting in zero quantum dissension . in fig.3(a ) , we study the evolution of monogamy score with time and interestingly we find that for certain values of the parameter , the monogamy score changes from negative to positive .this is a clear indication of the fact that the states which are initially monogamous are entering into the polygamous regime with time . __ + we take classical mixture of separable states and given by the density matrix .the dynamics of this mixture under the action of amplitude damping channel in terms of the matrix elements is as follows ,{}\nonumber\\ & & \rho_{12}=\rho_{13}=\rho_{15}=\frac{1}{8}(1-p)\sqrt{1-\gamma}(1+\gamma)^2,{}\nonumber\\ & & \rho_{14}=\rho_{16}=\rho_{17}=\rho_{23}=\rho_{25}=\rho_{35}=\frac{1}{8}(1-p)(1-\gamma^2),{}\nonumber\\ & & \rho_{18}=\rho_{27}=\rho_{36}=\rho_{45}=\frac{1}{8}(1-p)(1-\gamma)^\frac{3}{2},{}\nonumber\\ & & \rho_{22}=\rho_{33}=\rho_{55}=\frac{1}{8}(1-p)(1-\gamma)(1+\gamma)^2,{}\nonumber\\ & & \rho_{24}=\rho_{26}=\rho_{34}=\rho_{37}=\rho_{56}=\rho_{57}={}\nonumber\\ & & \frac{1}{8}(1-p)\sqrt{1-\gamma}(1-\gamma^2),{}\nonumber\\ & & \rho_{44}=\rho_{66}=\rho_{77}=\frac{1}{8}(1-p)(1-\gamma)^2(1+\gamma),{}\nonumber\\ & & \rho_{28}=\rho_{38}=\rho_{46}=\rho_{47}=\rho_{58}=\rho_{67}=\frac{1}{8}(1-p)(1-\gamma)^2,{}\nonumber\\ & & \rho_{48}=\rho_{68}=\rho_{78}=\frac{1}{8}(1-p)(1-\gamma)^\frac{5}{2},{}\nonumber\\ & & \rho_{88}=\frac{1}{8}(1-p)(1-\gamma)^3.\end{aligned}\ ] ] at , the maximum values ( -1.015,0.15 ) of quantum dissensions and are obtained for [ fig.4 ] . in thisparticular dynamics , we observe an interesting phenomenon that there is no exact asymptotic decay of quantum dissension .we observe the revival of quantum correlation for a certain period of time in the initial phase of the dynamics .this is something different from the standard intuition of asymptotic decay of quantum correlation when it undergoes dissipative dynamics .this remarkable feature can be interpreted as that the dissipative dynamics is not necessarily going to decrease quantum correlation with passage of time . on the contrary , depending upon the initial state it can enhance the quantum correlation for a certain period of time .we refer to this unique feature as _ revival of quantum correlation in dissipative dynamics_. however , the other dissension follows the standard process of asymptotic decay with time . in fig.3(b ) , we also compute the monogamy score and find that states which are initially polygamous are becoming monogamous with the passage of time .this is contrary to what we observed in case of mixed w states . 
in this case , the states initially freely shareable ( polygamous ) are entering into not freely shareable ( monogamous ) regime due to channel action .this is a remarkable feature as this helps us to obtain monogamous state from polygamous state .this is indeed helpful as monogamy of quantum correlation is an useful tool for quantum security . __ + now we provide another example where action of quantum noisy channel can revive quantum dissension for a short period of time in a much smooth manner compared to our previous example . here, we consider a mixed biseparable state : =+[+ .the dynamics of this density matrix at time is given by , ,{}\nonumber\\ & & \rho_{22}=\rho_{33}=\frac{1}{8}(1-\gamma^2)[(1+\gamma)+p(1-\gamma)],{}\nonumber\\ & & \rho_{44}=\frac{1}{8}(1-\gamma)^2[(1+\gamma)+p(1-\gamma)],{}\nonumber\\ & & \rho_{55}=\frac{1}{8}(1-p)(1-\gamma)(1+\gamma)^2,{}\nonumber\\ & & \rho_{66}=\rho_{77}=\frac{1}{8}(1-p)(1-\gamma)^2(1+\gamma),{}\nonumber\\ & & \rho_{88}=\frac{1}{8}(1-p)(1-\gamma)^3,{}\nonumber\\ & & \rho_{14}=-\rho_{23}=\frac{p}{4}(1-\gamma).\end{aligned}\ ] ] at for this state , both and are having the value 0 . however , quite surprisingly , we find that in the initial phase both dissension and increase and attain maximum values ( 0.00133,0.00183 ) and in the subsequent phases the values lower down and finally reach 0 [ fig.5 ] .this reiterates the fact that for certain initial states the dissipative dynamics acts as a catalyst and helps in revival of quantum correlation .this dynamics is different from our previous dynamics in the sense that here revival of quantum correlation is much more than the quantum correlation present in the initial state .this is indeed a strong signature that in multi - qubit cases the channel dynamics can take a zero - correlated to a correlated state .though the rise of correlation is not very high , however , in nmr systems this rise is significant as one starting with a zero - correlated state can use the state for computation at subsequent phases of time instead of trashing it away .the reduced density matrices are separable states for all values of time and purity , making making their discord equal to zero .here once again we have , = - and hence channel action does not change the monogamy property of the mixed biseparable state . _ _ + the density matrix elements of mixed ghz at time for are given as , ,\rho_{18}=\frac{p}{2}(1-\gamma)^\frac{3}{2},{}\nonumber\\ & & \rho_{ii}=\frac{1}{8}[1-p(1-\gamma)^2 ] , i=2, ... ,7.\\\end{aligned}\ ] ] here and starts decaying from ( -3.00,1.00 ) at , and approaches after sufficient time [ fig.6 ] .the decay of is not exactly asymptotic in contrast to the action of gad channel with q=1 .the decay of is asymptotic as in the case of gad channel with .the initial state evolves to final population distribution ( diag)^{\otimes 3} ]is left resulting in zero quantum dissension . for purity values closer to 1 ,the initial states are polygamous and they enter into the monogamy regime due to action of gad channel [ fig.8(a ) ] .the states with purity values closer to 0 are monogamous and do not experience any such transition . hence once again we have one such example where there is a useful transition from polygamous to monogamous regime . 
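the monogamy statements above can be checked numerically along the following lines. the sketch computes delta = D(x:yz) - D(x:y) - D(x:z) with the projective measurement always taken on qubit x, which is one common convention for the discord monogamy score; the paper's exact choice of measured party and its optimization method may differ, and the grid search here is only illustrative.

....
# discord monogamy score delta = D(x:yz) - D(x:y) - D(x:z) (sketch, measurement on x).
import numpy as np

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def split(rho, d2):
    """Reduced states of a 2 x d2 bipartite density matrix (qubit x first)."""
    r = rho.reshape(2, d2, 2, d2)
    return np.trace(r, axis1=1, axis2=3), np.trace(r, axis1=0, axis2=2)

def discord_x(rho, d2, grid=40):
    """D(x:rest) with a projective measurement on qubit x."""
    rho_x, _ = split(rho, d2)
    best = np.inf
    for theta in np.linspace(0, np.pi, grid):
        for phi in np.linspace(0, 2 * np.pi, grid):
            v0 = np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])
            v1 = np.array([np.sin(theta), -np.exp(1j * phi) * np.cos(theta)])
            cond = 0.0
            for v in (v0, v1):
                proj = np.kron(np.outer(v, v.conj()), np.eye(d2))
                p = float(np.real(np.trace(proj @ rho)))
                if p > 1e-12:
                    cond += p * S(split(proj @ rho @ proj / p, d2)[1])
            best = min(best, cond)
    return S(rho_x) - S(rho) + best

def monogamy_score(rho3):
    """rho3 is an 8x8 state on qubits (x, y, z); delta >= 0 is read as monogamous."""
    r = rho3.reshape([2] * 6)
    rho_xy = np.trace(r, axis1=2, axis2=5).reshape(4, 4)   # trace out z
    rho_xz = np.trace(r, axis1=1, axis2=4).reshape(4, 4)   # trace out y
    return discord_x(rho3, 4) - discord_x(rho_xy, 2) - discord_x(rho_xz, 2)

w = np.zeros(8); w[[1, 2, 4]] = 1 / np.sqrt(3)
print(monogamy_score(np.outer(w, w)))   # should come out negative: W is polygamous
....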
__ + we consider initial density matrix whose dynamics at time is as follows , ,{}\nonumber\\ & & \rho_{22}=\rho_{33}=\rho_{55}=\frac{1}{8}[1-p(1-\gamma)(\gamma^2 - 3\gamma+1)],{}\nonumber\\ & & \rho_{44}=\rho_{66}=\rho_{77}=\frac{1}{8}[1+p(1-\gamma)(\gamma^2-\gamma-1)],{}\nonumber\\ & & \rho_{88}=\frac{1}{8}[1+p(\gamma^3 - 1)],{}\nonumber\\ & & \rho_{12}=\rho_{13}=\rho_{15}=\rho_{24}=\rho_{26}=\rho_{34}=\rho_{37}=\rho_{48}=\rho_{56}{}\nonumber\\ & & = \rho_{57}=\rho_{68}=\rho_{78}=\frac{1}{8}(1-p)\sqrt{1-\gamma},{}\nonumber\\ & & \rho_{14}=\rho_{16}=\rho_{17}=\rho_{23}=\rho_{25}=\rho_{28}=\rho_{35}=\rho_{38}=\rho_{46}{}\nonumber\\ & & = \rho_{47}=\rho_{58}=\rho_{67}=\frac{1}{8}(1-p)(1-\gamma),{}\nonumber\\ & & \rho_{18}=\rho_{27}=\rho_{36}=\rho_{45}=\frac{1}{8}(1-p)(1-\gamma)^\frac{3}{2}.\end{aligned}\ ] ] once again it is evident from fig[9 ] , and achieve maximum values ( -1.015,0.15 ) at and .however , the decay profile of is much smoother than that of . the evolution of monogamy score [ fig .8(b ) ] is quite different for than that of . here also , all the initial polygamous density matrices enter into the monogamy regime irrespective of the values of parameter p. _ _ + we also studied the dynamics of the mixed biseparable state in presence of gad channel for and we found that both dissensions remain at zero starting from the initial state . in this subsection, we consider the dephasing channel and its action on various three - qubit states .a dephasing channel causes loss of coherence without any energy exchange .the one - qubit kraus operators for such process are given by =diag ( 1, ) and =diag(0, ) .+ _ _ + we once again consider the mixed ghz state subjected to dephasing noise .the density matrix elements of the mixed ghz at a time are given by , here we observe that the diagonal elements are left intact whereas the off - diagonal elements undergo change as a consequence of dephasing noise .interestingly , we find that is not at all influenced by dephasing channel whereas follows a regular asymptotic path [ fig.10 ] .the degradation observed in is due to progressively lower purity levels and is unaffected by dephasing noise .the reduced density matrices do not contribute towards monogamy score , thus making the dynamics of monogamy score just negative of .+ _ _ + the dynamics of mixed w state subjected to dephasing noise is as follows : we noticed that for , has a slower decay rate compared with other purity values and hence a finite amount of is present for all 10 at [ fig.11 ] .the decay of is asymptotic . for certain values of purity ,the initial mixed w state is monogamous .however , they enter into the polygamous regime as a consequence of phase damping noise [ fig.12(a ) ] .after sufficient time , decays down to zero for all purity values . __ + the dynamics of under the influence of phase damping channel is given by : here , exhibits a strong revival all throughout the channel . however , the decay profile of is perfectly asymptotic [ fig.13 ] .prior to channel action , i.e. at , all density matrices are polygamous . with the action of the dephasing channel , density matrices with mixed ness closer to 1 enter into the monogamous regime [ fig.12(b ) ] .+ _ _ + for the initial state , ,the dynamics is given as : both and are zero throughout the channel operation time and do not show any revival .in the final subsection of this section , we consider the effect of the depolarizing channel on three - qubit states . 
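before turning to the depolarizing channel, the pure-dephasing action used in this subsection can be made explicit with a few lines of code: the kraus pair diag(1, sqrt(1-gamma)) and diag(0, sqrt(gamma)) leaves every population untouched and multiplies each single-qubit coherence by sqrt(1-gamma). the three-qubit example and the choice gamma = 1 - exp(-t) are illustrative assumptions, not the paper's exact parametrization.

....
# pure dephasing applied independently to each qubit of a three-qubit state (sketch).
import numpy as np
from itertools import product

def dephasing_kraus(gamma):
    return [np.diag([1.0, np.sqrt(1.0 - gamma)]),
            np.diag([0.0, np.sqrt(gamma)])]

def apply_local(rho, kraus, nqubits=3):
    out = np.zeros_like(rho, dtype=complex)
    for ks in product(kraus, repeat=nqubits):
        k = ks[0]
        for m in ks[1:]:
            k = np.kron(k, m)
        out += k @ rho @ k.conj().T
    return out

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho0 = 0.8 * np.outer(ghz, ghz) + 0.2 * np.eye(8) / 8     # mixed GHZ, p = 0.8
for t in (0.0, 1.0, 5.0):
    rho_t = apply_local(rho0, dephasing_kraus(1.0 - np.exp(-t)))
    # populations are unchanged, the GHZ coherence decays as (1-gamma)^(3/2)
    print(t, np.real(rho_t[0, 0]), np.real(rho_t[0, 7]))
....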
under the action of a depolarizing channel , the initial single qubit density matrix dynamically evolves into a completely mixed state /2 .the kraus operators representing depolarizing channel action are = , = , = , = .( where are pauli matrices ) + _ _ + the dynamics of a mixed ghz state when subjected to depolarizing channel is spelled out as , ,\rho_{18}=\frac{p}{2}(1-\gamma)^3,{}\nonumber\\ & & \rho_{ii}=\frac{1}{8}[1-p(1-\gamma)^2 ] i = 2, ... ,7.\end{aligned}\ ] ] both and start decaying from the initial values of ( -3.00,1.00 ) [ fig.14 ] . quite interestingly , exhibits smooth asymptotic decay in contrary to the anomalies observed in case of gad channel and dephasing channel .this instance underlines the fact that a certain noisy environment can largely influence the dynamics of multipartite quantum correlation .the depolarizing channel transfers the initial mixed ghz state into /8 which contains zero quantum dissension . herethe monogamy score of mixed ghz state is just the negative of . __ + the dynamics of the mixed w state under the action of depolarizing channel is given by , ,{}\nonumber\\ & & \rho_{22}=\rho_{33}=\rho_{55}=\frac{1}{24}[3+p(1-\gamma)(3\gamma^2 - 7\gamma+5)],{}\nonumber\\ & & \rho_{23}=\rho_{25}=\rho_{35}=\frac{p}{6}(2-\gamma)(1-\gamma)^2,{}\nonumber\\ & & \rho_{44}=\rho_{66}=\rho_{77}=\frac{1}{24}[3-p(1-\gamma)(3\gamma^2 - 5\gamma+3)],{}\nonumber\\ & & \rho_{46}=\rho_{47}=\rho_{67}=\frac{p}{6}\gamma(1-\gamma)^2,{}\nonumber\\ & & \rho_{88}=\frac{1}{8}[1+p(1-\gamma)(\gamma^2-\gamma-1)].\end{aligned}\ ] ] here , and attain maximum values of ( -1.75,0.92 ) at and [ fig.15 ] .the initial mixed w state evolves to /8 in the limit of resulting in zero quantum dissension . follows a perfect asymptotic path in contrast to the dynamics observed in case of gad channel and dephasing channel .the monogamy score evolves as shown in fig.16 . for high purity values closer to 1 ,the initially polygamous states enter into monogamous regime owing to depolarizing channel action . on the other hand ,states with low purity values which are initially monogamous do not experience any such transition .in this work , we have extensively studied the dynamics of quantum correlation ( quantum dissension ) of various three qubit states like , mixed ghz , mixed w , mixture of separable states and a mixed biseparable state when these states are transferred through quantum noisy channels such as amplitude damping , dephasing and depolarizing . in most cases ,we find that there is an asymptotic decay of quantum dissension with time . however , in certain cases , we have observed the revival of quantum correlation depending upon the nature of initial state as well as channel .this is quite interesting as we can explicitly see enhancement of multiqubit correlation in presence of local noise ; similar in the line of quantum discord .+ in addition , we have studied dynamics of monogamy score of three qubit states under different quantum noisy channels .remarkably , we have seen that there are certain states which on undergoing effects of quantum channels change itself from monogamous to polygamous states .it is believed that monogamy property of the state is a strong signature of quantumness of the state and can be more useful security purpose compared to polygamous state .this study is useful from a futuristic perspective where we are required to create monogamous state from polygamous state for various cryptographic protocols .+ _ acknowledgment _ authors acknowledge prof a. k. 
pati of harish chandra research institute for his invaluable suggestions in the improvement of the work .this work was initiated when the first author was at indian institute of science .a. einstein , b. podolsky and n. rosen , phys ., * 47 * 777 ( 1935 ) ; j.s .bell physics * 1 * 3 , 195 , 1964 ; a.aspectet.al .lett . , * 47 * , 460 , ( 1993 ) ; c.monroe .lett . , * 75 * , 4714 , 1995 ; r. laflamme.et.al.arxiv:quant-ph/9709025v1 .l.dicarlo . et .al.nature * 467 * , 574 , 2010 ; p. neumann .science * 320 * , 1326 , 2008 ; w.b.gao.et.al .nature phys * 6 * , 331 , 2010 .n.gisin .et .phys . , 74 , 145 , 2002.a .ekert , phys .* 67 * , 661 ( 1991 ) ; c. h. bennett and g. brassard , proceedings of ieee international conference on computers , system and signal processing , bangalore , india , pp.175 - 179 ( 1984 ) ; p. w. shor and j. preskill , phys . rev . lett . * 85 * , 441 ( 2000 ) .m. hillery , v. buzek and a. berthiaume , phys .a * 59 * , 1829 ( 1999 ) ; s. k. sazim , i. chakrabarty , c. vanarasa and k. srinathan , arxiv:1311.5378 ; s. adhikari , i. chakrabarty and p. agrawal , quant .inf . comp .* 12 * , 0253 ( 2012 ) ; m.ray , s. chatterjee and i. chakrabarty , arxiv:1402.2383 .s. k. sazim and i. chakrabarty , eur .j. d * 67 * , 174 ( 2013 ) ; d. deutsch , a. ekert , r. jozsa , c. macchiavello , s. popescu and a. sanpera , phys .77 * , 2818 ( 1996 ) ; l. goldenberg and l.vaidman , phys .lett . * 75 * , 1239 ( 1995 ) ; a. cabello , phys . rev .a * 61 * , 052312 ( 2000 ) ; c. li , h - s song and l. zhou , journal of optics b : quantum semiclass . opt . * 5 * , 155 ( 2003 ) ; a.k.pati phys .a , 63 , 014320 , 2001 ; s. adhikari and b. s. choudhury , phys .a * 74 * , 032323 ( 2006 ) ; i. chakrabarty and b. s. choudhary arxiv:0804.2568 ; i. chakrabarty , int . j. quant . inf .* 7 * , 559 ( 2009 ) ; s. adhikari , a.s .majumdar and n. nayak , phys .a * 77 * , 042301 ( 2008 ) ; i. ghiu , phys . rev . a * 67 * , 012323 ( 2003 ) ; s. adhikari , i. chakrabarty and b. s. choudhary , j. phys .a * 39 * , 8439 ( 2006 ) ; v. buzek , v. vedral , m. b. plenio , p. l. knight and m. hillery , phys .a * 55 * , 3327 ( 1997 ) ; a. orieux , g. ferranti , a. darrigo , r. lo franco , g. benenti , e. paladino , g. falci , f. sciarrino , and p. mataloni , arxiv:1410.3678 ; a. darrigo , r. lo franco , g. benenti , e. paladino , and g. falci , ann . phys .* 350 * , 211224 ( 2014 ) .e. knill and r. laflamme , phys .* 81 * , 5672 ( 1998 ) ; a. datta et al .lett . * 100 * , 050502 ( 2008 ) ; s.l .braunstein et al . , phys .lett . * 83 * , 1054 ( 1999 ) ; d.a .meyer , ibid .* 85 * , 2014 ( 2000 ) ; s.l .braunstein and a.k .pati , quant .inf . comp .* 2 * , 399 ( 2002 ) ; a. datta et al . , phys .a * 72 * , 042316 ( 2005 ) ; a. datta and g. vidal , ibid .* 75 * , 042310 ( 2007 ) ; b.p .lanyon et al . , phys .* 101 * , 200501 ( 2008 ) .h. ollivier , w. h. zurek , phys .lett . * 88 * , 017901 ( 2002 ) ; l. henderson , v. vedral , j. phys .a * 34 * , 6899 ( 2001 ) ; s. luo , phys . rev .a * 77 * , 042303 ( 2008 ) ; a.r .usha devi , a. k. rajagopal , phys .lett . * 100 * , 140502 ( 2008 ) ; a. datta , a. shaji , c. m. caves , phys . rev. lett . * 100 * , 050502 ( 2008 ) ; k. modi _ et .lett . * 104 * , 080501 ( 2010 ) ; m. horodecki , p. horodecki , r. horodecki , j. oppenheim , a. sen(de ) , u. sen , b. synak - radtke , phys . rev . a * 71 * , 062307 ( 2005 ) .i. chakrabarty , p. agrawal , a. k. pati , eur .j. d * 57 * , 265 ( 2010 ) ; k. modi et al .104 , 080501 ( 2010);m .okrasa and z. walczak , arxiv:1101.6057 ; c. c. 
rulli . et .al . phys .rev . a , * 84 * , 042109 , 2011 . ;ma and z .- h .chen , arxiv:1108.4323 .al.phys . rev .a , * 80 * , 024103 , 2009 ; i. chakrabarty , s. banerjee and n. siddharth , quant .11 * , 0541 ( 2011 ) ; j .- s .xu , k. sun , c .- f .li , x .- y .xu , g .- c .guo , e. andersson , r. lo franco and g. compagno , nature commun . * 4 * , 2851 ( 2013 ) ; b. bellomo , g. compagno , r. lo franco , a. ridolfo , s. savasta , int . j. quant . inf .* 9 * , 1665 ( 2011 ) ; r. lo franco , b. bellomo , s. maniscalco , and g. compagno , int .b * 27 * , 1345053 ( 2013 ) ; r. lo franco , b. bellomo , e. andersson , and g. compagno , phys .a * 85 * , 032318 ( 2012 ) .t.m.cover and j. a. thomas john wiley & sons ., 1991 .lett . , * 105 * , 190502 , 2010 ; k. modi , a. brodutch , h. cable , t. paterek and v. vedral , rev .phys . * 84 * , 1655 ( 2012 ) ; j - s .zhang and a - x .chen , quant .. lett . * 1 * , 69 ( 2012 ) ; d. girolami and g. adesso , phys .a * 83 * , 052108 ( 2011 ) ; v. vedral , m. plenio , m. rippin , and p. l. knight , phys .lett . * 78 * , 2275 ( 1997 ) ; t. r. bromley , m. cianciaruso , r. lo franco , and g. adesso , j. phys . a : math .47 * , 405302 ( 2014 ) ; b. aaronson , r. lo franco , g. compagno , and g. adesso , new j.phys .* 15 * , 093022 ( 2013 ) ; b. aaronson , r. lo franco , and g. adesso , phys . rev .a * 88 * , 012120 ( 2013 ) ; m. cianciaruso , t. r. bromley , w. roga , r. lo franco , and g. adesso , arxiv:1411.2978 ( 2014 ) . | we study the dynamics of quantum dissension for three qubit states in various dissipative channels such as amplitude damping , dephasing and depolarizing . our study is solely based on markovian environments where quantum channels are without memory and each qubit is coupled to its own environment . we start with mixed ghz , mixed w , mixture of separable states , a mixed biseparable state , as the initial states and mostly observe that the decay of quantum dissension is asymptotic in contrast to sudden death of quantum entanglement in similar environments . this is a clear indication of the fact that quantum correlation in general is more robust against the effect of noise . however , for a given class of initial mixed states we find a temporary leap in quantum dissension for a certain interval of time . more precisely , we observe the revival of quantum correlation to happen for certain time period . this signifies that the measure of quantum correlation such as quantum discord , quantum dissension , defined from the information theoretic perspective is different from the correlation defined from the entanglement - separability paradigm and can increase under the effect of the local noise . we also study the effects of these channels on the monogamy score of each of these initial states . interestingly , we find that for certain class of states and channels , there is change from negative values to positive values of the monogamy score with classical randomness as well as with time . this gives us an important insight in obtaining states which are freely sharable ( polygamous state ) from the states which are not freely sharable ( monogamous ) . this is indeed a remarkable feature , as we can create monogamous states from polygamous states monogamous states are considered to have more signatures of quantum ness and can be used for security purpose . |
pair interactions , simd , gpu , molecular dynamics , verlet listin most particle simulations , more than half of the computational time is spent in calculating pair interactions with limited spatial range .when long - range interactions are present , such as electrostatics , the long - range part is usually calculated on a mesh .certain types of analysis , such as determining particle pair correlation functions , also involve evaluating pair interactions with limited range .many codes that compute these kind of interactions employ cpu algorithms consisting of a simple double loop to iterate through a list of particle pairs .this nave approach has a quadratic computational complexity which makes it prohibitively expensive already for moderate numbers of particles .however , by exploiting the limited interaction range imposed by the typically spherical cut - off , the computational cost can be reduced to linear .this is achieved by reducing the number of neighboring particles that need to be considered .to do so the verlet list and the linked cell algorithms as well as the combination of the two are widely used . in particular , in molecular dynamics ( md ) simulation codes these algorithms are most commonly employed .although these algorithms suffer from limitations on modern simd architectures , there have been only a few attempts to overcome them , most of them specific to gpus without achieving generality . before the advent of cpu simd units , the performance of the simple double loop over the neighbor list was quite good as the compiler can usually unroll the inner loop . because the speed of the main memory has not kept up with the processor speed ,caching became more important . in calculating pair interactionsthis means that the location of particles in memory should correlate with their spatial location to increase cache hits .several publications have dealt with this issue . however , as the width of the simd units increases , reordering or shuffling the input and output data for convenient access in the simd units becomes a severe bottleneck . when calculating pair interactions between all particle pairs in the system , a perfectly linear memory access pattern can be used that avoids shuffling . however ,when a cut - off is used , a significant part of the particle neighbor list will not be ordered sequentially .the relative cost of shuffling depends on the cost of calculating a single pair interaction and on the simd width . in molecular dynamics simulations particlesusually interact via a lennard - jones ( lj ) and a coulomb potential .when the popular particle - mesh ewald ( pme ) electrostatics method is used , a complementary error function must be calculated .pennycook et al . provide a detailed analysis of the shuffling ( also called gather - scatter ) and their impact on performance with only lj interactions considered . in their work , with 8-way simd reordering instructions represent a third of the total , with 16-way simd the ratio is more than a half . in practice , the performance is affected even more . since shuffling introduces more data dependencies between instructions , reducing the instructions available for scheduling will result in low instructions per cycle ( ipc ) .we will show that even when calculating lj and pme interactions , the shuffling ends up taking more than half of the time with 4-way simd .on gpus , shuffling data is typically not required as the execution model allows hardware threads to access data from different memory locations . 
however , loading particle data requires scattered memory access which will waste gpu memory bandwidth as well as cycles ( due to instruction replay ) and will render a standard implementation memory bound . moreover, the throughput - oriented gpu architecture requires high level of parallelism and is sensitive to memory access patterns . in order to target gpus, some codes combine the traditional algorithms with data regularization techniques , but such approaches can still lead to inefficient execution . recasting the algorithms to a more regular data access has been shown to result in higher ipc on gpus , but not without additional trade - offs .although on cpus the relative memory bandwidth is higher , the data dependencies can still cause bottlenecks in simd - optimized algorithms .the main issues faced when considering data parallelization in traditional particle - pair based neighbor - lists schemes are the irregular sizes and non - contiguous nature of the neighbor lists of each particle .we propose to address both of these issues by considering pair - interactions between clusters of particles of fixed size , similar to the work of friedrichs et al .however , important distinguishing features of our algorithm are high parallel work - efficiency and the inherent flexibility which enables tuning for the simd width and other specifics of the hardware . by changing the size of the clusters, our algorithm can be adapted to simd units of different widths .adjusting the cluster size also allows tuning the number of operations `` in flight '' as well as the ratio of arithmetic to memory operations .this flexibility , together with the high ratio of arithmetic to load / store operations , ensures that the algorithm can reach high performance on current , as well as future cpu and gpu hardware .it is also well suited to more exotic hardware such as fpgas , but as the implementation is still ongoing , result will be reported in the future . in case of cpus , the additional major advantage is that , by matching the cluster size to the simd width , no shuffle operations are required at all .this not only improves performance by at least a factor of 2 , but also makes the code much easier to write and read .there is a price to pay for the improvements as the cluster pairs will contain particle pairs in addition to the ones in the original interaction sphere .this results in extra interactions calculated between particles otherwise not within range , which we know will evaluate to zero . as we will show later ,although this does lead to reduction in algorithmic work - efficiency , the performance gain still outweighs the extra cost .we would like to note that the algorithm operates on the lowest level of the interaction calculation and any optimization available in the literature can be applied . 
for md ,we use it together with a verlet buffer .furthermore , all parallelization strategies developed for traditional algorithms can be used with little or no modification .we have designed and implemented non - bonded pair interaction kernels for x86 sse2 , sse4.1 , avx and avx+fma ( amd bulldozer ) simd architectures , as well as nvidia gpus .the kernels utilize lj interactions and monopole - monopole electrostatic interactions of general form .we implemented analytical electrostatics kernels for reaction - field ( rf ) and pme , as well as tabulated electrostatic potentials .we plan to support sphero - symmetric potential of arbitrary shape through tabulated interactions .while the required additional table lookups per pair will lower the efficiency of the kernels on current cpus , on gpus and with avx2 ( which will support table lookups ) performance should be good .the algorithms described here have been implemented in the gromacs molecular simulation package and are available in the official version 4.6 release , combined with hybrid mpi+openmp parallelization .the source code can be obtained under the lgplv2 license from http://www.gromacs.org .note that the cpu kernels in gromacs 4.6 have an additional optimization , not discussed in this paper , for systems where less than half of the particles have lj interactions . for water thisimproves kernel performance by up to 10% .we are looking for an algorithm that can execute single instructions on multiple data ( simd ) , while not being limited by loading and storing data from and to ( cache-)memory .the standard implementation of the verlet - list algorithm loads a particle and calculates pair interactions by looping over its neighbors . thus a single pair interaction is calculated for each particle load and store .the relatively cheap interactions in md simulations render this algorithm effectively memory bound .to remedy this , our algorithm loads a cluster of particles and calculate interactions for each neighbor loaded .this increases the data reuse by a factor of .the loop over neighboring particles is replaced by a loop over clusters consisting of particles .the values of and will be tuned for the simd hardware .the standard implementation of the verlet - list algorithm can be seen as a special case of this cluster algorithm where =1 and =1 . in general ,the easiest way to achieve simd parallelization is to let the compiler vectorize loops , possibly with the help of the programmer aided by feedback from the compiler . at a first glancethis might seem to be a good strategy since a particle usually has hundreds of neighbors which leads to long vectorizable loops .for efficient loading , the order of particles in memory needs to be strongly correlated with spatial ordering to increase cache hits .ideally , sequential particles would be loaded in groups of size equal to the simd width , but this not compatible with a spherical interaction volume . even when particles can be loaded in groups , vectorizing the inner - loopwill only give a small speed - up on wider simd units , as memory operations and data shuffling can take more time than the actual calculation .for lj only with fixed parameters on avx 8-way simd , memory and shuffling operations account for 32 of the 70 operations ; with parameter loading , the ratio increases beyond 50% . when calculating all interactions of neighbors with one particle , we need to load 3 coordinate components , 3 parameters , as well as load and store 3 force components for each neighbor . 
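a rough back-of-the-envelope estimate makes the data-reuse argument concrete. using the per-particle counts quoted above (3 coordinates and 3 parameters loaded, 3 force components loaded and stored per neighbor), the memory traffic per computed pair interaction drops roughly as 1/M once M i-particles are kept in registers; the assumed number of j-clusters per i-cluster below is only an order-of-magnitude guess.

....
# floats moved per computed pair interaction, 1x1 versus MxN cluster setups (estimate).
def floats_per_pair(M, N, j_clusters_per_i=200):
    per_particle = 3 + 3 + 2 * 3                       # xyz + params + force load/store
    j_traffic = per_particle / M                       # each loaded j-particle serves M pairs
    i_traffic = per_particle / (N * j_clusters_per_i)  # i data reused over the whole j loop
    return j_traffic + i_traffic

for M, N in ((1, 1), (2, 2), (4, 4), (8, 4)):
    print(M, N, round(floats_per_pair(M, N), 2))
....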
in theory , on current cpus this should not lead to a memory - bound algorithm , but in practice performance will be far from peak due to limitations on the instruction scheduling .the coordinates are loaded per particle as triplets of , , requiring data - shuffling .the wider the cpu simd unit is , the more data shuffling is required and the longer the dependency chain gets between loading data , computation and storing forces .hence , for efficient simd calculations it is very advantageous to use packed sequences of coordinates , e.g , and with 4-way simd .on gpus , such packing is not needed as vector types are supported , but a much higher arithmetic to memory operation ratio is required to achieve peak performance . constructing the neighbor list , also called pair list , is a similar operation , but with less arithmetic , which makes it even more memory intensive .although the pair list is usually not reconstructed every step , it involves looping over more pairs than the non - bonded kernel processes , so this can become a limiting factor . the only way to hide the latency of memory operations is to perform more calculation per load / store operation . at first sightthis might seem impossible , but this can actually be achieved with a simple scheme .the basic idea behind our work is to spatially cluster particles in groups of fixed size and use such a cluster as the computational unit of our algorithm .these groups can then be mapped directly to the simd hardware units , which have a fixed width .given a 4-way simd unit , we can spatially cluster particles in groups of 4 .we can load a cluster of 4 , so called , -particles in simd registers and then loop over the neighboring clusters of 4 , so called , -particles ( see fig .[ simd ] ) . with this = 4 setup ,we compute 16 pair interactions while only performing memory load and store operations for 4 -particles . after having looped over all neighboring -clusters of an -cluster , usually a few hundred , we also have to do memory operations for the -particles , but the cost of this is negligible . in this examplethe memory bandwidth is reduced by a factor of 4 , but more importantly , as we always access particles in cluster of size 4 , we can organize all data packed in groups of 4 . this eliminates the need for data shuffling which is the main performance bottleneck of the standard way of calculating non - bonded interactions on simd units .this is the simplest version of the algorithm .the same 4 clusters can also be processed on 8-way simd hardware .then two i - clusters are loaded in one simd register and each j - cluster is duplicated in one simd register .this setup halves the number of arithmetic operations and adds a few shuffle operations . in cuda ,memory access is more flexible .hardware threads on nvidia gpus are organized in `` warps '' . on current gpus ,each warp consists of 32 threads which execute the same instruction every cycle .this results in a simd - like execution model called single instruction multiple threads ( simt ) . 
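the inner loop of the 4 x 4 setup described above can be written down compactly. the numpy fragment below is a scalar reference of the cluster-pair kernel, not the simd-intrinsics code: the i-cluster data stays in local arrays (standing in for registers), every j-cluster load is amortized over 4x4 = 16 pair interactions, and pairs beyond the cut-off are multiplied by zero rather than branched over. fixed lennard-jones parameters c6 and c12 are an illustrative simplification, and periodic images and exclusions are omitted.

....
# scalar/NumPy reference of the 4x4 cluster-pair inner loop (illustrative).
import numpy as np

def cluster_pair_forces(xi, xj, c6, c12, rcut):
    """xi, xj: (4, 3) coordinates of an i- and a j-cluster.
    Returns the (4, 3) forces on the i-particles and on the j-particles."""
    d = xi[:, None, :] - xj[None, :, :]            # (4, 4, 3) pair vectors
    r2 = (d * d).sum(axis=2)                       # (4, 4) squared distances
    in_range = (r2 < rcut * rcut) & (r2 > 0.0)     # cut-off mask (also skips self pairs)
    inv_r2 = np.where(in_range, 1.0 / np.maximum(r2, 1e-12), 0.0)
    inv_r6 = inv_r2 ** 3
    f_over_r = (12.0 * c12 * inv_r6 - 6.0 * c6) * inv_r6 * inv_r2   # LJ force magnitude / r
    fij = f_over_r[:, :, None] * d                 # pair forces acting on the i-particles
    return fij.sum(axis=1), -fij.sum(axis=0)       # Newton's third law for the j-forces

rng = np.random.default_rng(1)
xi = rng.random((4, 3))
xj = rng.random((4, 3)) + 0.3
fi, fj = cluster_pair_forces(xi, xj, c6=1e-3, c12=1e-6, rcut=0.9)
print(fi.sum(axis=0) + fj.sum(axis=0))             # total force is zero by construction
....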
unlike the simd model which requires explicit programming for the simd width, the simt architecture allows thread - level parallel programming , and the warp - based lockstep execution model needs to be considered only for performance .this enables more flexible memory operations ( different addresses in different threads ) and divergence among threads in a warp .the simt model allows spreading out all particle pairs in an 8 cluster pair over the 32 threads of a warp , thus processing one particle pair on each thread .we illustrate this in fig .[ simd ] , for the sake of example using 16-way simt . 1 and the 4 setups with 4-way simd and 16-way simt .all numbers are particle indices , each black dot represents an interaction calculation and the arrows indicate the computational flow .the simd registers for - and -particles are shown in green and blue , respectively .the 4 setup calculates 4 times as many interactions per particle load / store and requires fewer memory operations ( shown in red ) . unlike the 1 setup , the 4 setup does not require data shuffling in registers .[ simd],width=302 ] we will now describe in detail how the algorithm works , starting with building the cluster pair list . the algorithmic unit of particle data representation is a cluster rather than a single particle . beside thisminor , but important difference , the overall algorithm closely follows the standard verlet or neighbor list setup .hence , in the following , unless explicitly stated , pair list will refer to a list of cluster pairs .note that the cluster pair list this work uses as data representation does not define a strict particle - particle in - range relationship because , as we will show later , the list by design includes particles not in - range .moreover , the presented algorithms use newton s third law to calculate pair interactions , hence the pair list contains each pair only once , not twice .since for each particle there is no explicit list of all particles in its neighborhood , we prefer the term `` pair list '' to the term `` neighbor list '' .we construct a pair list using a verlet buffer ( also called `` skin '' ) which is essentially an extension of the cut - off distance to account for particle movement allowing the list to be retained for a number of steps .the exact number depends on the relative cost of the list construction and the dependence of the buffer size on the lifetime of the list .pair interactions are then determined for the fixed list of particle pairs defined by pairs of clusters . in the most general case ,we need to generate a pair list of clusters of size particles versus clusters of size . in the simplest setup, the simd width will be equal to , but a width of , where is divisible by , will also work . on a gputhe best performance will be achieved when matching to the width of the simt execution model , i.e. 32 for cuda .first we need to group the particles into clusters of fixed size . to minimize the number of additional particle pairs in the pair list , the clusters need to be as compact as possible .a simple and efficient way of generating compact , fixed - size clusters is spatial gridding in two dimensions and spatial binning in the third dimension , see fig .[ clustering ] .first we construct a rectangular grid along x and y with a grid spacing of , where is the particle density .then we sort the particles into columns of this grid . 
for each columnwe sort the particles on z - coordinate and as a result we get the spatial clusters as consecutive groups of or particles . because the number of particles in a column is typically not a multiple of , we add dummy particles to the last cluster when needed .the fraction of dummy particles is ; with 10000 particles and clusters of size 8 this gives 4% dummy particles in the cpu algorithm .for the gpu we use a hierarchical cluster setup . as we can store 8 i - clusters in shared memory , we group 2=8 clusters of size 8 together .this reduces the number of dummy particles to 1% with 10000 particles .all these operations can be done efficiently in linear time .the next step is calculating bounding boxes for each cluster , this can be done using simd instructions , as the number of particles in a cluster is constant . in the case of ,adjacent pairs of bounding boxes are combined to generate clusters of double the number of particles .a pair list can then be constructed by checking distances between the bounding boxes .this is very efficient , as it requires one bounding box - pair distance check for particle pairs .however , this results in more cluster or particle pairs than strictly necessary , as bounding boxes might be within range while none of the particle pairs falls within range . to avoid this overhead we prune pairs of clusters at distances close to the cut - off using a particle - pair distance criterion . for the gpu implementation, the pair list construction is performed on the cpu , but the pruning is done on the gpu where this can be done more efficiently .periodic boundary conditions can be implemented in a simple and efficient fashion by moving the -clusters by the required periodic image shifts and storing these shifts in the cluster pair list for use during the pair interaction calculation .-cluster list in green for the red -cluster .[ clustering],width=226 ] .... / * bb = cluster bounding box * / for each i - cluster ci determine grid range in x within rlist of bb[ci ] for each grid cell gx in range determine grid range in y within rlist of bb[ci ] for each grid cell gy in range determine j - clusters at gx , gy within rlist of bb[ci ] for each j - cluster cj in range if ( bbdistance(ci , cj ) < rlist ) if ( bbdistance(ci , cj ) < rbb or atomdistance(ci , cj ) < rlist ) put cj in cjlist[ci ] set forcefield exclusion masks in cjlist[ci ] .... 
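the construction just described can be prototyped in a few dozen lines. the sketch below bins particles on an x,y grid, sorts each column on z, cuts it into clusters of at most M consecutive particles, builds per-cluster bounding boxes and accepts cluster pairs by bounding-box distance. it is only a reference implementation under simplifying assumptions: the grid spacing (M/rho)^(1/3), chosen so that clusters come out roughly cubic, is our own choice, the last cluster of a column is left short instead of padded with dummy particles, periodic images are ignored, and the O(n_clusters^2) double loop stands in for the grid-range search of the pseudocode above.

....
# reference sketch of cluster construction and bounding-box pair search (illustrative).
import numpy as np

def build_clusters(x, box, M, ncell):
    """Return a list of (particle_indices, bb_lower, bb_upper) clusters of <= M particles."""
    cell = np.floor(x[:, :2] / box[:2] * ncell).astype(int) % ncell
    col = cell[:, 0] * ncell + cell[:, 1]            # x,y grid column of each particle
    clusters = []
    for c in np.unique(col):
        idx = np.where(col == c)[0]
        idx = idx[np.argsort(x[idx, 2])]             # sort the column on z
        for s in range(0, len(idx), M):
            members = idx[s:s + M]
            clusters.append((members, x[members].min(axis=0), x[members].max(axis=0)))
    return clusters

def bb_distance2(lo1, hi1, lo2, hi2):
    d = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))
    return float((d * d).sum())

def build_pair_list(clusters, r_list):
    pairs = []                                       # each cluster pair is stored only once
    for ci in range(len(clusters)):
        for cj in range(ci, len(clusters)):
            if bb_distance2(*clusters[ci][1:], *clusters[cj][1:]) < r_list * r_list:
                pairs.append((ci, cj))
    return pairs

rng = np.random.default_rng(0)
box = np.array([3.0, 3.0, 3.0]); rho = 100.0; M = 4
x = rng.random((int(rho * box.prod()), 3)) * box
ncell = max(1, int(round(box[0] / (M / rho) ** (1.0 / 3.0))))   # ~cubic clusters (assumed)
clusters = build_clusters(x, box, M, ncell)
print(len(clusters), len(build_pair_list(clusters, r_list=1.0)))
....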
at this point we would like to note that the cluster pair list , being simply a verlet list of particle clusters , can be seen as a generalized version of the classical neighbor list .consequently , the neighbor list corresponds to the special case of cluster size and in the following we will refer to it as the 1 scheme .the cluster - based pair list contains inherently more particle pairs than the ones within the cut - off radius , the number of pairs for different is shown on fig .[ zeros ] .the fraction of extra interactions increases rapidly with the cluster size and decreases rapidly with the cut - off radius .however , the increase in the efficiency of the presented algorithm should outweigh the cost of calculating these extra interactions .as can be seen in fig .[ zeros ] , when it comes to the number of extra pairs , lists with are less favorable than which results in clusters with close to cubic shape .this shape minimizes the number intersecting cut - off spheres , resulting in a more compact list .pair lists normalized by the average number of pairs in a sphere of radius , as a function of the pair list radius , for a 3-site water model , number density =100 nm .[ zeros],width=283 ] having constructed the pair list , for each cluster we now have the lists of all clusters in range .we still need to take care of particle pairs that need to be excluded .there are three types of exclusions .two of those occur within cluster self - pairs . hereparticle pairs occur twice , whereas we should only calculate them once and there are self interactions .we want to calculate each pair interaction only once and skip the self interactions .these two types of exclusions are handled in the pair interaction kernel .additionally , there can be exclusions defined by the force field .normal lj and electrostatic interactions should not be calculated for such excluded pairs of particles , whereas the rf or pme correction should still be applied . to treat these exclusions in the non - bonded kernels , we encode them in a bitmask stored per cluster pair in the pair list .this compact representation saves memory and also allows for easy and fast decoding of exclusions using bitwise operations .on cpus we sort the pair list according to the presence of exclusions so we only need to mask exclusions when really needed .this improves performance by 15% .the total computational cost of the pair list construction is proportional to the number of particles . to understand how the total cost scales with different parameters , it is worth looking into the details of the different tasks involved .computational cost in terms of cycles as a function of pairs per particle is shown in fig .[ cycles_npair ] . 
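to make the bitmask encoding mentioned above concrete: for a 4x4 cluster pair the 16 i-j interaction flags fit into a single integer, and the kernel can switch a pair off with one bitwise test instead of consulting an exclusion list. the bit ordering used below (bit j*4 + i) is an illustrative choice, not necessarily the layout used in the actual kernels.

....
# per-cluster-pair exclusion bitmask for a 4x4 setup (illustrative bit layout).
M = N = 4

def encode_exclusions(excluded_pairs):
    """excluded_pairs: iterable of (i, j) with 0 <= i < M and 0 <= j < N."""
    mask = (1 << (M * N)) - 1           # by default every pair interacts
    for i, j in excluded_pairs:
        mask &= ~(1 << (j * M + i))     # clear the bit -> interaction masked out
    return mask

def interacts(mask, i, j):
    return (mask >> (j * M + i)) & 1

mask = encode_exclusions([(0, 0), (1, 0), (2, 3)])   # e.g. self or force-field exclusions
print(bin(mask), interacts(mask, 0, 0), interacts(mask, 3, 3))
....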
for the implementation of the sorting ,after the gridding , we assume that the particle distribution is homogeneous on longer length scales , but other suitable sorting techniques can be applied in the inhomogeneous case .then a simple pigeonhole sorting is used , which scales linearly and provides good performance .the cost of the pair search is not proportional to the number of pairs , as only the boundary of the interaction sphere needs to be determined .this cost is high when the number of pairs is small compared to and and it is proportional to the radius squared for large radius .when the radius is large , the cost of search decreases proportionally with .this makes the search far more efficient than a particle - pair based search .another implication of the much lower number of ( now cluster- ) pairs , is that advanced search algorithms , such as interaction sorting , will not help and a simple search algorithm performs well . finally , the cost of interaction calculation is proportional to the total number of pair interactions calculated .but the overhead of pairs beyond the cut - off distance decreases with increasing number of pairs .the cost of the search and lj+pme force calculation on cpus are similar , as can been seen in fig .[ cycles_npair ] ; rf kernels are about twice as fast , which makes the search relatively more expensive .the optimal balance between search cost and extra cost due to the verlet buffer is usually achieved with a pair list update interval of 10 and 20 when only using a cpu . on the gpu ,the interaction throughput is much higher which makes the cpu search relatively more expensive . as mentioned , before , we do most of the cluster - pair pruning on the gpu , which reduces the cpu search cost significantly , as can been seen in fig .[ cycles_npair ] . depending on the speed of the cpu versus the gpu , and especially the number of cores in each ,the optimal pair list update interval is between 10 and 50 .furthermore , as the search algorithm maps well to gpus , we plan to port it in the near future ., for 8 the search cost is also shown for checking bounding box distances only .the pair count is within the spherical volume , not including the extra pairs due to the irregular cluster - pair volume .note that the wiggles on the curves for searching are caused by jumps in the number of grid cells fitting in a cut - off sphere .all timings were done on an intel sandy bridge cpu with single precision 256-bit avx kernels using a single thread .[ cycles_npair],width=283 ] when the pair list needs to be updated for every interaction calculation , the particle - pair distance based pruning should be skipped and replaced by a conditional in the interaction kernels . with very cheap interactions , such as for a pair correlation function calculation ,no conditional should be used at all .as in molecular simulations usually more than half of the computational time is spent in the calculating non - bonded pair interactions , it is well worth carefully optimizing these kernels .we now have a list of cluster pairs of versus particles . 
as can be seen in fig .[ code_kernel ] , writing a simd kernel for this setup is rather straightforward .however , achieving optimal performance is not trivial .it is often very hard to judge how close the kernel performance is to the maximum achievable performance , as it depends both on hardware characteristics , mainly the type of simd unit and the performance of the cache system and load / store units , as well as on software characteristics , mainly the compiler(s ) used ..... for each ci cluster load m coords+params for ci for each cj cluster load n coords+params for cj / * these loops are unrolled using simd * / for j=0 to m for i=0 to n calculate interaction ci*m+i with cj*n+j store n cj - forces store m ci - forces .... for cpus we chose to write the kernels in c with extensive use of sse and avx simd - intrinsics as the current gnu and intel compilers do a good job at optimizing such code and typically achieve better performance across multiple architectures than equivalent hand - written assembly .for gpus we chose to concentrate on the nvidia cuda programming model as the available development tools are more mature and provide higher performance than that of the alternatives .there are two main factors that affect kernel performance and require special attention .one is the choice of and , the other is the treatment of the exclusion and cut - off checks . for the latter the options are using conditionals , which should be avoided on cpus , or masking interactions using bitmasks .masking is usually more efficient than conditionals .on cpus simd bitwise and operations are used for masking , whereas on gpus we simply multiply by 0 or 1 and use a conditional for the cut - off check . using a conditional can reduce the number of instructions issued when all pairs stored in a simd register are beyond the cut - off distance . on the cpu this should only be used when all pairs are beyond the cut - off , as otherwise the force reduction cost increases .this only improves the performance when an overly long pair list buffer is used , so we only use a conditional for the 1 kernels where it helps in most cases . with the latest cpu compilers ,not much code optimization is required , as long as the fastest possible intrinsic is used for the respective instruction set , e.g. sse2 , sse4.1 or avx . in cuda optimization is less straightforward ; as the architecture is changing rapidly , compilers and drivers are less mature .additionally , gpus are massively parallel processors with more simple cores than cpus , which puts more burden on the programmer and compiler to pick the right optimization which might not even carry across hardware generations .the main goal is to keep the computational units as busy as possible by avoiding stalls due to dependencies on memory operations or instruction latencies .the ratio of compute to memory operations scales with , as for each loaded -particle , interactions with -particles are calculated in a single inner - loop iteration . on the cpu it turns out that the best performance is achieved for =4 . while using 2 -particlesis also possible , with 4 there seem to be enough arithmetic instructions fed to the scheduler to hide most memory operations .using will lead to a marginally higher ipc and flop rate , at the expense of calculating many more zeros .the choice of depends on the the simd width . here , for cpus, we only consider value of equal to the full or half simd width . 
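the two cut-off treatments discussed above can be contrasted with a small scalar model, using an array as a stand-in for one simd register of squared pair distances: branch-free masking computes every pair and multiplies out-of-range contributions by zero, while the early-skip variant only pays off when all pairs in the register are beyond the cut-off. the lennard-jones-only force and the fixed parameters are illustrative simplifications.

....
# cut-off by masking versus skipping a whole "register" of pairs (illustrative).
import numpy as np

def masked_force(r2, c6, c12, rcut2):
    in_range = r2 < rcut2                               # the cut-off "bitmask"
    inv_r2 = np.where(in_range, 1.0 / np.maximum(r2, 1e-12), 0.0)
    inv_r6 = inv_r2 ** 3
    return (12.0 * c12 * inv_r6 - 6.0 * c6) * inv_r6 * inv_r2   # zero where masked

def force_with_skip(r2, c6, c12, rcut2):
    if np.all(r2 >= rcut2):                             # cheap test, whole register skipped
        return np.zeros_like(r2)
    return masked_force(r2, c6, c12, rcut2)             # otherwise identical work is done

r2 = np.array([0.36, 0.81, 1.44, 2.25])                 # one 4-wide register of distances^2
print(masked_force(r2, 1e-3, 1e-6, rcut2=1.0))
print(force_with_skip(r2, 1e-3, 1e-6, rcut2=1.0))
....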
fitting two -clusters in a simd - widthwill simply halve the number of arithmetic operations and add a few shuffle operations .the instruction count is largely independent of .the only exception is the table lookup for tabulated electrostatics , the number of which scales with both and .the precision , either single or double , also does nt affect the instruction count , except for the inverse square root operation , which needs an extra newton - raphson iteration in double precision .an issue specific to cpu simd kernels is that lj pair parameter look - up is costly , as one simd load operation is required for each particle pair . with geometric or lorentz - berthelot combination rules only two loads are required per cluster pair , latency of which can be hidden with computation .the cuda kernels turn out to be instruction latency limited , not memory limited , although this requires some tricks and a tight packing of the pair list in memory .the pseudocode of the kernel is shown in fig .[ code_kernel_gpu ] .the inner loop calculates interactions between clusters of =8 and =4 , this way 32 threads of a warp calculate a pair interaction of an entire cluster pair simultaneously .we chose smaller than such that we have more computation per memory operation .we group cluster pairs two by two in the pair list , hence the pair search can be done on clusters of 8 particles and the computation on two 8 clusters independently on two warps . additionally , we store clusters in range for 8 -clusters in a single pair list with this further improving the data reuse .a -cluster interacts with half of these 8 -clusters on average , which additionally reduces the memory pressure by a factor of 4 . as the pseudocode on fig .[ code_kernel_gpu ] shows , non - interacting -clusters are skipped based on an bitmask - encoded cluster interaction mask , which only causes a minor overhead .this hierarchical grouping requires minor modifications in the pair - search code , only storing the packed exclusions masks becomes more complex . with this setup we can load the coordinates , atom types , and charges for 64 -particles in registers on the gpu and thereby maximize the number of calculations per -particle load .two warps in a cuda thread block operate on a group of 8 -clusters and their -cluster neighbors . as the two warps by definition access different -particles, they can run independently and no synchronization is required during computing . on the fermi architecture partial forces are accumulated in registers and reduced in shared memory . in contrast , the kepler architecture provides a special `` warp - shuffle '' operation which can be used for efficient synchronization - free warp - level reduction .after a lot of testing and optimization , the cuda kernels turned out to be compact and more readable than the cpu simd kernels .more code is required for managing gpu device initialization , kernel launches and transfers between cpu and gpu . ..../ * each of the mxn i - j pairs is assigned to a thread .the sci i - supercluster consists of 8 ci clusters . * / sci = thread block index for each ci in sci load i - atom data into shared mem . 
/ * loop over all cj in range of any ci in sci * / for each cj cluster load j - i cluster interaction and exclusion mask / * per warp * / if cj not masked / * non - interacting cj - sci * / load j - atom data / * loop over the 8 i - clusters * / for each ci cluster in sci if cj not masked / * non - interacting cj - ci * / load i atom data from shared mem .r2 = sqrt(|xj - xi| ) extract excl_bit exclusion / interaction bit for j - i pair if ( ( r2 < rc_squared ) * excl_bit ) calculate i - j coulomb and lj forces accumulate i- and j - forces in registers reduce j - forces reduce i - forces .... calculating energies is only required infrequently in molecular dynamics , therefore we will concentrate on force - only kernels .the lennard - jones force is : where is 0 for excluded particle pairs and 1 otherwise .the lj coefficients and can be different for each atom pair - . in practicethere is a limited number of atom types and often combination rules are used to obtain the parameters between two atom types . withx86 simd instructions loading arbitrary pair parameters can be costly due to the many load and shuffle operations required .using combination rules , either geometric or lorentz - berthelot , is more efficient .the electrostatic interaction form we consider is : where is the long - range correction force . for reaction - fieldelectrostatic we have with a constant .this can be evaluated efficiently analytically .for pme we have , with a constant .evaluation of the pme correction force is more costly .but as it is bounded and very smooth , linear table interpolation can reach full single precision with a limited table size . as a second option we consider an analytical approximation using a quotient of two polynomials .this requires 24 multiplications and additions and one division to reach full single precision .fused multiply - add ( fma ) instructions , currently available on gpus and the amd bulldozer microarchitecture , can speed up this polynomial evaluation significantly .achieving good performance of load and store intensive kernels requires detailed understanding of many low - level software optimization aspects : simd instruction set , throughput and latency of instructions on different processor microarchitectures , cache behavior , as well as experience with compiler - related performance issues .unfortunately , it takes a lot of time and effort to reach optimal performance .fortunately , this effort is required infrequently and our results can be used by anyone , as our compute - kernels are released as part of an open source project , freely available for anyone to use . to compare the different variants of the algorithm, we focus on the intel sandy bridge cpu architecture .the reason for this is that at the time of writing this architecture supports avx , the newest and widest simd instruction set on the x86 platform , and it also provides 256-bit operations .this allows a direct comparison between the 4 and 4 setup , as well as between 128- and 256-bit avx . for comparison with other architectures, we also show results on the amd bulldozer architecture using 128-bit avx and fma instructions as well as nvidia fermi ( gf100 ) and kepler2 ( gk110 ) gpus . 
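for reference, the commonly used forms behind the lennard-jones, reaction-field and ewald interactions described earlier in this section can be written as follows. this is a sketch in the standard c6/c12 convention rather than the article's own displayed expressions, and the symbols s_ij (exclusion factor), k_rf (reaction-field constant) and \beta (ewald splitting parameter) are introduced here only for illustration:

\[
F_{\mathrm{LJ}}(r_{ij}) = s_{ij}\left(\frac{12\,C^{(12)}_{ij}}{r_{ij}^{13}} - \frac{6\,C^{(6)}_{ij}}{r_{ij}^{7}}\right),
\qquad
F_{\mathrm{elec}}(r_{ij}) = \frac{q_i q_j}{4\pi\varepsilon_0}\left(\frac{1}{r_{ij}^{2}} + F_{\mathrm{corr}}(r_{ij})\right),
\]
\[
F_{\mathrm{corr}}^{\mathrm{RF}}(r) = -2\,k_{\mathrm{rf}}\,r ,
\qquad
F_{\mathrm{corr}}^{\mathrm{PME}}(r) = -\frac{\operatorname{erf}(\beta r)}{r^{2}} + \frac{2\beta}{\sqrt{\pi}}\,\frac{e^{-\beta^{2} r^{2}}}{r},
\]

where s_ij is 0 for excluded pairs and 1 otherwise. written this way, the reaction-field correction is a trivial linear term, while the pme correction is the bounded, smooth function that is either tabulated or approximated by the polynomial quotient mentioned above.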
as all cpu architectures in focus support the avx instruction set, from here on 128- or 256-bit simd will refer to avx instructions of the respective type. we report all performance data in cycles, a measure which depends only on the microarchitecture, not on the exact cpu or gpu model. all cpu kernels were compiled with the gnu c compiler version 4.7.1 with ` -o3 ` as the only optimization flag. other optimization-related compiler options did not improve the non-bonded kernel performance. both intel c compiler versions 12.1.3 and 13.1.1 produced slightly slower code, even with cpu architecture-specific optimizations enabled. the gnu c compiler has greatly improved with recent versions: the difference between version 4.5.3 and 4.7.1 on sandy bridge with analytical ewald kernels is 22%, while with recent intel compilers even slight regressions have been observed. the cuda gpu kernels were compiled with the cuda compiler version 5.0.7 with ` -use_fast_math ` as well as the architecture-specific optimization options ` -arch = sm_20 ` and ` -arch = sm_30 ` for fermi and kepler2 gpus, respectively. on the cpu we store the properties of the i-particles in simd registers and loop over the list of clusters of j-particles. the pair interactions for the different i-particles are not interdependent, except that we want to load and store the j-particle properties only once. there are several choices to be made when transforming this algorithm into actual code. for instruction (re-)scheduling it is advantageous to write out the operations for the i-particles, so it is clear to the compiler that it can reorder them. for most kernels matches the simd width, but for the 256-bit flavor we also consider =4, which is half the simd width. on new architectures with wider simd units, such as intel mic with 16-way simd in single precision, having smaller than the simd width is even more important. performance of the most important flavors of the fully optimized kernel versions is reported in table [ kernperf_rf ] and table [ kernperf_ewald ] for rf and ewald, respectively. the metrics shown in table [ kernperf_rf ] and table [ kernperf_ewald ] represent the peak performance of the respective kernels. the performance of cpu kernels is constant in the regime of 100 - 100000 particles. in contrast, gpus are massively parallel multi-processors which require a high level of data-parallelism and hence many particle pairs to reach peak performance. the cuda kernels are within 5% of the peak performance from around 20000 particles; the scaling depends both on the generation of the architecture and on the number of multiprocessors. we present four different performance metrics. the number of pairs calculated per 1000 compute cycles (pairs/kcycle) is the only relevant measure for the raw performance of the algorithm. the instructions per cycle (ipc) provides an estimate of the hardware utilization. the last two are the number of floating point operations per pair (flops/pair) and per cycle (flops/cycle), where we try to minimize the former and maximize the latter.
as a reference we show performance for 1 kernels which fill the simd unit by unrolling the inner loop over .these kernels do not use lj combination rules , as parameters need to be looked up either way , which saves two floating point operations per pair .this standard way of employing simd results in low performance and low flop rates ( the theoretical peak rate for intel sandy bridge is 8 for 128-bit and 16 for 256-bit instructions , respectively ) .the high measured ipc indicates that the instructions are scheduled very efficiently .however , a large part of the instructions load , store and shuffle data , rather than doing computation .the 256-bit rf kernel is only 13% faster than the 128-bit variant while it has similar ipc .as both kernels execute the same arithmetic instructions , the observed rather small performance increase is explained by the overhead of shuffle and data load operations .we aim to address these bottlenecks with the proposed algorithms by reducing the need for shuffles and loads . in comparison to the work of pennycook , here ,the effect is much more pronounced as we need to load two lj parameters and a charge per particle , while they only implement an lj potential with fixed particle type . the large drop in performancewhen using a single thread shows that the 1 kernels are mainly limited by instruction scheduling and hyperthreading ( ht ) improves performance by offering the possibility of scheduling instructions from both threads running on the same physical core .in double precision with 128-bit avx we can use 2-way simd and we can compare the performance for small and .the 4 rf kernel is 26% faster than the 2 kernel , which outweighs the negative impact of zero interactions in most cases .additionally , the pair search for 2 takes significant time .this shows that is not a viable option and we therefore only consider or larger . in the 256-bit kernels we can use the 4 scheme which gives 50% higher performance than 4 and even more on a single thread .we continue with the single precision kernels for different functional forms . with 256-bit avx the rf kernels have a 3.3 times higher pair rate than 1 , for ewald this factor is 2.2 .this shows that our approach works .256-bit is 25% to 65% faster than 128-bit depending on the interaction type and the of use ht .the performance of the analytical ewald kernels is similar to that of the tabulated version with ht , even though the flop rate is very different . without htthe tabulated kernels get significantly slower because of the latencies involved in reading table entries .the amd bulldozer , in contrast with the simultaneous multi - threading intel ht implements , uses a cluster multi - threading architecture with much of the functional units , including simd units , shared between a pair of cores organized in a so called module .therefore , we compare performance of a hyperthreaded core on intel with a module on amd , both of which support two threads . 
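these ratios can be checked against the raw pair rates collected in the tables that follow, assuming the rows are matched as indicated: 76/67 ≈ 1.13 for the 13% gain of 256-bit over 128-bit in the reference kernels, 66/52 ≈ 1.27 and 98/66 ≈ 1.5 for the double-precision comparisons, and 248/76 ≈ 3.3 (rf) and 139/63 ≈ 2.2 (analytical ewald) for the speed-up of the cluster kernels over the reference kernels.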
even though bulldozer has double the theoretical throughput of 4-way simd instructions and fma gives another doubling of the theoretical flop rate, the performance is only marginally higher than 128-bit simd on sandy bridge. moreover, sandy bridge using 256-bit simd provides a 20% higher pair rate. the cuda gpu kernels provide significantly higher performance when comparing one streaming multiprocessor with one cpu core. the analytical and tabulated ewald kernels have similar performance, the former being slightly faster on the kepler architecture even though this kernel executes about 10% more instructions. this is explained by the fact that the additional instructions are mainly fmas and intrinsics which allow higher instruction-level parallelism, higher ipc and better absolute performance than the texture-based table loads. the analytical pme kernels achieve about half of the real-world peak flop rate, which is mainly because they don't contain enough fma instructions. also, the presence of conditionals for checking the interaction of each of the 8 i-clusters in a super-cluster deteriorates the performance by 15%.

table [ kernperf_rf ] (rf kernels):

precision | simd width |   | pairs/kcycle | 1 thread | ipc  | flops/pair | flops/cycle
single    | 4          | 1 | 67           | %        | 2.32 | 38         | 2.6
single    | 8          | 1 | 76           | %        | 2.16 | 38         | 2.9

single    | 4          | 4 | 175          | %        | 2.36 | 40         | 7.0
single    | 8          | 4 | 223          | %        | 1.96 | 40         | 8.9
single    | 8          | 4 | 248          | %        | 1.68 | 40         | 9.9

double    | 4          | 2 | 52           | %        | 1.74 | 45         | 2.3
double    | 4          | 4 | 66           | %        | 2.16 | 45         | 3.0
double    | 8          | 4 | 98           | %        | 1.58 | 45         | 4.4

table [ kernperf_ewald ] (ewald kernels):

pu      | simd width | ewald |   | pairs/kcycle | 1 thread | ipc  | flops/pair | flops/cycle
sb      | 4          | ana.  | 1 | 51           | %        | 2.20 | 66         | 3.4
sb      | 8          | ana.  | 1 | 63           | %        | 1.98 | 66         | 4.2

sb      | 4          | tab.  | 4 | 111          | %        | 2.42 | 43         | 4.8
sb      | 8          | tab.  | 4 | 147          | %        | 2.26 | 43         | 6.3
sb      | 8          | tab.  | 4 | 134          | %        | 1.88 | 43         | 5.8
sb      | 4          | ana.  | 4 | 110          | %        | 2.40 | 68         | 7.5
sb      | 8          | ana.  | 4 | 139          | %        | 1.76 | 68         | 9.5
sb      | 8          | ana.  | 4 | 137          | +1%      | 1.52 | 68         | 9.3

bd      | 4          | ana.  | 4 | 114          | %        | 2.16 | 68         | 7.8

fermi   | 32         | tab.  | 8 | 549          |          | 1.66 | 41         | 24
kepler2 | 32         | tab.  | 8 | 1130         |          | 3.2  | 41         | 46
kepler2 | 32         | ana.  | 8 | 1151         |          | 3.7  | 69         | 85
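as a quick consistency check of the cpu entries, the last column is simply the product of the pair rate and the flop count per pair: for example 223 pairs/kcycle × 40 flops/pair ≈ 8.9 flops/cycle and 98 pairs/kcycle × 45 flops/pair ≈ 4.4 flops/cycle, in agreement with the values listed; for the gpu rows the product and the listed flop rate differ slightly.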
at any step ,if any of the particles has moved by more than half the buffer length , the pair list is regenerated .this condition is sufficient , but not necessary .the pair list needs to be invalidated only when the distance between a pair of particles decreases by the buffer length , which will happen far less frequently .additionally , this commonly used setup is inconvenient for parallel simulations . in practice, we can often tolerate small imperfections in the pair list . in a constant temperature ensembleperfect energy conservation is not a requirement , as a thermostat will remove excess heat .moreover , the amount of energy drift that can be tolerated is very problem dependent .as there are multiple factors affecting the energy conservation in a simulation , we can allow the non - bonded interactions to cause a drift of similar magnitude like all other factors . if the buffer is too small , some particle pairs which are not in the pair list can move within the cut - off .we can determine an upper bound to the drift caused by such events in a constant temperature ensemble , this is derived in the appendix .the upper bound can be used to set the buffer size for simulations . with pme ,the pair potential at the cut - off is very small , hence the effect of missing pairs will also be very small . to quantify this effect ,we show the drift as a function of the verlet buffer size for spc / e water with a pair list lifetime of 18 fs , see fig .[ drift ] .this is a representative system as hydrogens in water are the fastest moving particles in nearly all atomistic simulations . with single precision floating pointcoordinates , the settle and shake constraint algorithms cause an energy drift of -0.01 and 0.1 /ns per atom , respectively .the 4 setup shows a drift of similar magnitude even without any additional buffering .thus , in practice , no explicit buffer is required in single precision .one thing to note is that at longer buffer length only repulsive hydrogen pairs contribute to the drift . at zero length ,attractive oxygen - hydrogen pairs also contribute which leads to a cancellation of errors . at a cut - off distance of 0.9 nm .the settle algorithm causes negative drift in single precision for large buffers .[ drift],width=283 ] the effective performance is given by the number of interactions within the cut - off radius that can be calculated per cycle . to compare the traditional and cluster schemes we show the performance of 1 , and 4 256-bit avx kernels , as well as 8 cuda gpu kernels with both rf and ewald electrostatics in table [ effperf ] .there is one factor that complicates the comparison .the ratio of the cost of the search and the force calculation affects the optimal list update frequency , which in turn affects the required buffer size . in our implementation ,the pair list construction for the 1 setup takes four times longer than calculating the interactions once , where for a 4 setup both take about equal time .we think there is some room for speed - up in our 1 search implementation , which has not been fully optimized .if we assume we can get it twice as fast , the optimal list update frequency is somewhere between 10 and 15 steps .the optimal update frequency for 4 and 8 is around 10 steps . 
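to put the buffer sizes in perspective, a rough free-flight estimate (an assumption of this illustration, not a value taken from the article) gives the scale of the displacements involved: with a one-dimensional thermal displacement \(\sigma = t\sqrt{k_B T/m}\), a hydrogen atom (m ≈ 1 u) at 300 k moves about \(\sigma \approx 0.03\) nm over an 18 fs pair-list lifetime, i.e. a few hundredths of a nanometer. this sets the scale against which the verlet buffer sizes in fig. [ drift ] should be read.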
for the following comparison we will use the same update frequency of 10 steps for all setups to simplify the comparison .the effective speed - up of the force calculation of the 4 over the 1 scheme on cpus is a factor of 1.8 and 1.4 for rf and ewald electrostatics , respectively .this speedup is mainly due to higher achieved pair rate , but the smaller buffer also contributes . assuming the 1 search cost can be brought down to twice the force calculation cost , the total performance improvement including the search cost is a factor of 2.0 and 1.5 for rf and ewald electrostatics , respectively .the 8 scheme results in a lower algorithmic work - efficiency due to the increase in the ratio of zero interactions calculated .note that these results are for a cut - off of 1 nm or 210 non - zero pairs per particle . with increasing cut - off radius , the efficiency increases and the performance improvement approaches a factor of 3 .as we run the pair search on the , slower , cpu , a longer list update interval often provides better total performance .the gpu kernels use a conditional for skipping pairs beyond the cut - off , unlike the cpu kernels , which use masking .therefore the pair - rate increases with buffer size . but calculating more pair distances always decreases the effective performance .still , the cluster algorithm demonstrates the potential of streaming architectures with an effective performance of a factor of 5 and 7 higher than the 4 cpu rf and ewald kernels , respectively . [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^ " , ]for calculating non - bonded interactions in molecular simulations , the standard particle - based pair interaction algorithms commonly simd - parallelized by loop unrolling have reached their limits .kernels based on these approaches are often limited by the high memory to arithmetic operation ratio , the number of data shuffle operations required , and restrictions in instruction scheduling which reduces the potential for memory latency hiding .the pair list construction is affected by the same issues .we have presented a simple and flexible approach to overcome these problems .a scheme using cluster pairs of versus atoms leads to kernels that efficiently utilize current cpu and gpu simd units .the memory pressure is reduced by a factor on cpus .we found that =4 usually provides the best performance . on gpuswe use =8 and the memory pressure is reduced by another factor of 4 by loading and operating on up to 8 -clusters at once .the algorithm reorganizes the data representation at the lowest , particle level .therefore , any method in the literature that applies to particles , can be applied to the clusters in our method .an example used here is the verlet buffer .while the widely used linked cell list for reducing the search space can be applied , this does not offer any advantage as the locality of the clusters is already available through the grid used to generate the clusters .the performance advantage of our method over traditional algorithms depends on the computational cost of the interactions , the number of particle pairs within the cut - off and the simd width . while in many cases our cluster - based algorithm significantly outperforms the particle - based algorithms , in some cases it can be less advantageous . for cheap interactions the reduction of shuffling and memory operations will favor the cluster setup , whereas for expensive interactions the extra zero interactions can outweigh the gains . 
for typical atomistic molecular simulations our method performs very well and is a factor 1.5 to 3 faster on 8-wide simd than traditional methods. on intel sandy bridge cpus as well as cuda gpus the flop rate is above 60% of the peak. most importantly, our scheme inherently maps well to future cpu and gpu architectures as well as existing ones not discussed here. as the number of floating point operations per load/store operation can be tuned, a reduction of the arithmetic cycles per kernel, e.g. by introduction of fma instructions, will result in higher performance. additionally, wider simd units, for example 16-way simd in intel xeon phi, can be used efficiently with a limited amount of effort. this work was supported by the european research council (grants nr. 258980 and nr. 209825), the swedish e-science research center and the scalalife eu fp7 project. the authors thank erik lindahl for providing the analytical approximation of the ewald correction force and for his advice on x86 simd optimization, nvidia for advice on cuda optimization and mark abraham for thoroughly reviewing the code and this manuscript. for a canonical ensemble, an upper bound on the average energy drift due to the finite verlet buffer size can be derived. this depends on the atomic displacements and the shape of the potential at the cut-off. the displacement distribution along one dimension for a freely moving particle with mass over time at temperature is gaussian with zero mean and variance. the variance of the distance between two non-interacting particles is. in practice, particles interact with each other over time. these interactions make the displacement distribution narrower, since any interaction will hinder free motion of particles. ignoring the effect of interactions on the displacements thus provides an upper bound. we calculate interactions with a non-bonded interaction cut-off distance of and a pair list cut-off of, where is the verlet buffer size. we can then write the average energy drift over time for pair interactions between a particle of type 1 surrounded by particles of type 2 with number density, when the inter-particle distance changes from to, as:

\[\begin{aligned}
\langle \Delta V \rangle &= 4 \pi ( r_\ell+\sigma)^2 \rho_2 \int_{-\infty}^{r_c} \int_{r_\ell}^\infty V(r_t)\, G\!\left(\frac{r_t - r_0}{\sigma}\right) d r_0 \, d r_t\\
&\approx 4 \pi ( r_\ell+\sigma)^2 \rho_2 \int_{-\infty}^{r_c} \int_{r_\ell}^\infty \left[ V'(r_c) ( r_t - r_c ) + \tfrac{1}{2} V''(r_c) (r_t - r_c)^2 \right] G\!\left(\frac{r_t - r_0}{\sigma}\right) d r_0 \, d r_t\\
&= 4 \pi ( r_\ell+\sigma)^2 \rho_2 \bigg\{ \frac{1}{2}V'(r_c)\left[r_b\, \sigma\, G\!\left(\frac{r_b}{\sigma}\right) - ( r_b^2+\sigma^2)\,E\!\left(\frac{r_b}{\sigma}\right) \right] \\
&\qquad\quad + \frac{1}{6}V''(r_c)\left[ \sigma(r_b^2+\sigma^2)\,G\!\left(\frac{r_b}{\sigma}\right) - r_b(r_b^2 + 3\sigma^2)\, E\!\left(\frac{r_b}{\sigma}\right) \right] \bigg\}.
\end{aligned}\]

here, is a gaussian distribution with zero mean, unit variance, and. we always want to achieve small energy drift, so will be small compared to both and. thus, the approximations in the above equations are good since the gaussian distribution decays rapidly. to calculate the total energy drift, the drift needs to be averaged over all particle pairs and weighted with the particle count.
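the bound above can be evaluated directly; the following small c function is a sketch of such an evaluation under two assumptions made explicit here: that e denotes the upper tail integral of the unit gaussian, e(x) = erfc(x/\sqrt{2})/2, and that r_b stands for the buffer, i.e. the difference between the pair-list and interaction cut-offs. all parameter names are illustrative:

```c
#include <math.h>

static const double PI = 3.14159265358979323846;

/* unit Gaussian density and its upper tail integral (assumed form of E) */
static double gauss_pdf(double x)  { return exp(-0.5 * x * x) / sqrt(2.0 * PI); }
static double gauss_tail(double x) { return 0.5 * erfc(x / sqrt(2.0)); }

/* Upper bound on the energy drift per type-1 particle caused by missed
 * type-2 neighbours, following the expression derived above.
 *   v1, v2 : first and second derivative of the pair potential at r_c
 *   sigma  : RMS relative displacement over the pair-list lifetime
 *   r_b    : Verlet buffer, r_list : pair-list cut-off (r_list = r_c + r_b)
 *   rho2   : number density of type-2 particles                          */
double drift_bound(double v1, double v2, double sigma,
                   double r_b, double r_list, double rho2)
{
    double pre = 4.0 * PI * (r_list + sigma) * (r_list + sigma) * rho2;
    double x   = r_b / sigma;
    double g   = gauss_pdf(x), e = gauss_tail(x);
    double t1  = 0.5 * v1 * (r_b * sigma * g
                             - (r_b * r_b + sigma * sigma) * e);
    double t2  = (v2 / 6.0) * (sigma * (r_b * r_b + sigma * sigma) * g
                               - r_b * (r_b * r_b + 3.0 * sigma * sigma) * e);
    return pre * (t1 + t2);
}
```

summing such bounds over all particle-type pairs, weighted by the particle counts, gives the total drift estimate used to choose the buffer.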
| calculating interactions or correlations between pairs of particles is typically the most time - consuming task in particle simulation or correlation analysis . straightforward implementations using a double loop over particle pairs have traditionally worked well , especially since compilers usually do a good job of unrolling the inner loop . in order to reach high performance on modern cpu and accelerator architectures , single - instruction multiple - data ( simd ) parallelization has become essential . avoiding memory bottlenecks is also increasingly important and requires reducing the ratio of memory to arithmetic operations . moreover , when pairs only interact within a certain cut - off distance , good simd utilization can only be achieved by reordering input and output data , which quickly becomes a limiting factor . here we present an algorithm for simd parallelization based on grouping a fixed number of particles , e.g. 2 , 4 , or 8 , into spatial clusters . calculating all interactions between particles in a pair of such clusters improves data reuse compared to the traditional scheme and results in a more efficient simd parallelization . adjusting the cluster size allows the algorithm to map to simd units of various widths . this flexibility not only enables fast and efficient implementation on current cpus and accelerator architectures like gpus or intel mic , but it also makes the algorithm future - proof . we present the algorithm with an application to molecular dynamics simulations , where we can also make use of the effective buffering the method introduces . * notice : * this is the author s version of a work that was accepted for publication in computer physics communications . changes resulting from the publishing process , such as peer review , editing , corrections , structural formatting , and other quality control mechanisms may not be reflected in this document . changes may have been made to this work since it was submitted for publication . |
the discrimination between a finite number of quantum states , prepared with given prior probabilities , is essential for many tasks in quantum information and quantum cryptography .since nonorthogonal states can not be distinguished perfectly , discrimination strategies have been developed which are optimal with respect to various figures of merit . based on the outcome of a measurement , in these strategiesa guess is made about the actual state of the quantum system .several strategies admit a certain probability , or rate , , of inconclusive results , where the measurement outcomes do not allow to infer the state . in the strategy of minimum - error discrimination inconclusive resultsare not permitted , and the overall probability of making a wrong guess is minimized with , which corresponds to maximizing the overall probability of getting a correct result .apart from studying some general features of the optimal measurement , in the beginning mainly the minimum - error discrimination of states obeying certain symmetry properties or of two mixed states was investigated , see e. g. .the minimum - error discrimination of more than two states that are arbitrary has gained renewed interest only recently .generalizing the concept of minimum - error discrimination , an optimal discrimination strategy has been studied which maximizes the total rate of correct results , , with a fixed rate of inconclusive results .this strategy also maximizes the relative rate for the fixed value , that is it yields the maximum achievable fraction of correct results referred to all conclusive results , which in most cases is larger for than for necessary and sufficient operator conditions for the optimal measurement have been derived , and it was shown that under certain conditions the strategy for maximizing with a fixed rate contains the optimal strategies for unambiguous discrimination or for discrimination with maximum confidence as limiting cases for sufficiently large values of .moreover , it was found that when the maximum of is known as a function of the fixed value , then from this function one can also obtain the maximum of in another optimal discrimination strategy where the error probability is fixed .optimal discrimination with a fixed rate has been also investigated for a modified optimization problem where in contrast to the usual assumption the prior probabilities of the states are unknown .solutions for optimal state discrimination with a fixed rate of inconclusive results and with given prior probabilities of the states have been recently derived in three independent papers : starting from the operator equations for the optimality conditions , in ref . we obtained analytical solutions for the discrimination of symmetric states and of two states occurring with arbitrary prior probabilities , where the two states are either pure or belong to a certain class of mixed qubit states . in refs . 
and the respective authors showed that the optimization problem with fixed can be solved by reducing it to a resulting minimum - error problem the solution of which is known and to an additional optimization .they derived analytical solutions for discriminating between two pure states occurring with arbitrary prior probabilities and between the trine states , and also for the discrimination of geometrically uniform states .the present paper goes beyond these previous investigations in several respects .based on the ideas of our earlier work we study general properties of the optimal measurement for discriminating with fixed between arbitrary qudit states in a -dimensional hilbert space .specializing on the case , we develop a method for treating the optimal discrimination with fixed between arbitrary qubit states , occurring with arbitrary prior probabilities .in contrast to the previous papers we take into account that often the optimal measurement is not unique , which means that the maximal probability of correct results with fixed can be obtained by a number of different measurements . in the special case , where the problem corresponds to minimum - error discrimination , our method differs from the approaches developed previously for treating the minimum - error discrimination of arbitrary qubit states .we obtain explicit analytical results for several problems that have not been solved before , mostly for the discrimination with fixed between qubit states which posses a certain partial symmetry , but also for discriminating equiprobable qubit states and for the discrimination between a pure state and a uniformly mixed state in a -dimensional hilbert space .the paper is organized as follows . in sec .ii we start by considering the optimal discrimination with fixed for qudit states . after thiswe specialize on the discrimination of qubit states in a two - dimensional hilbert space and develop a method for solving the problem . sec .iii is devoted to the discrimination of partially symmetric qubit states .we conclude the paper in sec .iv with discussing the relation of our method to previous studies of the minimum - error discrimination of arbitrary qubit states and with a brief summary of results .the detailed derivations referring to sec .iii are presented in the appendix .we consider the discrimination between qudit states , given by the density operators ( ) and prepared with the respective prior probabilities , where .a complete measurement performing the discrimination is described by positive detection operators with where is the identity operator in the -dimensional hilbert space jointly spanned by the density operators .the conditional probability that a quantum system prepared in the state is inferred to be in the state is given by , while is the conditional probability that the measurement yields an inconclusive outcome which does not allow to infer the state .the overall probability of inconclusive results is then given by and the probability of correct results , , reads where denotes the total rate of errors .our task is to maximize , or minimize , respectively , under the constraint that is fixed at a given value . upon introducing a hermitian operator and a scalar real amplifier the necessary and sufficient optimality conditions take the form ( ) . 
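the operator conditions referred to here can be written, in one common formulation of this optimization (the explicit form below is supplied for orientation and follows the dual formulation standard in this context, with the notation Z for the hermitian operator, a for the amplifier and \rho = \sum_k \eta_k \rho_k, rather than being quoted from the article), as

\[
Z - a\,\rho \ge 0, \qquad \Pi_0\,(Z - a\,\rho) = 0 ,
\]
\[
Z - \eta_j \rho_j \ge 0, \qquad \Pi_j\,(Z - \eta_j \rho_j) = 0 \qquad (j = 1,\dots,N),
\]

and when they hold, the maximum rate of correct results at the fixed value of Q is \(P_c^{\max} = \mathrm{Tr}\,Z - a\,Q\), since \(\sum_j \eta_j \,\mathrm{Tr}(\rho_j \Pi_j) = \mathrm{Tr}[Z(\mathbb{1}-\Pi_0)] = \mathrm{Tr}\,Z - a\,\mathrm{Tr}(\rho\,\Pi_0)\).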
for and conditions refer to minimum - error discrimination , where .provided that eqs .( [ optz1 ] ) and ( [ optz2 ] ) are fulfilled , the rate of correct results takes its maximum value , given by the detection operators ( ) satisfying the optimality conditions need not always be unique , which means that different measurements can be optimal , yielding the same value .this will be outlined later in this section and will be illustrated by examples in sec .it is useful to introduce the ratio of the maximal rate of correct results , and the rate of conclusive results , by taking the derivative we find with the help of eq .( [ pcmax- ] ) that here we used the fact that due to the positivity constraint the condition is equivalent to since for , the operator must turn into a full - rank operator when approaches unity .( [ optz1 ] ) therefore implies that for sufficiently large values of the condition is necessarily fulfilled .when for the given states the optimality conditions can be satisfied with then due to eq .( [ lim3 ] ) the ratio grows with increasing if and stays constant for .( [ lim3a ] ) defines as the smallest value of at which further increasing does not yield any advantage , since the relative rate of correct results , given by , remains constant .if the operator can not be a full - rank operator , due to the equality condition in eq .( [ optz1 ] ) . for , this condition does not put any restriction on the rank of .( [ lim3a ] ) takes into account that for the expressions for and determining the optimal solution are not unique , see the example at the end of sec .ii a. let us first review the limiting case of large where does not change when is increased , which means that after substituting this expression for into eq .( [ optz2 ] ) we obtain the conditions and for , where and .these conditions require that the constant , that is the largest eigenvalue of , characterizes the maximum confidence that can be achieved for the individual measurement outcome .in other words , is equal to the maximum achievable value of the ratio which denotes the conditional probability that the outcome is correct given it was obtained . due to eq .( [ a - opt ] ) the conditions for optimality following from eq .( [ optz2 ] ) require that for states where that is where the support of spans the whole hilbert space .this means that only the states for which the maximum confidence is largest , , are guessed to occur in an optimal measurement where , that is where according to eq .( [ lim3a ] ) . using eq .( [ pcmax- ] ) with and we arrive at where is defined by eq .( [ lim3a ] ) .the actual value of depends on the given states . in special cases where the maximum relative probability of correct results not be increased at all by admitting inconclusive results but stays always equal to , see for instance the example of equiprobable states resolving the identity operator discussed in ref . .if the explicit expression for can be easily determined with the help of eq .( [ a - opt ] ) . the equality conditions in eq .( [ optz2 ] ) then restrict the supports of the non - zero operators to certain known subspaces . is the smallest value of that satisfies on the condition that the operators have their supports in the given subspaces . is a full - rank operator for when decreases starting from 1 , at a certain value one of the positive eigenvalues of may become zero .if an eigenvalue immediately turns negative when is decreased beyond , then is equal to . 
in this case is a full - rank operator in the whole region , see the example at the end of sec .ii a , but this need not be valid in general .now we exploit the optimality conditions without the restriction to the limiting case of large . if , the equality in eq .( [ optz1 ] ) can be only fulfilled when the support of the operator does not span the full hilbert space .this implies that at least one of its eigenvalues is equal to zero and the determinant therefore vanishes , yielding the condition without lack of generality we suppose that in the optimal measurement the detection operators are different from zero for states ( ) and vanish for the remaining states , which means that the remaining states , if any , are never guessed to occur , in analogy to eq .( [ det1 ] ) we then arrive at the condition the requirement that the determinants in eqs .( [ det1 ] ) and ( [ det2 ] ) vanish yields a system of real equations with only real variables , namely the parameter and the real quantities determining the matrix elements of the hermitean operator acting in .when the quantum states and their prior probabilities are completely arbitrary , the system of equations therefore does not have a solution for .in special cases , however , a solution can exist where more than states are guessed to occur , for instance when the states have a certain symmetry . in these caseswe obtain the same operator and the same parameter when for some of the states we drop eq .( [ det2 ] ) , keeping it for or less states , and when we put the detection operators corresponding to the dropped states equal to zero , see the examples treated in sec .the measurement then differs from the one described by eq .( [ op ] ) , but yields the same value when an optimal measurement exists where more than states are guessed , this measurement is therefore not unique .the above considerations lead to the conclusion that we never need to make a guess for more than states in order to discriminate optimally between states in a -dimensional hilbert space with a fixed rate of inconclusive results .this conclusion generalizes previous results that have been obtained in a different way in the context of minimum - error discrimination , where .it follows that for discriminating optimally with fixed between more than arbitrary states we have to consider all possible subsets containing states separately , where , and we have to find the maximal rates of correct results for discriminating the states within each subset .the largest of these rates then determines the optimal solution for discriminating the states , in analogy to the findings obtained for the minimum - error discrimination of qubit states .the operator and the parameter satisfying eqs .( [ det1 ] ) - ( [ det2 ] ) only determine the optimal solution if the optimality conditions , eqs .( [ optz1 ] ) and ( [ optz2 ] ) , can be fulfilled under the constraint that .if for this is not the case , we have to try whether for a solution exists , and so on until where always the same state is guessed to occur when a conclusive outcome is obtained .the described procedure will be applied in sec .ii b for the case of qubit states .an interesting special case arises when one of the states , say the first , has the property that for , implying that .it is obvious that eqs .( [ optz1 ] ) and ( [ optz2 ] ) and also the completeness relation , eq .( [ compl-0 ] ) , are then fulfilled if the explicit expressions for and follow from replacing by in eq .( [ optz1 ] ) , taking into account that and 
that the positivity constraint has to be satisfied . the condition restricts the validity of this solution to a certain region , and for this region it follows from eq .( [ pcmax- ] ) that , see the example treated below .clearly , for minimum - error discrimination , where , eq . ( [ ma ] ) yields , which means that the maximum probability of correct results can be obtained without performing any measurement , simply by always guessing the state with the largest prior probability to occur . in general, it is very hard to obtain analytical solutions of the optimization problem with fixed for in our previous paper we derived the solution for a special class of linearly independent symmetric pure qudit states . in the followingwe treat another example , which contains also the special case discussed above .let us consider the optimal discrimination between a uniformly mixed qudit state and a pure qudit state , both living in the same hilbert space and occurring with the prior probabilities and , respectively . taking into account that the spectral representation of is given by where is a -dimensional projector , we find that the optimality conditions , eqs .( [ optz1 ] ) - ( [ optz2 ] ) , are satisfied for , or , equivalently , for , if while for , corresponding to , we obtain in agreement with eq .( [ ma ] ) the solution in both cases we get . since the detection operators have to be positive , this solution , where , only holds true if with the help of eq .( [ pcmax- ] ) we find that for this result agrees with the solution derived earlier for minimum - error discrimination .note that for where optimal measurement is not unique , as becomes obvious from comparing eqs .( [ umix2 ] ) and ( [ umix3 ] ) .it is easy to check that the relative rate of correct results , , grows for both lines of eq .( [ umix1 ] ) when the fixed rate is increased , until it reaches unity for .since according to eq .( [ a - opt ] ) the maximum confidences for discriminating and are given by and respectively , eq .( [ umix1 ] ) yields hence for the optimal measurement can be alternatively obtained when with , as becomes obvious from eq .( [ lim2 ] ) . using eqs .( [ optz1 ] ) - ( [ optz2 ] ) we find that the optimality conditions are satisfied if since the detection operators have to be positive , this result only holds true if that is we therefore arrive at which because of eq .( [ pc ] ) expresses the fact that errors do not occur in the optimal measurement for hence in our example is the smallest rate of inconclusive results for which an unambiguous discrimination is possible .( [ umix2 ] ) or ( [ umix3 ] ) , as well as eq .( [ umix5 ] ) , show that for the optimal measurement is a projection measurement with , , and , which discriminates unambiguously , that is with the maximum confidence , and yields an inconclusive result when the state is present .while the operator is a rank - one operator for , it follows from eq .( [ umix5 ] ) that has the rank if that is in this example turns into a full - rank operator at . as mentioned after eq .( [ lim3a ] ) , for the solution does not have any practical relevance since the relative rate of correct results remains constant with growing and its absolute rate decreases . 
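as a small consistency check of this example one may use the standard maximum-confidence expression (an assumption of this note, not quoted from the article), namely that the maximum confidence for state \(\rho_j\) equals the largest eigenvalue of \(\eta_j\,\rho^{-1/2}\rho_j\rho^{-1/2}\) with \(\rho = \eta_1\rho_1 + \eta_2\rho_2\). for \(\rho_1 = \mathbb{1}/d\) and \(\rho_2 = |\psi\rangle\langle\psi|\) this gives

\[
C_1^{\max} = \frac{\eta_1/d}{\eta_1/d} = 1 , \qquad
C_2^{\max} = \frac{\eta_2}{\eta_1/d + \eta_2} = \frac{\eta_2\, d}{\eta_1 + \eta_2\, d} < 1 ,
\]

since \(|\psi\rangle\) is an eigenstate of \(\rho\) with eigenvalue \(\eta_1/d + \eta_2\) while every state orthogonal to it has eigenvalue \(\eta_1/d\). this is consistent with the statement above that for large Q the optimal measurement identifies the uniformly mixed state unambiguously, i.e. with confidence 1, and returns an inconclusive outcome whenever the pure state is present.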
in the rest of the paper we do not provide the explicit expressions of the detection operators for from now on we specialize our investigations on the optimal discrimination of qubit states in a two - dimensional joint hilbert space it is sufficient to treat the case where the optimality conditions are satisfied with ( ) and when the first condition does not hold , the optimal measurement is determined by eq .( [ ma ] ) , while the violation of the second condition corresponds to the limiting case of large where eq .( [ lim2 ] ) applies .following our earlier paper we conclude that for and the equalities in the optimality conditions , eqs .( [ optz1 ] ) and ( [ optz2 ] ) , imply that the non - zero optimal detection operators and are proportional to the projector onto the eigenstates and belonging to the eigenvalue zero of the operators and , respectively .hence if and for with , see eq .( [ op ] ) , the optimality conditions require that , where and . since we supposed that for the completeness relation , eq .( [ compl-0 ] ) , is given by due to eqs .( [ detec0 ] ) and ( [ detec1 ] ) the projectors onto the normalized states and that are orthogonal to and , respectively , are determined by the operator and the parameter via the relations , where .( [ compl ] ) we find that the completeness relation takes the alternative form for solving the optimization problem we have to determine the parameter and the operator which satisfy eqs .( [ detec0 ] ) - ( [ compl ] ) , or , equivalently , eqs .( [ perp ] ) - ( [ compl1 ] ) , for a certain value of on the condition that the constants are positive . for this purposewe can proceed in two steps : in the first step we use eq .( [ det1 ] ) together with the positivity constraint in order to express and in dependence of the matrix elements of taken with respect to an orthonormal basis in .it is advantageous to choose the particular basis that is the eigenbasis of . introducing the spectral representation where , we represent the operator as the requirement see eq .( [ det1 ] ) , then leads to = |z_{01}|^2. ] with since these conditions can only be simultaneously satisfied for two or more different phases if , we have to search for an operator where =0.\ ] ] we note that the requirement =0 ] . taking the positivity constraint and the relation into account, we find that the optimality condition in eq .( [ optz1 ] ) is satisfied if where the expression for is in accordance with eq .( [ a - sol ] ) .note that if both eigenvalues of vanish which means that and which corresponds to the case , yielding eq .( [ lim2 ] ) . 
due to eq .( [ pcmax- ] ) the maximum probability of correct results for that is for takes the form in the second step we apply eq .( [ det2 ] ) and the completeness relation , represented by eq .( [ compl1 ] ) , together with the positivity constraints given by eq .( [ constr ] ) .( [ m - det ] ) we arrive at the conditions =\eta^2 p^2\ , b(1\!-b)\,\;\;\;{\rm if}\;a_1>0,\qquad\\ ( z_{00}\!-\!\eta^{\prime } s^{\prime})[z_{11}\ !-\!\eta^{\prime}(1\!-\!s^{\prime})]=\eta^{\prime 2}p^{\prime 2}c(1\!-\!c)\;\;\,\,{\rm if}\;a_2>0,\quad \end{array}\ ] ] where we introduced because of eq .( [ detec1 ] ) the condition is equivalent to requiring that at least one state from the first group is guessed , and means that at least one state from the second group is inferred to occur .next we make use of eqs .( [ compl1 ] ) and ( [ perp ] ) .because of the explicit form of the operator , given by eq .( [ pcmax1 ] ) , the diagonal matrix elements of eq .( [ compl1 ] ) lead to the conditions where the constants and read since the non - diagonal elements of eq .( [ compl1 ] ) yield the condition due to the symmetry properties of the partially symmetric states , eq .( [ compl2c ] ) is certainly fulfilled if that is if in the optimal measurement the detection operators for the individual states obey the same symmetry as their density operators , as can be seen with the help of eqs .( [ detec11 ] ) and ( [ perp ] ) .however , other choices of the coefficients and therefore of the detection operators are also possible .taking into account that for see eq .( [ phi ] ) , we find for instance that if for even numbers and for odd numbers while all other coefficients vanish .similarly , various solutions with exist for the coefficients that belong to the states in the second group and depend on . moreover , if eq .( [ compl2c ] ) may be also solved in a way where the terms referring to the two groups of states do not vanish separately . in order to find an optimal measurement we have to determine the values of , , and that satisfy eqs .( [ two1a ] ) - ( [ f ] ) with and , where according to eq .( [ optz2 ] ) the positivity constraints have to be fulfilled . for ,where both lines of eq .( [ two1a ] ) apply , the positivity constraints are satisfied provided that while for and the additional condition \geq \eta^{\prime 2 } p^{\prime 2 } c(1-c)\ ] ] has to be taken into account .a similar condition arises when and since in general the resulting solutions are rather involved , we treat the general case only in implicit terms and provide explicit derivations of special solutions in the appendix .two different cases have to be considered : _ ( i ) guessing states from both groups is optimal ._ for eq .( [ two1a ] ) represents a system of two coupled equations from which we obtain expressions for and after solving a quadratic equation . when these expressions are inserted into eqs .( [ compl2a ] ) and ( [ compl2b ] ) , the resulting linear system of equations yields the values of and , which depend on due to eq .( [ f ] ) . provided that these values are indeed positive and that eq .( [ constr2 ] ) is satisfied , we have obtained the optimal solution .if this is not the case , a measurement where states from both groups are guessed can not be optimal . _( ii ) guessing states from only one group is optimal . _ to be specific , let us assume that only the states from the first group are guessed , which means that and . 
from eqs .( [ compl2a ] ) and ( [ compl2b ] ) with we obtain the two equations .\ ] ] using in addition the first line of eq .( [ two1a ] ) and taking eq .( [ f ] ) into account , we arrive at three equations for determining and in dependence of . provided that indeed and that eqs .( [ constr2 ] ) and ( [ constr2a ] ) are satisfied , the optimal solution has been determined . for the case where only the states from the second group are guessed we can proceed in an analogous way . in the next two subsections we present the complete solutions for two simplified but non - trivial discrimination problems for partially symmetric states . in our first problemwe suppose that the purity of the states in the two groups is the same , , and that the states occur with the same prior probability , .we denote the eigenstates of in such a way that , given by eq .( [ rhodiag ] ) , is the largest eigenvalue , \geq \frac{1}{2}.\ ] ] to be specific , we suppose that which implies that if . with the help of eq .( [ maxconf1 ] ) this yields the relation for the maximum achievable confidences for discriminating the states in the two groups , where the equality sign holds if . as derived in the appendix , the maximum probability of correct results with fixed then given by the solution where and , in agreement with eq .( [ pmax ] ) , while and are defined by eqs .( [ solq3 ] ) and ( [ a-2 ] ) in the appendix , respectively .three different regions of have to be distinguished : for , corresponding to the full lines in fig .2 , states from both groups are guessed in the optimal measurement. clearly , this only applies if , that is if . is then determined by , in agreement with the general result derived in eq .( [ pmax ] ) . in this regionthe optimum detection operators are and for , where the states are given by eq .( [ psi - j ] ) .the values of the constants can be determined by applying the method described in sec .iii b , using the explicit expressions for and given in the appendix . in the region where , corresponding to the dashed lines in fig .2 , only the states of the first group are guessed in the optimal measurement .the optimal detection operators follow from the expression for , see eq .( [ z0 ] ) , and from eqs .( [ perp ] ) and ( [ detec11 ] ) .the constants can be obtained as described sec .iii b , with the help of the value of given by eq .( [ a1 ] ) . if and therefore the second line of eq .( [ solq ] ) holds true in the whole region . since , the bloch vectors of the statesare then confined to the upper hemisphere of the bloch sphere .the region where corresponds to the limiting case of large described in sec .ii a. in this region the ratio , does not increase anymore with growing but stays equal to , see the dotted lines in fig .2 . the squares in fig . 2 indicate the points where for different numbers of and .two special cases are worth mentioning . 
for ,that is for minimum - error discrimination , we obtain from eq .( [ solq ] ) when the first line applies , states from both groups are guessed in the optimal measurement , while otherwise only states from the first group are guessed to occur .interestingly , the solution only depends on the total number of the states and is independent of their distribution over the two groups .the second special case refers to , that is to the discrimination of equiprobable symmetric mixed qubit states .( [ r1 ] ) then reduces to and requires that .this means that the first line of eq .( [ solq ] ) does not apply , and that is determined by the first line of eq .( [ a-2 ] ) .this solution coincides with our earlier result that was derived for symmetric detection operators .in contrast to this , the present derivation does not impose any restriction on the choice of the detection operators .in particular , it shows that for symmetric mixed qubit states an optimal discrimination with fixed can be always accomplished when not more than three ( for odd ) or two ( for even ) states are guessed to occur , as follows from eqs .( [ even ] ) and ( [ odd ] ) . for , that is for minimum - error discrimination , the optimum measurement therefore can be always realized by a simple projection measurement if is even .now we assume that the states are pure , , and that the respective prior probabilities and in the two groups of states can be different , where . in order to obtain the complete solution for arbitrary values of we have to consider both the cases where the relations and hold true between the maximum confidences for the states in the two groups , see eqs .( [ pure4a ] ) and ( [ pcmax2 ] ) in the appendix .we present explicit results for the simplest case , where , and , that is where the states are given by latexmath:[\[\label{mirror } and occur with the prior probabilities for and , and for respectively .these states correspond to the three mirror - symmetric pure states the minimum - error discrimination of which has been previously investigated .the condition holds true provided that , where for we get as shown in the appendix , for the maximum probability of correct results reads where is determined by eqs .( [ solq3 ] ) and ( [ solq2a ] ) with , and where , , and are given by eqs .( [ crit ] ) - ( [ mirror1a ] ) with .on the other hand , for we obtain where , and are determined by eqs .( [ mirror1a ] ) - ( [ mirror3 ] ) . in the regions of where the solution is given by , corresponding to the dashed lines in fig .3 , only the states and are guessed to occur in the optimal measurement . on the other hand ,all three states are guessed if for and if for , which corresponds to the full lines in fig .3 . for find that if as outlined in the appendix , for the first line of eq .( [ pure3prime ] ) does not apply , and the second line is valid in the whole range since the dotted lines in fig .3 correspond to the limiting case of large , and the squares indicate the points where this limiting case is reached . 
for minimum - error discrimination , where , the maximum probability of correct results does not depend on the relation between and .( [ solq1 ] ) and ( [ pure3prime ] ) yield \ ; & \mbox{if } \quad\\ \!\!\eta\left(1 + 2\sqrt{b(1-b}\right ) \ ; & \mbox{if } , \quad\\ \end{array } \right.\ ] ] which coincides with the result obtained already in ref .when all three states are guessed in the measurement performing minimum - error discrimination , while otherwise only the states and are guessed to occur .before concluding the paper , we briefly discuss the relation of our method to previous investigations of the minimum - error discrimination of arbitrary qubit states , where . from eq .( [ perp ] ) we obtain the representation , where due to eq .( [ compl1 ] ) for minimum - error discrimination the condition has to be satisfied with non - negative values of . upon eliminating the operator , we obtain from the first equality in eq .( [ mx ] ) for any pair of states the equation ( ) .( [ my ] ) has been recently derived in an alternative way , and has been applied to study the minimum - error discrimination of qubit states using a geometric formulation .in contrast to this , in our method , which refers to the general case , the operator is not eliminated . rather , our approach essentially rests on determining and is therefore for related to the treatments of minimum - error discrimination in refs . , and . in this paperwe investigated the discrimination of mixed quantum states by an optimal measurement that yields the maximum probability of correct results , , while the probability of inconclusive results is fixed at a given value .for the discrimination of qudit states in a -dimensional hilbert space , we discussed the general properties of the optimal measurement .moreover , we derived the analytical solution for optimally discriminating with fixed between a uniformly mixed and a pure qudit state . in the main part of the paper we specialized on the optimal discrimination of qubit states in a two - dimensional hilbert space and developed a general method to obtain the solution .we studied the special case where the prior probabilities of the qubit states are equal , and we also treated the discrimination between four or less arbitrary qubit states with fixed .as an illustrative application of our method , we derived explicit analytical results for discriminating qubit states which posses a partial symmetry .we emphasize that apart from determining , our method also allows to consider the various possible realizations of the optimal measurement for a given discrimination problem .in particular , we found that for discriminating symmetric qubit states the maximum probability of correct results with fixed can be for instance also achieved by a measurement where only three of the states are guessed to occur when is odd , and only two of the states when is even , instead of guessing all states .note added : after submitting this work a related paper appeared .in this appendix we provide the detailed derivations for the results presented in sec .we start by treating the case where only states from the first group are guessed , that is using eq .( [ first ] ) and the first line of eq .( [ two1a ] ) , we obtain with and . from eq .( [ z0 ] ) it follows that the condition is equivalent to where ^{2}\ ] ] with for . 
making use of eq .( [ f ] ) we therefore arrive at and due to eq .( [ first ] ) we obtain the explicit result in the upper line we took into account that corresponds to which implies that , and we used a similar relation for the lower line . since , eq .( [ solq2 ] ) can only be fulfilled and hence eq . ( [ z0 ] ) can only determine the optimal solution if where corresponds to from eqs .( [ z0 ] ) and ( [ sol ] ) we obtain the maximum probability of correct results with since the expressions in the upper and lower lines of eqs .( [ solq2 ] ) - ( [ a-2 ] ) are identical for and we can replace the conditions by , and similarly by , if we extend the restriction to .( [ a-2 ] ) describes the optimal solution provided that and determined by eq .( [ z0 ] ) , satisfy the positivity constraints given by eq .( [ constr1 ] ) .analogous results can be obtained for the case where only states from the second group are guessed to occur , that is where .we still need to study the case where states from both groups are guessed in the optimal measurement , that is where and .we shall do this in the following , where we derive the complete solutions for the problems discussed in secs .iii c and iii d. _ equiprobable states with equal purity ._ in accordance with eq .( [ r1 ] ) we assume that and , which for and corresponds to . supposing that states from both groups are guessed , that is , we obtain from eq .( [ two1a ] ) the solution which clearly satisfies the positivity constraints given by eq .( [ constr1 ] ) . taking into account that , eqs .( [ compl2a ] ) - ( [ f ] ) yield the constants and . since , cf .( [ r1 ] ) , can not be negative .on the other hand , the condition requires that does not exceed the critical value . with the help of eq .( [ sol ] ) we thus arrive at the first line of eq .( [ solq ] ) . the second line of eq .( [ solq ] ) refers to the case where the solution determined by eqs .( [ z0 ] ) and ( [ a-2 ] ) is optimal , as will be shown below by verifying that eq .( [ constr1 ] ) is satisfied .the third line corresponds to eq .( [ lim2 ] ) . with the help of eq .( [ solq2a ] ) we find after a little algebra that and . hence for it follows that which means that for always the first line of eq .( [ a-2 ] ) applies .it remains to be shown that for the positivity constraints given by eq .( [ constr1 ] ) are fulfilled when and are determined by eq .( [ z0 ] ) . from eqs . ( [ constr2 ] ) and ( [ constr2a ] ) with and find after minor algebra that eq .( [ constr1 ] ) is satisfied if , which because of eq . ( [ solq2 ] ) yields the two conditions if and if the first condition requires that .the second condition is fulfilled for , as becomes obvious from the second line of eq .( [ solq2 ] ) and from the validitiy of the relation hence eq . ( [ constr1 ] )is indeed satisfied and we have derived eq .( [ solq ] ) . _ pure states with different prior probabilities . _ for we obtain with the help of eq .( [ maxconf1 ] ) the relation where and , while with .supposing that states from both groups are guessed , we get from eq .( [ two1a ] ) the solutions and which have to positive when eq .( [ constr2 ] ) holds . to be specific, we again assume that .the condition then implies that , and a corresponding relation is valid when . 
from eq .( [ sol ] ) we obtain the solution which holds true for certain regions of where the positivity constraint given by eq .( [ constr1 ] ) is satisfied and where eqs .( [ compl2a ] ) - ( [ f ] ) , resulting from the completeness relation , can be fulfilled with positive values of and . outside these regionsonly states from one of the groups will be guessed in the optimal measurement .now we specialize on the discrimination of three mirror - symmetric pure states , given by eq .( [ mirror ] ) , where and .( [ maxconf1 ] ) yields the respective maximum confidences for and , and for the state .provided that all three states are guessed , that is , we obtain from eq .( [ two1a ] ) the solution and . for ,that is for , it follows that the relation holds true when all three states are guessed .( [ compl2a ] ) - ( [ f ] ) then yield the solutions and . herewe introduced a critical value , given by ^ 2 } , \quad{\rm where } \;\ ; q_{cr}\geq0\;\ ; { \rm if}\;\;\eta \leq \eta_{cr}\ ] ] with ^{-1} ] taking the definition of into account , we therefore obtain from eq . ( [ solq2 ] ) the two conditions if and if for , or , respectively , the first condition requires that , while the second condition is always fulfilled for , as follows from the second line of eq .( [ solq2 ] ) and from the relation which holds for . for second condition requires that since , which means that it only applies if , or , respectively , where the first condition is always satisfied .( [ constr1 ] ) indeed restricts the range of validity of eq .( [ z0 ] ) to the regions if and if author acknowledges partial financial support by deutsche forschungsgemeinschaft dfg ( sfb 787 ) . | we study the discrimination of mixed quantum states in an optimal measurement that maximizes the probability of correct results while the probability of inconclusive results is fixed at a given value . after considering the discrimination of states in a -dimensional hilbert space , we focus on the discrimination of qubit states . we develop a method to determine an optimal measurement for discriminating arbitrary qubit states , taking into account that often the optimal measurement is not unique and the maximum probability of correct results can be achieved by several different measurements . analytical results are derived for a number of examples , mostly for the discrimination between qubit states which possess a partial symmetry , but also for discriminating equiprobable qubit states and for the dicrimination between a pure and a uniformly mixed state in dimensions . in the special case where the fixed rate of inconclusive results is equal to zero , our method provides a treatment for the minimum - error discrimination of arbitrary qubit states which differs from previous approaches . |
a functional regression model with functional response variable can be defined by where ( ) stands for batches ( or curves ) of functional data , is an unknown nonlinear function , depending on a set of functional covariates and a set of scalar covariates , and is the random error .a special case of such model is the following concurrent regression model with functional covariates ( see e.g. * ? ? ?* ) however , when the relationship between the response and the covariates can not be justified as linear , it is intractable to model the function nonparametrically for multi - dimensional since most nonparametric regression models suffer from the _ curse of dimensionality_. a variety of alternative approaches with special model structures have been proposed to overcome the problem ; examples include dimension reduction methods , the additive model ( see e.g. * ? ? ?* ) , varying - coefficient model ( see e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , and the neural network model ( see e.g. * ? ? ? proposed a gaussian process functional regression ( gpfr ) model , which is defined by where is the mean structure of the functional data and represents a gaussian process regression ( gpr ) model having zero mean and covariance kernel ( for the detailed definition of gaussian process regression models , see * ? ? ?* ; * ? ? ?this nonparametric concurrent functional regression model can address the regression problem with multi - dimensional functional covariates and model the mean structure and covariance structure simultaneously ; see the detailed discussion in .the aim of this paper is to extend the concurrent gpfr model to situations where the response variable , denoted by , is known to be non - gaussian .the work is motivated by the following example , concerning data collected during standing - up manoeuvres of paraplegic patients .the outputs are the human body s standing - up phases during rising from sitting position to standing position .specifically , takes value of either 0 , 1 or 2 , corresponding to the phases of ` sitting ' , ` seat unloading and ascending ' or ` stablising ' respectively , required for feeding back to a simulator control system .since it is usually difficult to measure the body position in practice , the aim of the example is to develop a model for reconstructing the position of the human body by using some easily measured quantities such as motion kinematic , reaction forces and torques , which are functional covariates denoted by .this is to investigate the regression relationship between the non - gaussian functional response variable and a set of functional covariates .since the standing - up phases are irreversible , is an ordinal response variable , taking value from three ordered categories .if we assume that there exists an unobservable latent process associated with and the response variable depends on this latent process , then by using a probit link function , we can define a model as follows : where , , and are the thresholds .now the problem becomes how to model by the functional covariates , or how to find a function such that .more discussion of this example is given in section [ para_data ] and appendix g of the supplementary materials .generally , letting be a given link function , a generalized linear regression model is defined as . 
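A small sketch of the latent-process link used in the paraplegia example may help fix ideas: a three-category cumulative (ordinal) probit model maps the latent value to the probabilities of the phases 'sitting', 'seat unloading and ascending' and 'stabilising'. The threshold values below are placeholders, since the displayed equation and its thresholds are not legible in this copy.

    import numpy as np
    from scipy.stats import norm

    def ordinal_probit_probs(f, cuts=(-1.0, 1.0)):
        """Cumulative-probit probabilities of the three standing-up phases given the
        latent value f:  P(z=0) = Phi(c0-f),  P(z=1) = Phi(c1-f) - Phi(c0-f),
        P(z=2) = 1 - Phi(c1-f).  The thresholds c0 < c1 are placeholders."""
        c0, c1 = cuts
        p0 = norm.cdf(c0 - f)
        p1 = norm.cdf(c1 - f) - p0
        return np.array([p0, p1, 1.0 - p0 - p1])

    # as the latent process rises, probability mass moves from 'sitting' to 'stabilising'
    for f in (-2.0, 0.0, 2.0):
        print(f, ordinal_probit_probs(f).round(3))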
proposed a generalized linear mixed model to deal with heterogeneity : where is the coefficient for the fixed effect and is a random vector representing random effect .however , if we have little practical knowledge on the relationship between the response variable and the covariates ( such as the case in the above paraplegia example ) , it is more sensible to use a nonparametric model . in this paper, we propose to use a gaussian process regression model to define such a nonparametric model , namely a concurrent generalized gaussian process functional regression ( ggpfr ) model .similar to gpfr model , the advantages of this model include : ( 1 ) it offers a nonparametric generalized concurrent regression model for functional data with functional response and multi - dimensional functional covariates ; ( 2 ) it provides a natural framework on modeling mean structure and covariance structure simultaneously and the latter can be used to model the individual characteristic for each batch ; and ( 3 ) the prior specification of covariance kernel enables us to accommodate a wide class of nonlinear functions .this paper is organized as follows .section 2 proposes the ggpfr model and describes how to estimate the hyper - parameters and how to calculate prediction , for which the implementation is mainly based on laplace approximation . the asymptotic properties , focusing on the information consistency ,are discussed in section 3 .several numerical examples are reported in section 4 .discussion and further development are given in section 5 .some technical details and more numerical examples are provided as the supplementary materials .let be a functional or longitudinal response variable for the -th subject , namely the -th batch .we assume that s are independent for different batches , but within the batch , and are dependent at different points .we suppose that has a distribution from an exponential family with the following density function where and are canonical parameter and dispersion parameter respectively , both functional .we have and , where and are the first two derivatives of with respect to .suppose that is a -dimensional vector of functional covariates .nonparametric concurrent generalized gaussian process functional regression ( ggpfr ) models are defined by and the following here , the unobserved latent variable is modeled by a nonparametric gpr model via a gaussian process prior , depending on the functional covariates .the gpr model is specified by a covariance kernel , and by the karhunen - love expansion where , are the eigenvalues and are the associated eigenfunctions of the covariance kernel .one example of is the following squared exponential covariance function with a nonstationary linear term : where is a set of hyper - parameters involved in the gaussian process prior .the hyper - parameter corresponds to the smoothing parameters in spline and other nonparametric models .more specifically , is called the length - scale .the decrease in length - scale produces more rapidly fluctuating functions and a very large length - scale means that the underlying curve is expected to be essentially flat .more information on the relationship between smoothing splines and gaussian processes can be found in .we can use generalized cross - validation ( gcv ) or empirical bayesian method to choose the value of .when is large , gcv approach is usually inefficient .we will use the empirical bayesian method in this paper ; the details are given in the next subsection .some other covariance 
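The covariance kernel below follows a parametrization that is common in the GPFR literature, a squared-exponential term plus a non-stationary linear term; because the displayed formula is not legible in this copy, this exact form and the hyper-parameter values should be read as assumptions. The snippet also draws one latent curve from the implied Gaussian process prior.

    import numpy as np

    def se_linear_cov(t, tp, v0=1.0, w=10.0, a1=0.5):
        """Assumed kernel form: k(t,t') = v0*exp(-0.5*w*(t-t')**2) + a1*t*t'.
        theta = (v0, w, a1) are the hyper-parameters; w plays the role of the
        (inverse squared) length-scale discussed in the text."""
        return v0 * np.exp(-0.5 * w * (t - tp) ** 2) + a1 * t * tp

    t = np.linspace(0.0, 1.0, 100)
    K = se_linear_cov(t[:, None], t[None, :])
    # one latent curve tau(t) ~ GP(0, k); the jitter keeps the Cholesky factor well defined
    rng = np.random.default_rng(0)
    tau = np.linalg.cholesky(K + 1e-8 * np.eye(len(t))) @ rng.standard_normal(len(t))
    print(tau.shape, K[0, 0], K[0, -1])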
kernels such as powered exponential and matrn covariance functions can also be used ; see more discussion on the choice of covariance function in and . in the model given by the response variable depends on at the current time only , therefore the proposed model can be regarded as a generalization of the concurrent functional linear model discussed in . in this modelthe common mean structure across batches is given by .if we use a linear mean function which depends on a set of scalar covariates only , can be expressed as in this case the regression relationship between the functional response and the functional covariates is modeled by the covariance structure .other mean structures , including concurrent form of functional covariates , can also be used .the proposed model has some features worth noting .in addition to those discussed in section 1 , we highlight that the ggpfr model is actually very flexible .it can model the regression relationship between the non - gaussian functional response and the multi - dimensional functional covariates nonparametrically .moreover , if we had known some prior information between ( or ) and some of the functional covariates , we could easily integrate it by adding a parametric mean part .for example we may define i.e. including a term in the ggpfr similar to the generalized linear mixed model ; an example of such models is provided in appendix g. the nonparametric part can still be modeled by via a gpr model .other nonparametric covariance structure can also be considered ; some examples can be found in , and .however , most of these methods are limited to small ( usually one ) dimensional or the covariance matrix with a special structure . as an example of the ggpfr model, we consider a special case of binary data ( e.g. for classification problem with two classes ) . in this case , .if we use the logit link function , the density function is given by z_m(t ) \ } } { 1 + \exp \{{\mbox{{\boldmath } } } { \mbox{\boldmath }}(t)+\tau_m(t ) \ } } .\label{binden}\ ] ] the marginal density function of is therefore given by where is the density function of , which is a multivariate normal distribution for any given points and depends on the functional covariates and the unknown hyper - parameter .the density functions for other distributions from the exponential families can be obtained similarly .now suppose that we have batches of data from subjects or experimental units . in the -th batch , observations are collected at .we denote , and by , and , respectively , for and .collectively , we denote and , and denote , , and in the same way .they are the realizations of , and at . a discrete ggpfr model is therefore given by for , where is a distribution from the exponential family and . has an -variate normal distribution with zero mean and covariance matrix for .here we assume a fixed dispersion parameter , but the method developed in this paper can be applied to more general cases .we consider the estimation of first . to estimate the functional coefficient , we expand it by a set of basis functions ( see e.g. * ?* ) . in this paper, we use b - spline approximation .let be the b - spline basis functions , then the functional coefficient can be represented as , where the -th column of , , is the b - spline coefficients for .thus , at the observation point , we have , where is an matrix with the -th element . 
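Before turning to the empirical Bayesian estimation, the marginal likelihood integral for a single binary batch can be illustrated by crude Monte Carlo: draw latent curves from N(0, K), evaluate the Bernoulli/logit likelihood, and average. This is only a sketch for intuition (and a possible sanity check of the approximations introduced below), not the estimation method used in the paper.

    import numpy as np

    def mc_marginal_loglik(z, mu, K, n_draws=20000, seed=1):
        """Crude Monte Carlo estimate of log p(z) = log E_tau[ prod_i p(z_i | mu_i + tau_i) ]
        for one batch of binary responses with a logit link, tau ~ N(0, K).
        (In the paper this integral is handled by the Laplace approximation instead.)"""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(K + 1e-8 * np.eye(len(z)))
        tau = (L @ rng.standard_normal((len(z), n_draws))).T      # n_draws x n samples of tau
        f = mu + tau
        loglik = (z * f - np.logaddexp(0.0, f)).sum(axis=1)       # Bernoulli/logit log-likelihood
        m = loglik.max()
        return m + np.log(np.mean(np.exp(loglik - m)))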
in practice ,the performance of the b - spline approximation may strongly depend on the choice of the knots locations and the number of basis functions .there are three widely - used methods for locating the knots : equally spaced method , equally spaced sample quantiles method and model selection based method .the guidance on which method is to use in different situations can be found in .the first method is used in our numerical examples in section 4 which all have equally - spaced time points and the second is adopted in the pbc data in the supplementary materials .the number of basis functions can be determined by generalized cross - validation or aic ( bic ) methods .more details on this issue can be found in .the covariance matrix of depends on and the unknown hyper - parameter .if we use covariance kernel ( [ covfun0 ] ) , its element is given by the covariance matrix involves the hyper - parameter , whose value is given based on the prior knowledge in conventional bayesian analysis .as discussed in , empirical bayesian learning method is preferable for gpr models when the dimension of is large .the idea of empirical bayesian learning is to choose the value of the hyper - parameter by maximizing the marginal density function .thus , as well as the unknown parameter can be estimated at the same time by maximizing the following marginal density or the marginal log - likelihood where is derived from the exponential family as defined in . for binomial distribution, it is given in .obviously the integral involved in the above marginal density is analytically intractable unless has a special form such as the density function of normal distribution .one method to address this problem is to use laplace approximation .we denote then the log - likelihood can be rewritten as let be the maximiser of , then by laplace approximation we have where is the second order derivative of with respect to and evaluated at ( see , for example , * ? ? ?* ; * ? ? ?the procedure of finding the maximiser can be carried out by the newton - raphson iterative method and is given in appendix a of the supplementary materials . however , as pointed out in section 4.1 in , the error rate of the approximation may be since the dimension of increases with the sample size .a better method is to approximate in ( here and in the rest of the section the conditioning on is omitted for simplicity ) by where , is the gaussian approximation to the full conditional density , and is the mode of the full conditional density of for a given . 
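A minimal construction of the B-spline design matrix with entries phi_d(t_i), using cubic splines on equally spaced interior knots (the equally spaced knot option used in the numerical examples); the number of basis functions is left as an argument so that it can be chosen by GCV or BIC as described above.

    import numpy as np
    from scipy.interpolate import BSpline

    def bspline_design_matrix(t, n_basis=10, degree=3, t_range=(0.0, 1.0)):
        """Matrix Phi with Phi[i, d] = phi_d(t_i) for cubic B-splines on equally
        spaced interior knots."""
        lo, hi = t_range
        interior = np.linspace(lo, hi, n_basis - degree + 1)[1:-1]
        knots = np.r_[[lo] * (degree + 1), interior, [hi] * (degree + 1)]
        Phi = np.empty((len(t), n_basis))
        for d in range(n_basis):
            coef = np.zeros(n_basis)
            coef[d] = 1.0
            Phi[:, d] = BSpline(knots, coef, degree)(t)
        return Phi

    t = np.linspace(0.01, 0.99, 50)
    Phi = bspline_design_matrix(t)
    print(Phi.shape, np.allclose(Phi.sum(axis=1), 1.0))   # (50, 10) and the partition of unity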
here , we approximate by taylor expansion to the second order where and depend on the first two derivatives of respectively and evaluated at .thus , where and .we can then use the following fisher scoring algorithm to find the gaussian approximation .starting with , the -th iteration is given by * find the solution from * update and using and repeat ( i ) .after the process converges , say at , we get the gaussian approximation which is the density function of the normal distribution we can then calculate by maximizing using the approximation .now we consider two types of prediction problems .first suppose that we have already observed some data for a subject , say observations in the -th batch , and want to obtain prediction at other points .this can be for one of the batches or a completely new one .the observations are denoted by which are collected at .the corresponding input vectors are , and we also know the subject - based covariate .it is of interest to predict at a new point for the -th subject given the test input .secondly we will assume there are no data observed from the subject of interest except the subject - based covariate and want to predict at a new point with the input .we use to denote all the training data and assume that the model itself has been trained ( i.e. all the unknown parameters have been estimated ) by the method discussed in the previous section .the main purpose in this section is to calculate and , which are used as the prediction and the predictive variance of .we now consider the first type of prediction .let be the underlying latent variable at , then ( for convenience we ignore the subscript ) and satisfy , and the expectation of conditional on is given by ( [ hmi ] ) : it follows that = \int h({\mbox{{\boldmath }}}{\hat{{\mbox{\boldmath }}}}^t{\mbox{\boldmath }}(t^ * ) + \tau^ * ) p(\tau^*|{\mbox{ } } ) d\tau^*. \label{pred_mean0}\ ] ] a simple method to calculate the above expectation is to approximate using a gaussian approximation as discussed around equation , that is , since it is assumed that both and come from the same gaussian process with covariance kernel , we have }\ ] ] where is the covariance matrix of , and is a vector of the covariances between and .thus , where and . from the discussion given in the last paragraph in section 2.2, we have the integrand in is therefore the product of two normal density functions . it is not difficult to prove ( see the details in appendix b of the supplementary materials ) that is still a normal density function then can be evaluated by numerical integration . 
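The Gaussian (Laplace) approximation for a binary batch can be sketched as the standard Newton/Fisher-scoring iteration for a Gaussian-process logit model; the update below solves (K^-1 + W) tau_new = W tau + gradient without forming K^-1 explicitly. It follows the generic GP-classification recipe rather than the paper's exact notation, and the synthetic demonstration data are placeholders.

    import numpy as np

    def laplace_mode_logit(z, mu, K, n_iter=100, tol=1e-10):
        """Newton / Fisher-scoring iteration for the Gaussian approximation to p(tau | z)
        in a binary (logit) batch: returns the mode tau_hat and the covariance
        (K^-1 + W)^-1 of the approximating normal distribution."""
        n = len(z)
        K = K + 1e-8 * np.eye(n)              # jitter
        tau = np.zeros(n)
        for _ in range(n_iter):
            pi = 1.0 / (1.0 + np.exp(-(mu + tau)))
            g = z - pi                        # gradient of the log-likelihood in tau
            W = pi * (1.0 - pi)               # negative Hessian of the log-likelihood (diagonal)
            # (K^-1 + W)^-1 (W*tau + g)  computed as  K (I + W K)^-1 (W*tau + g)
            tau_new = K @ np.linalg.solve(np.eye(n) + W[:, None] * K, W * tau + g)
            if np.max(np.abs(tau_new - tau)) < tol:
                tau = tau_new
                break
            tau = tau_new
        Sigma = np.linalg.inv(np.linalg.inv(K) + np.diag(W))
        return tau, Sigma

    # small synthetic demonstration (placeholder data)
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 30)
    K = np.exp(-0.5 * 25.0 * (t[:, None] - t[None, :]) ** 2)
    z = (rng.random(30) < 0.5).astype(float)
    tau_hat, Sigma = laplace_mode_logit(z, np.zeros(30), K)
    print(tau_hat[:3].round(3), np.diag(Sigma)[:3].round(3))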
to calculate , we use the formula : + { \mbox{var}}[{\mbox{e}}(z^*|\tau^*,{\mbox{ } } ) ] .\label{var1}\ ] ] from the model definition , we have = { \mbox{e}}\big[{\mbox{e}}(z^*|\tau^*,{\mbox{}})\big]^2 - \big({\mbox{e}}[{\mbox{e}}(z^*|\tau^*,{\mbox{}})]\big)^2 \nonumber\\ = & \int \big [ h({\mbox{{\boldmath }}}\hat{b}^t { \mbox{\boldmath }}(t^ * ) + \tau^*)\big]^2 p(\tau^*|{\mbox{ } } ) d\tau^ * - \big[{\mbox{e}}(z^*|{\mbox{}})\big]^2 , \label{var2 } \end{aligned}\ ] ] and = \int { \mbox{var}}(z^*|\tau^*,{\mbox{ } } ) p(\tau^*|{\mbox{ } } ) d\tau^ * = \int b''(\hat{\alpha}^ * ) a(\phi ) p(\tau^*|{\mbox{ } } ) d\tau^ * , \label{var3}\ ] ] where is a function of , and is given by .thus and can also be evaluated by numerical integration .the posterior density in is obtained based on the gaussian approximation to .it usually gives quite accurate results .the methods to improve gaussian approximation were discussed in .they can also be used to calculate from .an alternative way is to use the first integral in to replace in and perform a multi - dimensional integration using , for example , laplace approximation ; see appendix c of the supplementary materials for the details .the second type of prediction is to predict a completely new batch with subject - based covariate .we want to predict at . in this case , the training data are the data collected from the batches . since we have not observed any data for this new batch , we can not directly use the predictive mean and variance discussed above .a simple way is to predict by using , i.e. ignoring in .this approach however does not use the information of , the observed functional covariates .alternatively as argued in , batches to actually provide an empirical distribution of the set of all possible subjects .a similar idea is used here .we assume that , for , if we assume that the new batch or belongs to the -th batch , we can calculate the conditional predictive mean by , formulated by the predictive mean in and the predictive variance in can be calculated as if the test data belong to the -th batch . hereboth and are used .based on the above empirical assumption , the prediction of the response for the test input at in a completely new subject is and the predictive variance is ^{2 } - \big[{\mbox{e}}(z^*|{\mbox{}})\big]^2 .\label{newpvar}\ ] ] we usually use the equal weights , i.e. for . in general these batches may not provide equal information to the new batch . in this casevarying weights may be considered ; see more discussion in .the consistency of gaussian process functional regression method involves two issues .one is related to the common mean in and the other is related to the curve itself ( or a new one ) .the common mean structure is estimated from the data collected from all subjects , and has been proved to be consistent in many functional linear models under suitable regularity conditions ( see * ? ? ?* ; * ? ? ?this paper focuses on the second issue , the consistency of to , one of the key features in nonparametric regression .this kind of problems for gpr related models have received increasing attention in recent years , see for example , and . 
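Once the Gaussian approximation tau* | data ~ N(m*, s*^2) is available, the one-dimensional integrals for the predictive mean can be evaluated by Gauss-Hermite quadrature; for the Bernoulli case the law of total variance in the formulas above collapses to p(1-p). The numbers below are placeholders used only for illustration.

    import numpy as np

    def gauss_hermite_expectation(h, m, s, n_nodes=40):
        """E[h(X)] for X ~ N(m, s^2), by Gauss-Hermite quadrature."""
        x, w = np.polynomial.hermite.hermgauss(n_nodes)
        return float((w * h(m + np.sqrt(2.0) * s * x)).sum() / np.sqrt(np.pi))

    # predictive mean of a binary response at a new point, given the Gaussian
    # approximation tau* | data ~ N(m_star, s_star^2); all numbers are placeholders
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    mu_star, m_star, s_star = 0.3, 0.5, 0.8
    p_mean = gauss_hermite_expectation(lambda tau: sigmoid(mu_star + tau), m_star, s_star)
    p_var = p_mean * (1.0 - p_mean)   # Bernoulli case: the total-variance formula reduces to p(1-p)
    print(round(p_mean, 4), round(p_var, 4))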
considered the posterior consistency of gaussian process prior for normal response , proved the posterior consistency of gaussian process prior for nonparametric binary regression no matter what the mean function of gaussian process prior is set to , and extended the result to poisson distribution .but the consistency for general exponential family distributions is yet to be investigated .meanwhile , proved the information consistency via a regret bound on cumulative log loss . generally speaking ,if the sample size of the data collected from a certain curve is sufficiently large and the covariance function satisfies certain regularity conditions , the prediction based on a gpr model is consistent to the real curve , and the consistency does not depend on the common mean structure or the choice of the values of hyper - parameters involved in covariance function ; see more detailed discussion in . in this section, we discuss the information consistency and extend the result of to a more general context such as following poisson distribution which has not been covered in the literature .similar to other gpr related models , the consistency of to depends on the observations collected from the -th curve only .we assume that the underlying mean function for the -th curve , denoted by , is known .the case where the mean function is estimated from data is discussed in the supplementary materials .for ease of presentation we omit the subscript in the rest of the section and denote the data by at the points , and the corresponding covariate values where are independently drawn from a distribution .let .we assume that is a set of samples taking values in and follows a distribution in exponential family , for an inverse link function , and the underlying process .therefore , the stochastic process induces a measure on space .suppose that the hyper - parameter in the covariance function of is estimated by empirical bayesian method and the estimator is denoted by .let be the true underlying function , i.e. the true mean of is given by .denote then is the bayesian predictive distribution of based on the gpr model .note that depends on since the hyper - parameter of is estimated from the data .it is said that achieves _ information consistency _if \big ) \rightarrow 0 \quad\text { as } n\rightarrow \infty , \label{inf : con}\ ] ] where denotes the expectation under the distribution of and $ ] is the kullback - leibler divergence between and , i.e. =\int p_0(z)\log \frac{p_0(z)}{p_{gp}(z)}dz.\ ] ] * theorem 1 : * under the ggpfr models and and the conditions given in lemma 1 of the supplementary materials , the prediction is information consistent to the true curve if the rkhs norm is bounded and the expected regret term .the error bound is specified in . * remark 1 . *the condition in lemma 1 can be satisfied by a wide range of distributions , such as normal distribution where , binomial distribution ( with the number of trials ) where and poisson distribution where . * remark 2 . *the regret term depends on the covariance function and the covariate distribution .it can be shown that for some widely used covariance functions , such as linear , squared exponential and matrn class , the expected regret terms are of order ; see for the detailed discussion . * remark 3 .* lemma 1 requires that the estimator of the hyperparameter is consistent . 
in appendix d of the supplementary materials we proved that the estimator by maximizing the marginal likelihood based on laplace approximation satisfies this condition when the number of curves and the number of observations on each curve are sufficiently largethis implies that the information consistency in ggpfr models is achieved for the covariance functions listed in remark 2 .a more general asymptotic analysis is to study the convergence rates of both the mean function estimation and the individual curves prediction when the number of curves and/or the number of observations on each curve tend to infinity , as discussed in for the maximum likelihood estimators of the parameters in mixed - effects models .the research along this direction is worth further development .in this section we demonstrate the proposed method with serveral examples .we first use simulated data and then consider the paraplegia data discussed in section 1 .more simulated and real examples are provided in the supplementary materials . *( i ) simulation study . * the true model used to generate the latent process is , where , for each , s are equally spaced points in and is a gaussian process with zero mean and the squared exponential covariance function defined in with , and . in this example , the covariate is the same as .the observations follow a binomial distribution with sixty curves , each containing data points , are generated and used as training data .we use a ggpfr model with binomial distribution and logit link function : where follows a gpr model .cubic b - spline approximation is used to estimate the mean curve , where the knots are placed at equally spaced points in the range and the number of basis functions is determined by bic which is given by bic with being the total number of parameters . a gaussian approximation method as specified around is used to calculate the empirical bayesian estimates of and .table_parameter ] lists the average estimates of the hyper - parameters for , 40 and 60 for ten replications .the empirical bayesian estimates are closer to the true values with increasing .the estimates of mean curve for different along with the true mean curves are presented in the left panels of figure [ simu_mean_curves ] . as discussed in section 3 ,the consistency of to depends on the observations obtained from all training curves .the figures show that the estimated mean curves by the ggpfr method are very close to the true one even for the case of .[ [ para_fig11]interpolation].,title="fig : " ] [ [ para_fig12]interpolation].,title="fig : " ] + [ [ para_fig13]extrapolation].,title="fig : " ] [ [ para_fig14]extrapolation].,title="fig : " ] + [ [ figbetat]].,title="fig : " ] | in this paper we propose a generalized gaussian process concurrent regression model for functional data where the functional response variable has a binomial , poisson or other non - gaussian distribution from an exponential family while the covariates are mixed functional and scalar variables . the proposed model offers a nonparametric generalized concurrent regression method for functional data with multi - dimensional covariates , and provides a natural framework on modeling common mean structure and covariance structure simultaneously for repeatedly observed functional data . the mean structure provides an overall information about the observations , while the covariance structure can be used to catch up the characteristic of each individual batch . 
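A sketch of the kind of simulation described in the study above: batch-specific latent curves are drawn from a Gaussian process with a squared-exponential kernel, added to a common mean curve, passed through the logit link and observed as binomial counts. All numerical settings (number of curves, points per curve, binomial trials, kernel hyper-parameters, mean curve) are placeholders, since the paper's actual values are not legible in this copy.

    import numpy as np

    rng = np.random.default_rng(2077)
    M, n, trials = 20, 81, 10                 # placeholders: curves, points per curve, binomial trials
    t = np.linspace(0.0, 1.0, n)
    mu = np.sin(2.0 * np.pi * t)              # placeholder common mean curve
    K = 0.5 * np.exp(-0.5 * 50.0 * (t[:, None] - t[None, :]) ** 2)   # placeholder SE kernel
    L = np.linalg.cholesky(K + 1e-8 * np.eye(n))

    curves = []
    for _ in range(M):
        tau = L @ rng.standard_normal(n)      # batch-specific latent curve
        p = 1.0 / (1.0 + np.exp(-(mu + tau))) # logit link
        curves.append(rng.binomial(trials, p))
    curves = np.array(curves)                 # M x n array of simulated functional responses
    print(curves.shape, curves.min(), curves.max())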
the prior specification of covariance kernel enables us to accommodate a wide class of nonlinear models . the definition of the model , the inference and the implementation as well as its asymptotic properties are discussed . several numerical examples with different non - gaussian response variables are presented . some technical details and more numerical examples as well as an extension of the model are provided as supplementary materials . key words : covariance kernel , exponential family , concurrent regression models , nonparametric regression . * author s footnote * b. wang is lecturer in statistics , department of mathematics , university of leicester , leicester le1 7rh , uk ( e - mail : ` bw77.ac.uk ` ) . j. q. shi is reader in statistics , school of mathematics and statistics , newcastle university , newcastle ne1 7ru , uk ( e - mail : ` j.q.shi.ac.uk ` ) . the authors thank the associate editor and the reviewers for their constructive suggestions and helpful comments . |
a reciprocal lattice vector ( rlv ) has the form , where are primitive translations of the reciprocal lattice .the integer `` indices '' were originally introduced as the miller indices , which are reciprocals of the intersection points ( in units of , etc . ) , of a crystal lattice plane with the axes along the primitive translation vectors of the direct lattice .the `` 4-index '' notation in a hexagonal crystal denotes an rlv with .the extra , or third , index in is redundant .this index must equal the negative sum of the first two indices .the notation restores symmetry between equivalent directions which is lost if the third index is omitted .this notation has long been used by crystallographers , dating back at least to fedorov in 1890 .an extra miller index is natural in hexagonal symmetry , because , rather than two -plane axes at 120 , the three symmetrical axes , shown in fig .1 , are natural .the plane of atoms intersects all three axes .the reciprocal intersection points are forced by trigonometry to obey the rule .the proof is hinted in fig . 1 .even before bragg scattering was observed and explained ( 1912 - 13 ) , the mathematical concept of a dual or reciprocal lattice was used .after 1912 , physicists recognized the rlv as an x - ray momentum transfer .the `` indices '' which label planes seem secondary unless we are specifically studying an atomic plane .the redundant index may seem only a nuisance .since the advantages of the 4-index notation are not always understood , it is common to omit the third index , as would be done in crystal systems which lack rotations .this note is written in the belief that , once the underlying idea is clearly understood , the four index notation is natural .it can be used to some advantage to label rlv s and also to label directions ] and without commas .the term `` index '' will always refer to a dimensionless version of a component of a vector .-plane of a hexagonal crystal , showing and axes , the third axis ( ) . in redis shown the line of intersection of a plane of atoms with the plane .this red plane intersects the axis at , the axis at , and the axis at .it is always possible to choose the origin such that the intersection is at .then the geometry guarantees that .( b ) plaquette of 2-d hexagonal crystal , showing primitive translations of the direct lattice and of the reciprocal lattice.,width=453 ] in dirac notation , indicates a column vector , while indicates a row vector .however , in this paper notation switches between two and three dimensions and between 2-vector , 3-vector and 4-vector systems . the dirac notation is mostly avoided to reduce ambiguity .when a vector is written alone , it can be assumed a column vector . when written with another vector in a dot product ,the left vector is a row vector and the right vector a column vector .when written as a dyad , the left vector is a column vector and the right vector a row vector . 
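The bookkeeping behind the four-index notation is a one-line rule, i = -(h + k); a minimal helper for converting between the three-index and four-index labels reads as follows.

    def to_bravais(h, k, l):
        """(h k l) -> (h k i l) with the redundant index i = -(h + k)."""
        return (h, k, -(h + k), l)

    def from_bravais(h, k, i, l):
        if h + k + i != 0:
            raise ValueError("invalid four-index label: h + k + i must vanish")
        return (h, k, l)

    print(to_bravais(1, 0, 0))   # (1, 0, -1, 0)
    print(to_bravais(1, 1, 2))   # (1, 1, -2, 2)
    print(from_bravais(1, 0, -1, 0))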
when written in a line of text with commas , the vector is probably a column vector written sideways to save space .the 4-vector `` basis '' is overcomplete .it has both conventional and unconventional features .the basis vectors are , in 4-component notation note that the sum of the first three indices is always zero , which follows from the fact that the first three unit vectors add to zero , .it is perhaps disconcerting that the unit vectors do not `` look like '' unit vectors in the 4-vector notation , but this is a consequence of overcompleteness .for example , the second component of the 4-vector that represents , is , according to eq.([eq:4v ] ) , .the notation makes symmetry explicit .the six translations have their first three indices chosen by the rule , organize three integers chosen from and , in all ways such that they add to zero .because of the symmetrical mathematics , the scalar product ( _ e.g. _ ) in 4-vector form is simple but unconventional : this can easily be verified , and will be explained in sec .when one drops the redundant third index from the 4-index notation , the remaining indices express vectors in a non - orthogonal basis .there is perfectly good mathematics behind this , but it hides symmetry and simplicity . in a hexagonal system , these disadvantages are removed by the 4-index system . in systems of lower symmetry , where translation vectors are not at and to each other , such simplification is not available . in this section, the mathematics of non - orthogonal basis sets is reviewed .to simplify , the -direction is now omitted .the discussion thus refers to crystallography of hexagonal crystals in two dimensions .the third dimension returns in sec .iv . in the 2-d space of the plane ,any two vectors that are not parallel or antiparallel can be chosen for a basis .the most obvious choices are the primitive translations and , or alternately , the primitive translations and of the reciprocal lattice , defined as , and similarly for . is the volume of the three - dimensional unit cell , .then we have the usual vector relations and .these relations indicate that the basis sets and are bi - othogonal .it not always mentioned in texts that the primitive direct lattice vectors and the primitive rlv s are examples of the mathematical notion of bi - orthogonal basis vectors .the important property is completeness : any vector can be expressed as a unique linear combination .note that , where equals in hexagonal systems .the coefficients and are found by solving a linear system in and .a nice aspect of bi - orthogonality is that it diagonalizes the system and gives simple formulas for the coefficients , namely , .it is arbitrary which basis ( direct space or reciprocal ) is taken to be primary and which to be dual .thus , an arbitrary vector has an alternate representation , the coefficients are , .the inner product of two vectors is not given simply by , but involves also the cross term .the simple formula is , where the row vector is expressed in the basis dual to the one chosen for the column vector .an equivalent formula is .a compact mathematical representation of these relations is given by the equation , or by the alternate equation . herethe notation means a 2-vector , and is the unit matrix .if written as a 2-component column vector , the basis should either be orthonormal , or if non - orthonormal , one has to be careful to use the direct and dual basis for the column vector and the row vector . 
in dyad notation ,the relations are this formula is called `` the completeness relation '' or , equivalently , `` the decomposition of unity . ''although this gives elegant formulas for inner products in non - orthogonal basis sets , these formulas are not likely to be used unless the vectors and belong separately , one to direct , and the other to reciprocal space .then the formulas are obvious .one has no trouble realizing that is equal to .for two dimensional vectors , or the -plane components of 3-d vectors of an hexagonal crystal , the overcomplete symmetrical basis is the three vectors which lie at to each other , as shown in fig.[fig : hex ] .the key relationship is the dyad formula this decomposition of unity is a very nice alternative to eq.([eq : unityaa ] ) .it is no longer necessary to have dual sets of direct and reciprocal lattice vectors .the three vectors are self - dual . the scalar product of any two 2-vectors can be written as this formula can be written in an overcomplete 3-vector notation as where the usual array multiplication rule is obeyed .unlike the case of the biorthogonal basis , here there is no need for care about whether or is in direct or reciprocal space , or whether the primary or the dual basis is implied .the formula works for any two vectors .it is worth emphasizing one subtlety . in the bi - orthogonal basis ,when a column vector has indices ] or in reciprocal space and indexed as let us express the reciprocal lattice vectors in the overcomplete symmetric notation .first , recall that since , we have .then we find , , and .therefore we have the set of six symmetry - related smallest rlv s are simply all permutations of the three indices .the general reciprocal lattice vector is written in various ways , as = \frac{2\pi}{a}\left(\begin{array}{c}n_a\\n_b\\-n_a - n_b\end{array}\right ) \rightarrow ( n_a n_b \\overline{n_a+n_b } ) .\label{eq : g33}\ ] ] it is amazing how similar the bi - orthogonal representation ( eq.[eq : g2 ] ) is to the overcomplete symmetric representation ( eq.[eq : g33 ] ) for rlv s .they involve different coordinate systems and rules , yet the former derives from the latter by just dropping the extra third index , and the latter from the former by adding an index which is the negative sum of the first two indices .the additional advantages of the extra index representation are that it is completely explicit ( if is included , the vector is completely specified ) , and there is no ambiguity about the direct versus dual parts of the bi - orthogonal representation .it is best to think of eq.([eq:4v ] ) as a way of representing the vector but not to think of the components of the 4-vector as if they had meaning similar to the ordinary 3-component notation . 
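Both the decomposition of unity and the symmetric inner-product rule are easy to verify numerically with the three in-plane unit vectors at 120 degrees; a short check in Python:

    import numpy as np

    # the three symmetric in-plane unit vectors at 120 degrees to one another
    e = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3.0) / 2.0],
                  [-0.5, -np.sqrt(3.0) / 2.0]])

    # decomposition of unity: (2/3) * sum_i e_i e_i^T = identity
    unity = (2.0 / 3.0) * sum(np.outer(ei, ei) for ei in e)
    print(np.allclose(unity, np.eye(2)))      # True

    # inner-product rule: u.v = (2/3) * sum_i (u.e_i)(v.e_i), with no need to track
    # which of the two vectors is expressed in the direct and which in the dual basis
    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    print(np.isclose(u @ v, (2.0 / 3.0) * np.sum((e @ u) * (e @ v))))   # True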
in ordinary vector notation , if the vector is denoted ] have the formulas + \frac{1}{c}n_4 \hat{e}_4 \right ) .\label{eq : vyes}\ ] ] = \left(\begin{array}{r } n_1 a \\n_3 a \\ n_4c\end{array}\right ) \rightarrow \left(\frac{2a}{3}[n_1\hat{e}_1 + n_2 \hat{e}_2 + n_3\hat{e}_3 ] + cn_4 \hat{e}_4 \right ) .\label{eq : vyes2}\ ] ] note the unconventional factor 2/3 .when written as an additive sum of primitive vectors proportional to , an arbitrary additive constant can be added to or with no algebraic or notational error .for example , + v_4 c\hat{e}_4 \ne \left(\begin{array}{c } ( v_1+c)a \\ ( v_2+c)a \\ ( v_3+c)a \\ v_4 c\end{array}\right ) \label{eq : vno}\ ] ] when written in overcomplete 4-vector notation , are fixed numbers , necessarily adding to zero , and therefore with no arbitrariness. the constant can not be added , even though the vector relation is true . the inner product of two vectors , in 4-vector notation ,is this is just an alternative way of writing eq.([eq : dotprod ] ) , that emphasizes the unconventional metric .a factor occurs in the first three diagonal entries .the metric is positive , so the inner product is safely defined .eq.([eq:3dotprod ] ) holds for all vectors provided the indices and are written in full rather than abbreviated index form . for the special case where one of is in direct and the other in reciprocal space , the inner product ,_ modulo _ , also has this form , _i.e. _ =2\pi[(2/3)(hs+kt+ir)+lu] ] .the literature definition is different .it requires first writing .the extra term is zero but is added to make the sums of the coefficients of , , and add to zero . by the literature definition ,this vector is indexed as ] .the first three indices have been reduced by .it never seems to be mentioned that the definition was changed between direct and reciprocal space .the factor is now incorporated into the definition of the first three indices of ] and a reciprocal space vector is , without the extra .the penalty is that inner products of two direct space vectors or two reciprocal space vectors must be computed with awkwardly different rules , whereas the unified definition offered here gives a simple unified ( but unconventional ) rule . in order to retain the usual definitions, one could accept as a compromise the _ ad hoc _ rule that whenever dimensionless real space indices $ ] are written , they incorporate an extra factor beyond eq.([eq:4v ] ) , in the first three indices . however , when written out in full column vector notation , there is no need for index notation , and such a compromise would be unwise .eq.([eq:4v ] ) should be used , and the inner product rule eq.([eq : dotprod ] ) applies .overcomplete basis sets are not abnormal in physics .they are used frequently for infinite - dimensional problems .a quantum harmonic oscillator , for example , has an infinite complete orthonormal basis of eigenstates , but the `` coherent state '' representation , which is overcomplete , is often preferable . in finite - dimensional vector problems ,hexagonal crystallography is not unique ; symmetry and orthonormality may compete and suggest an overcomplete symmetric representation .an example is electronic -states , where cubic symmetry lifts 5-fold degeneracy into a 3-fold degenerate t manifold ( spanned by orthonormal functions , , and ) and the 2-fold degenerate e manifold ( spanned by orthogonal functions and . 
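A numerical check of the four-index machinery, with the 2a/3 and 2/(3a) prefactors and the index-space dot-product rule taken from the surviving formulas above; the lattice constants are arbitrary placeholders. The script confirms that the smallest reciprocal lattice vectors are dual to the primitive translations and that the index-space rule reproduces the ordinary Cartesian dot product.

    import numpy as np

    # in-plane unit vectors at 120 degrees and the hexagonal axis
    e = np.array([[1.0, 0.0, 0.0],
                  [-0.5, np.sqrt(3.0) / 2.0, 0.0],
                  [-0.5, -np.sqrt(3.0) / 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
    a, c = 3.2, 5.2        # placeholder lattice constants, for illustration only

    def direct_vec(n):
        """Cartesian vector of a direct-space label [n1 n2 n3 n4] (n1+n2+n3 = 0)."""
        n = np.asarray(n, dtype=float)
        return (2.0 * a / 3.0) * (n[:3] @ e[:3]) + c * n[3] * e[3]

    def recip_vec(g):
        """Cartesian vector of a reciprocal-space label (h k i l) with h + k + i = 0."""
        g = np.asarray(g, dtype=float)
        return 2.0 * np.pi * ((2.0 / (3.0 * a)) * (g[:3] @ e[:3]) + (g[3] / c) * e[3])

    a1 = direct_vec([1.0, -0.5, -0.5, 0.0])    # the primitive translation a*e1
    b1 = recip_vec([1, 0, -1, 0])              # one of the six smallest RLVs
    b2 = recip_vec([0, 1, -1, 0])
    print(np.round([a1 @ b1 / (2 * np.pi), a1 @ b2], 12))   # [1, 0]

    # the index-space rule g.t = 2*pi*[(2/3)(h*n1 + k*n2 + i*n3) + l*n4]
    g, nv = np.array([2, -1, -1, 3]), np.array([1.0, -0.5, -0.5, 2.0])
    lhs = recip_vec(g) @ direct_vec(nv)
    rhs = 2.0 * np.pi * ((2.0 / 3.0) * (g[:3] @ nv[:3]) + g[3] * nv[3])
    print(np.isclose(lhs, rhs))                # True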
) the t orthogonal basis is nicely adapted to the 4-fold rotations of cubic symmetry , whereas these same rotations mix the orthogonal e functions in an ugly way .a cure is an overcomplete non - orthogonal basis such as , , and .the mathematics of this representation is exactly the same as the symmetric overcomplete representation described here for hexagonal crystals .i thank my collaborators in the solar water splitting simulation team ( swassit ) who helped me understand the surface of wurtzite materials . m. blume and a.g. abanov made valuable comments .i thank the cfn of brookhaven national laboratory for hospitality .this work was supported by doe grant defg0208er46550 .99 e. fedorov , zeits . kristall . *17 * , 615 ( 1890 ) .j. d. h. donnay , am .mineralogist * 32 * , 52 ( 1947 ) .j. w. gibbs and e. b. wilson , _ vector analysis _ , yale university press , new haven , 1901 ) .j. w. edington , _ practical electron microscopy in materials science _ , ( van nostrand reinhold , new york , 1976 ) . h. m.otte and a. g. crocker , phys .* 9 * , 441 ( 1965 ) .j. w. negele and h. orland , _ quantum many - particle systems _ ( addison - wesley , redwood city , 1988 ) pp . 20 - 25 .v. perebeinos and p. b. allen , phys .* 85 * , 5178 ( 2000 ) . | a four index notation ( _ e.g. _ ) is often used to denote reciprocal lattice vectors or crystal faces of hexagonal crystals . the purposes of this notation have never been fully explained . this note clarifies the underlying mathematics of a symmetric overcomplete basis . this simplifies and improves the usefulness of the notation . |
magnetograms provide the radial magnetic field on the visible surface of the sun .the actual measurement is for the line - of - sight component of the magnetic field , which is then transformed into the radial component assuming an ( approximately ) radial field near the solar surface .as the sun rotates , the individual magnetograms can be combined into a synoptic magnetogram that covers the whole spherical surface .synoptic magnetograms are provided by many observatories , including wilcox solar observatory ( wso ) , the michelson doppler imager ( mdi ) instrument on the solar and heliospheric observatory ( soho ) , the global oscillation network group ( gong ) , solar dynamic observatory ( sdo ) and the synoptic optical long - term investigations of the sun ( solis ) observatory . todays magnetograms contain hundreds to thousands of pixels along each coordinate direction .these magnetograms can be used to extrapolate the magnetic field into the solar corona .the simplest model assumes a current - free , in other words potential , magnetic field that matches the radial field of the magnetogram on the surface , while it satisfies a simple boundary condition at the outer boundary at some radial distance .the outer boundary condition is usually taken at ( solar radii ) , and a purely radial field is assumed at this `` source surface '' .mathematically the problem is the following : given the magnetogram data that defines the radial component of the magnetic field as at , find the scalar potential so that here ] are the co - latitude and longitude coordinates , repectively .once the solution is found , the potential field solution is obtained as and it will trivially satisfy both the divergence - free and the current - free properties we note that the current is only zero inside the domain . if the solution is continued out to with a purely radial magnetic field , there will be a finite current at , on the other hand , the divergence will be zero for all .the potential field solution is often obtained with a spherical harmonics expansion . 
herewe briefly summarize the procedure in its simplest possible form .the base functions are the spherical harmonic functions multiplied with an appropriate linear combination of the corresponding radial functions and so that the boundary condition is satisfied : the indexes and ( ) are the integer degree and order of the spherical harmonic function , respectively .the functions are solutions of the laplace equation ( [ eq : laplace ] ) , satisfy the boundary condition at , and they form an orthogonal base in the coordinates .the magnetic potential solution can be approximated as a linear combination of the base functions where is the highest degree considered in the expansion and the harmonics is not included , as it corresponds to the monopole term .the coefficients can be determined by taking the radial derivative of equation ( [ eq : phi ] ) and equating it with the magnetogram radial field at : \sum_{m =- n}^{n } f_{nm } y_{nm}(\theta,\phi ) \label{eq : m}\ ] ] exploiting the orthogonality of the base functions , we can take the inner product with to determine as } \int_0^\pi d\theta \sin \theta \int_0^{2\pi } d\phi m(\theta,\phi ) y_{nm}(\theta,\phi ) \label{eq : integral}\ ] ] where the coefficient results from the normalization of the spherical harmonics .an alternative approach of obtaining the harmonic coefficients is to employ a ( least - squares ) fitting procedure in equation ( [ eq : m ] ) .this is much more expensive than evaluating the integral in ( [ eq : integral ] ) , but it can be more robust if the magnetogram does not cover ( well ) the whole surface of the sun . using the spherical harmonic coefficients the potential can be determined on an arbitrary grid using ( [ eq : phi ] ) and the magnetic field can be obtained with finite differences .alternatively , one can calculate the gradient of the base functions analytically and obtain the magnetic field as for .spherical harmonics provide a computationally efficient and very elegant way of solving the laplace equation on a spherical shell .however , one needs to be cautious of how the integral in equation ( [ eq : integral ] ) is evaluated , especially when a large number of harmonics are used in the series expansion .we will use the gong synoptic magnetogram for carrington rotation 2077 ( cr2077 , from november 20 to december 17 , 2008 ) as an example to demonstrate the problem .the magnetogram contains the radial field on a latitude - longitude grid on the solar surface .the grid spacing is uniform in ( or sine of the latitude ) and in longitude .figure [ fig : magnetogram ] shows the radial field .section 2 discusses the naive and more sophisticated ways of obtaining the potential field solution with spherical harmonics .section 3 describes an alternative approach using an iterative finite difference .the various methods are compared in the final section 4 , where we also demonstrate the ringing effect that can arise in the spherical harmonics solution , and we draw our conclusions .to turn the analytic prescription given in the introduction into a scheme that works with real magnetograms , one has to pick the maximum degree , and evaluate the integrals in equation ( [ eq : integral ] ) for each pair of and up to the highest order .the resulting coefficients can be used to construct the 3d potential magnetic field solution at any given point using equation ( [ eq : bpot ] ) .the simplest approximation to equation ( [ eq : integral ] ) is a discrete integral using the original magnetogram data : } 
\sum_{i=1}^{n_{\theta } } \sum_{j=1}^{n_{\phi } } ( \delta\cos\theta)_i ( \delta\phi)_j m_{i , j } y_{nm}(\theta_i,\phi_j ) \label{eq : sum}\ ] ] where the is the radial field in a pixel of the by sized magnetogram .the pixel is centered at the coordinates , and the area of the pixel is given by .unfortunately , the uniform mesh used by most of the magnetograms is not at all optimal to evaluate the integral in equation ( [ eq : integral ] ) .in fact this procedure will only work with maximum order that is much less than .figure [ fig : legendre ] shows the associated legendre polynomial discretized in different ways .the red curve shows the discretization on 180 grid points that are uniform in . clearly the red curve is a very poor representation near the poles , where is important , because the amplitude of the legendre polynomial is actually largest near the poles .this means that the orthonormal property is not satisfied in the discrete sense , and the coefficients obtained with equation ( [ eq : sum ] ) are very inaccurate .the legendre polynomial can be represented much better on a uniform grid ( shown by the green curve in figure [ fig : legendre ] ) , as we will discuss below .a clear signal of this problem is that the amplitudes of the higher order spherical harmonics are not getting smaller with increasing indexes and , i.e. , the harmonic expansion is not converging .the black line in figure [ fig : sn ] shows the amplitudes which is oscillating wildly for for this magnetogram .the oscillations are almost exclusively due to the coefficients , the coefficients are well behaved .this plot can be directly compared with figure 15 in , where the power spectrum is more - or - less exponentially decaying .we believe that the reason is that these authors used a least - square fitting to the line - of - sight ( los ) magnetic field instead of calculating the spherical harmonics from the radial field as shown above .while the two methods are identical analytically ( assuming that the los and radial fields correspond to the same solution ) , the use of least - square fitting mitigates the lack of orthogonality among the discretized legendre funtions , while the naive approach described above heavily relies on the orthogonality property .given the non - converging series expansion , the resulting potential field will be very inaccurate in the polar regions and will have essentially random values depending on the number of spherical harmonics used .this is demonstrated by figure [ fig : naive_fft ] that shows the radial magnetic field reconstructed with various number of harmonics using the original magnetogram grid .one would expect the radial component of the potential field to reproduce the magnetogram shown in figure [ fig : magnetogram ] .instead , we find that the solution deviates strongly in the polar region if the harmonics expansion is continued above . for ( or lower )the solution looks reasonable , but strongly smoothed due to the insufficient number of harmonics .this is most obvious around the active regions in the top panel of figure [ fig : naive_fft ] .we note that these numerical errors are not related or comparable to the observational uncertainties of the magnetograms , which are usually also quite large in the polar regions .the observational uncertainties are essentially unavoidable but are within some well - understood range . 
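The loss of discrete orthogonality on the native magnetogram grid is easy to reproduce: the Gram matrix of Legendre polynomials computed with the naive uniform-in-cos(theta) sum departs strongly from the identity as the degree grows, whereas an accurate quadrature with the same number of nodes does not. Gauss-Legendre nodes are used below purely as an accurate reference; the remedy actually adopted in the text (remeshing to a uniform-theta grid with Clenshaw-Curtis weights) is sketched further below.

    import numpy as np
    from scipy.special import eval_legendre

    def gram(ns, nodes, weights):
        """Discrete Gram matrix G[a, b] = (2*n_a + 1)/2 * sum_k w_k P_{n_a}(x_k) P_{n_b}(x_k);
        it equals the identity whenever the quadrature resolves the polynomials."""
        P = np.array([eval_legendre(n, nodes) for n in ns])
        return ((2 * np.array(ns)[:, None] + 1) / 2.0) * ((P * weights) @ P.T)

    ns = [10, 40, 80, 120]

    # (a) the naive sum on 180 pixels uniform in x = cos(theta), as on the magnetogram grid
    edges = np.linspace(-1.0, 1.0, 181)
    x_uni = 0.5 * (edges[:-1] + edges[1:])
    w_uni = np.full(180, 2.0 / 180)

    # (b) Gauss-Legendre nodes and weights with the same number of points (accurate reference)
    x_gl, w_gl = np.polynomial.legendre.leggauss(180)

    np.set_printoptions(precision=2, suppress=True)
    print(gram(ns, x_uni, w_uni))   # entries involving the higher degrees depart strongly from 0/1
    print(gram(ns, x_gl, w_gl))     # essentially the identity (exact up to degree 359)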
on the other hand ,these numerical artifacts are definitely avoidable while the errors are essentialy unbounded if one uses too many harmonics .one can get much more accurate results if the magnetogram is remeshed to a grid that is uniform in the co - latitude , has an odd number of nodes , and contains the two poles and .in fact this is the standard grid used in spherical harmonics transforms ( e.g. ) and it is often referred to as using the chebyshev nodes , since the uniform grid points correspond to the chebyshev nodes in the original coordinate which is the argument of the legendre polynomials .figure [ fig : legendre ] shows that the legendre polynomial is much better represented on the uniform grid than on the uniform grid . remeshingthe magnetogram introduces some new adjustable parameters into the procedure : the number of grid cells on the new mesh , and the interpolation procedure .if the remeshing is done with the same number of grid points as is in the original magnetogram grid , the latitudinal cell size at the equator will be a factor of larger than in the uniform- grid .on the other hand , the uniform- grid will contain many more points than the original in the polar regions , so the interpolation procedure may create some unwanted artifacts . to maintain the resolution of the original data around the equator, we set to rounded to an odd integer . for the remeshing we chose a simple linear interpolation procedure , and it works satisfactorily , but one could certainly use higher order interpolation procedures , such as splines . before doing the interpolation, we add extra grid cells corresponding to the north and south poles of the magnetogram grid , and the values at these two extra cells are set as the average of the pixels around the poles : the co - latitude coordinates of the uniform- mesh are for .we use simple linear interpolation from the extended magnetogram mesh to the uniform mesh : where the index is determined so that and finally the spherical harmonics coefficients are determined with the integral approximated as } \sum_{i=1}^{n'_{\theta } } \sum_{j=1}^{n_{\phi } } \epsilon_i w_i(\delta\phi)_j m'_{i , j } y_{nm}(\theta'_i,\phi_j ) \label{eq : sum2}\ ] ] where and for all other indexes .the coefficients are the clenshaw - curtis weights defined as where and and for all other indexes .we note that for order of 10 or more where and are the indexes of the neighboring cells , or the cell itself at the poles . using the proper grid allows us to use larger number of spherical harmonics . is limited only by the and alias - free conditions . for our example magnetogram , we remesh it a to uniform- grid , and we can obtain accurate solutions up to degree spherical harmonics .figure [ fig : remeshed_fft ] shows the potential field solution obtained with the remeshed magnetogram grid with . 
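A sketch of the remeshing strategy: the uniform-theta nodes are exactly the Clenshaw-Curtis nodes in x = cos(theta), so the classical Clenshaw-Curtis weights (reconstructed below from the standard cosine-series formula, taken as an assumption since the displayed expressions for the weights are not legible in this copy) integrate the remeshed map accurately up to high degree. A synthetic band-limited test field is remeshed by linear interpolation and its Legendre coefficients recovered.

    import numpy as np
    from scipy.special import eval_legendre

    def clenshaw_curtis_weights(n_nodes):
        """Weights for the nodes x_j = cos(j*pi/(n_nodes-1)) -- i.e. a grid uniform in theta --
        from the classical cosine-series formula.  They satisfy sum(w) = 2 and
        integral over x in [-1, 1] is approximated by sum_j w_j f(x_j)."""
        n = n_nodes - 1
        theta = np.pi * np.arange(n_nodes) / n
        w = np.ones(n_nodes)
        for k in range(1, n // 2 + 1):
            b = 1.0 if 2 * k == n else 2.0
            w -= b * np.cos(2.0 * k * theta) / (4.0 * k * k - 1.0)
        w *= 2.0 / n
        w[[0, -1]] *= 0.5
        return w

    # remesh: 180 pixels uniform in x = cos(theta) -> 283 nodes uniform in theta (poles included)
    n_sin, n_theta = 180, 283
    edges = np.linspace(-1.0, 1.0, n_sin + 1)
    x_src = 0.5 * (edges[:-1] + edges[1:])
    theta_new = np.linspace(0.0, np.pi, n_theta)
    x_new = np.cos(theta_new)

    m_src = eval_legendre(6, x_src) + 0.5 * eval_legendre(11, x_src)   # synthetic band-limited field
    # linear remeshing; np.interp holds the end values beyond the last pixel centres,
    # a crude stand-in for the pole treatment described in the text
    m_new = np.interp(x_new[::-1], x_src, m_src)[::-1]

    w = clenshaw_curtis_weights(n_theta)
    f = np.array([(2 * n + 1) / 2.0 * np.sum(w * eval_legendre(n, x_new) * m_new)
                  for n in range(121)])
    # the two true coefficients are recovered, and the remaining ones stay at the small
    # interpolation-error level instead of ringing wildly at high degree
    print(np.round(f[[6, 11]], 3), float(np.abs(np.delete(f, [6, 11])).max()))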
compared with the naive method, the solution is much more reasonable in the polar regions. there is some smoothing when compared to the magnetogram shown in figure [fig:magnetogram], most obvious near the active regions. while the remeshing is definitely a big improvement over using the original magnetogram, it would be nice to be able to use the original magnetogram data without remeshing and interpolation. the next section shows that this can be easily achieved with a finite difference solver. the laplace equation ([eq:laplace]) with the boundary conditions ([eq:innerbc]) and ([eq:outerbc]) can be solved quite easily with an iterative finite difference method. the advantage of finite differences compared with spherical harmonics is that the boundary data given by the magnetogram directly affects the solution only locally, while the spherical harmonics are global functions, and their amplitudes depend on all of the magnetogram data. if the magnetogram contains large discontinuities, we expect the finite difference scheme to be better behaved. the finite difference method has advantages if the solution is to be used in a finite difference code on the same grid, because one can guarantee zero divergence and curl for the magnetic field in the finite difference sense. the solution obtained with the spherical harmonics has zero divergence and curl analytically, but not on the finite difference grid, which may severely underresolve the high order spherical harmonic functions in some regions (see figure [fig:legendre]). the finite difference method was applied to the solar potential magnetic field problem as early as 1976 ( ), but the method was limited by the computational resources available at the time. solving a 3d laplace equation on today's computers is an almost trivial problem. we implemented the new finite difference iterative potential-field solver (fdips) code in fortran 90. the serial version does not require any external libraries, while the parallel version uses the message passing interface (mpi) library for communication. fdips can solve the laplace equation on a spherical grid to high accuracy on a single processor in less than an hour. the parallel code can solve the same problem in less than 5 minutes on 16 processors. we briefly describe the algorithm in fdips. we use a staggered spherical grid: the magnetic field is discretized on cell faces while the potential is discretized at the cell centers. we use one layer of ghost cells to apply the boundary conditions so the cell centers are located at with , and . the and coordinates of the real cells are given by the magnetogram, while the ghost cell coordinates are given by , , , and . we allow for a non-uniform radial grid extending from to , but for the sake of simplicity in this paper a uniform radial grid is used with with . the radial magnetic field components are located at the radial cell interfaces at where for , and . similarly the latitudinal components are at with , and . note that the interface is taken half-way in the \cos\theta coordinate and not in \theta, because this makes the cells equal area when the magnetogram is given on a uniform \cos\theta grid. finally the longitudinal field components are located at where for . the staggered discretization keeps the stencil of the laplace operator compact and it makes the boundary conditions relatively simple. the magnetic field is obtained as a discrete gradient of the potential: note the factor in the derivative for the uniform grid.
for the uniform grid this is replaced with . the divergence of the magnetic field, i.e. the laplace of the potential, is obtained as again is used for the uniform grid, while on the uniform grid this is replaced with . the magnetogram boundary condition is applied by setting the inner ghost cell as where is the magnetogram with the average field (i.e. the monopole due to observation errors) removed. the zero potential at is enforced by setting the ghost cell as . the boundary conditions at the poles are a bit tricky. cells and are on opposite sides of the north pole if . therefore the ghost cells in the direction are set as and . we note here that n_{\phi} is assumed to be an even number. the periodic boundaries in the direction are simple: and . we need to find the potential that satisfies the discrete laplace equation ([eq:laplacenum]) with the boundary conditions applied via the ghost cells. the initial guess is , which provides a non-zero residual because of the inhomogeneous boundary condition at the inner boundary applied by equation ([eq:innerbcnum]). we use this residual with a negative sign as the right-hand side of the poisson equation, and use as the new homogeneous inner boundary condition instead of ([eq:innerbcnum]). we use the krylov-type iterative method bicgstab to find the solution. the linear system is preconditioned with an incomplete lower-upper decomposition (ilu) preconditioner to speed up the convergence. we use ilu(0) with no fill-in compared to the original matrix structure, so the preconditioner matrix has the same sparsity structure as the original matrix, but its elements depend on all elements of the original matrix. we have implemented a serial as well as a parallel version of the algorithm. in the parallel version the preconditioner is applied separately in each subdomain. fdips finds an accurate (down to relative error) solution on a grid in less than 1000 iterations.
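the same algorithmic ingredients can be illustrated with a few lines of scipy. this is our own toy example on a small cartesian grid, with an inhomogeneous dirichlet boundary playing the role of the magnetogram; it is not the fdips fortran 90 implementation, and spilu with a fill factor of one is only a rough stand-in for ilu(0).

```python
# toy illustration of the solver ingredients used in fdips (bicgstab + incomplete lu),
# here with scipy on a small 2-d cartesian laplace problem; the actual code is a
# fortran 90 finite-difference solver on a staggered spherical grid.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, bicgstab, LinearOperator

n = 64                                             # grid cells per direction (toy size)
h = 1.0 / n
d2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
eye = sp.identity(n)
lap = sp.kron(eye, d2) + sp.kron(d2, eye)          # 5-point laplacian, homogeneous dirichlet

# inhomogeneous dirichlet data on the bottom boundary (plays the role of the magnetogram);
# like the ghost-cell condition in the text, it only enters the right-hand side
g = np.sin(2 * np.pi * np.linspace(0, 1, n))
rhs = np.zeros((n, n))
rhs[0, :] = -g / h**2
rhs = rhs.ravel()

A = lap.tocsc()
ilu = spilu(A, fill_factor=1.0)                    # crude stand-in for ilu(0)
M = LinearOperator(A.shape, matvec=ilu.solve)      # preconditioner
phi, info = bicgstab(A, rhs, M=M)
print("converged" if info == 0 else f"info={info}", "max|phi| =", np.abs(phi).max())
```

the structure mirrors the description above: the boundary data appears only in the right-hand side, and the preconditioned bicgstab iteration solves the resulting sparse linear system.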
even running serially ,this takes less than an hour on today s computers .once the solution is found in terms of the discrete potential , we apply the original boundary conditions including ( [ eq : innerbcnum ] ) and calculate the magnetic field with equation([eq : gradient ] ) .the divergence of the magnetic field will be zero to the accuracy of the poisson solver .the curl of the magnetic field will be zero in a finite difference sense simply because it is constructed as the discrete gradient of the potential .the boundary condition at the inner boundary is also satisfied exactly : .averaging and to the position at the outer boundary also gives exactly zero tangential fields due to the equations ( [ eq : outerbcnum ] ) and ( [ eq : gradient ] ) .depending on the application , we may interpolate the potential magnetic field onto a collocated grid , or use it on the original staggered grid .figure [ fig : fdips ] shows the solution of the magnetic field obtained with the finite difference solver fdips on a grid .since we use the same uniform- grid as the magnetogram , the obtained radial field is identical with the magnetogram at .the tangential components agree well with the remeshed spherical harmonics solution shown in figure [ fig : remeshed_fft ] .it took 1166 iterations to get a solution with a relative accuracy of .the run time was almost exactly one hour on a 2.66 ghz intel cpu .we have discussed various ways to obtain the potential field solution based on solar magnetograms .while spherical harmonics provide an efficient and elegant method , there are some subtle restrictions that require attention .if one wants to use many spherical harmonics ( the same order as the number of magnetogram pixels in the colatitude direction ) , the magnetogram data on the grid has to be remeshed onto a uniform- grid with points , must be an odd number , and the new grid must include both poles . after the remeshing the maximum degree of harmonics is only limited by the anti - alias limit to , .we used a simple linear interpolation for the remeshing .the remeshing can be avoided by the use of a 3d finite difference scheme .one can use the original magnetogram grid , and the only freedom is in choosing the radial discretization .the finite difference scheme provides a solution that is fully compatible with the boundary conditions , and the solution has zero divergence and curl in the finite difference sense .figure [ fig : radial ] compares the solutions obtained with the three methods along the radial direction for a fixed latitude and longitude .the spherical harmonics series were truncated at for both the naive and remeshed methods .the naive spherical harmonics algorithm gives incorrect results close to the solar surface where the high order harmonics dominate .this is most obvious for the radial component , which is given at by the magnetogram , and it is exactly reproduced by the finite difference scheme .the latitudinal component at is also very different from the values given by the remeshed harmonics and the finite differences .the latter two methods agree reasonably well with each other . 
for radial distances above all three methods agree quite well. so far we have restricted our example to a gong magnetogram taken at the solar minimum. if one uses an mdi magnetogram during solar maximum, the largest magnetic fields are much stronger (order of 1000 g) and the resolution of the magnetogram is much finer (order of 1000 pixels). the finer magnetogram resolution allows going to a larger number of harmonics, even when using the original magnetogram grid (naive approach). but the strong and sharp gradients in the magnetogram will bring out another problem with the spherical harmonics approach, the ringing effect. the ringing is due to the so-called gibbs phenomenon: the step-function like magnetogram data results in high amplitude high order harmonics in fourier space. the ringing effect and other artifacts are discussed in great detail by . figure [fig:ringing] demonstrates this effect on the resolution mdi magnetogram for carrington rotation 2029 (from april 21 to may 18, 2005), with the maximum radial field strength around g. the remeshed harmonics method with is compared with the finite difference method on a grid (the magnetogram data is coarsened to a grid). in the spherical harmonics solution the ringing is very clearly visible around the active regions, both in the radial and latitudinal components. the finite difference scheme, on the other hand, shows no sign of ringing in either component. this is obvious for the radial component, which simply coincides with the coarsened magnetogram, but for the latitudinal component it is due to the fact that the finite difference solution of the laplace equation does not suffer from ringing artifacts even for discontinuous boundary data. for the spherical harmonics approach the ringing becomes weaker with an increased number of harmonics, but it is quite apparent even for (not shown). the results of the remeshed and naive harmonics methods are essentially the same up to , i.e. the ringing is not due to the remeshing of the magnetogram.
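the gibbs phenomenon is easy to reproduce in one dimension. the short sketch below is ours and purely illustrative: it expands a unit step in legendre polynomials and shows that the overshoot next to the jump does not go away as the truncation order is increased; the analytic coefficients follow from the antiderivative identity for legendre polynomials.

```python
# 1-d analogue of the ringing: a truncated legendre expansion of a step function
# overshoots near the discontinuity (gibbs phenomenon) regardless of the truncation order.
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.special import eval_legendre

x = np.linspace(-1, 1, 4001)

def step_series_coeffs(nmax):
    # exact legendre coefficients of the unit step h(x):
    # c_0 = 1/2, c_n = (p_{n-1}(0) - p_{n+1}(0)) / 2 for n >= 1
    c = np.zeros(nmax + 1)
    c[0] = 0.5
    for n in range(1, nmax + 1):
        c[n] = 0.5 * (eval_legendre(n - 1, 0.0) - eval_legendre(n + 1, 0.0))
    return c

for nmax in (20, 80):
    f = legval(x, step_series_coeffs(nmax))
    print(f"nmax={nmax:3d}  max overshoot above 1 = {f.max() - 1.0:.3f}")
```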
in terms of computational efficiency ,a good implementation of the spherical harmonics scheme is much faster than the finite difference scheme .in fact , it may be more costly to construct the potential field solution on a 3d grid from the spherical harmonics coefficients than obtaining the coefficients themselves .our fortran 90 code can obtain the spherical coefficients up to , 90 and 120 degrees in 1 , 1.8 and 3.3 seconds , respectively , while the reconstruction of the solution on the grid takes 5 , 12 , and 20 minutes , respectively .all timings were done on a single 2.66 ghz intel cpu .the reconstruction cost can be improved by running the code in parallel , and/or truncating the series in parts of the grid where the higher order harmonics have a negligible contribution .we also note that going beyond about harmonics becomes fairly complicated .the computational cost of the finite difference scheme scales with the number of grid cells and the number of iterations .the number of iterations is fairly constant for multigrid type methods , but for the krylov sub - space schemes it grows with the problem size , although slower than linearly .the finite difference scheme can be sped up by parallelizing the code , which is fairly straightforward for the krylov subspace schemes .since we limit the ilu preconditioning to operate independently on the subdomains of each processor , the preconditioner becomes less efficient as the number of processors increases , which results in an increase in the number of iterations . to minimize this effect , the parallel fdips code splits the grid in the and directions only , so the subdomains in each processor contain the full radial extent of the grid .our experiments confirmed that using this domain decomposition , the number of iterations indeed does not depend much on the number of processors .our largest test so far involves a grid with 30 times more cells than the grids discussed in most of this paper .for the large problem we need about 8,500 iterations to reach the relative accuracy , a factor of 9 increase relative to the smaller problem . using 108 cpu - s , the solution is obtained in about 5.3 hours . despite the various limitations , for some applications the spherical harmonics approach may still be preferred .for example if the solution is needed to obtain a spherical power spectrum of the solar magnetic field .if the solution is to be used in a finite difference code , the finite difference solution is probably preferable .we are using the fdips code to generate the potential field solution as the initial field for our solar corona model .this paper attempts to call the attention of astrophysicists and solar physicists to the limitations and potential pitfalls of using the spherical harmonics approach to obtain a potential field solution .the spherical harmonics representation of the potential field solutions are available from several synoptic magnetogram providers , although the details of the method used to obtain the spherical harmonics is not always clear .a spherical harmonics based pfss package implemented in idl is available as part of the solar - soft library ( http://www.lmsal.com/solarsoft ) .this package uses the magnetogram remeshing technique either onto the chebyshev ( uniform- ) or the legendre collocation points .we are not aware of any publically available code that uses finite differences to solve this particular problem . 
to allow other researchers to use and compare the two approaches , we make our finite difference code fdips publically available at the http://csem.engin.umich.edu/fdips/ website .schatten , k. j. , wilcox , j. m. & ness , n. f. 1969 , , 6 , 442 altschuler , m. d. , levine , r. h. , stix , m. , & harvey , j. 1977 , , 51 , 345 suda , r. & takami , m. 2001 , math . of comp . , 71 , 703 clenshaw , c. w. & curtis , a. r. 1960 , numerishe mathematik , 2 , 197 potts , d. , steidl , g. , & tasche , m. 1998 , linear algebra appl . , 275 , 433 adams , j. & pneuman , g. w. 1976 , , 46 , 185 van der vorst , h. 1992 , siam j. sci . statist ., 13 , 631 tran , t. 2009 , `` improving the predictions of solar wind speed and interplanetary magnetic field at the earth '' , ph.d .thesis , ucla van der holst , b. , manchester iv , w. b. , frazin , r. a. , vsquez , a. m. , tth , g. , & gombosi , t. i. 2010 , , 725 , 1373 | potential magnetic field solutions can be obtained based on the synoptic magnetograms of the sun . traditionally , a spherical harmonics decomposition of the magnetogram is used to construct the current and divergence free magnetic field solution . this method works reasonably well when the order of spherical harmonics is limited to be small relative to the resolution of the magnetogram , although some artifacts , such as ringing , can arise around sharp features . when the number of spherical harmonics is increased , however , using the raw magnetogram data given on a grid that is uniform in the sine of the latitude coordinate can result in inaccurate and unreliable results , especially in the polar regions close to the sun . we discuss here two approaches that can mitigate or completely avoid these problems : i ) remeshing the magnetogram onto a grid with uniform resolution in latitude , and limiting the highest order of the spherical harmonics to the anti - alias limit ; ii ) using an iterative finite difference algorithm to solve for the potential field . the naive and the improved numerical solutions are compared for actual magnetograms , and the differences are found to be rather dramatic . we made our new finite difference iterative potential - field solver ( fdips ) a publically available code , so that other researchers can also use it as an alternative to the spherical harmonics approach . |
symmetry is ubiquitous in nature and in artificial man - made objects .the detection and characterization of shape symmetry has attracted much attention in recent times , especially within the computer graphics community .although most of the existing literature has focused on the detection of _ extrinsic _ symmetries ; a popular approach being transformation space voting ; there has been steadily growing interest in detection and characterization of _ intrinsic _ symmetries .most recent efforts in intrinsic symmetry detection have focused on detection of global symmetries ; .it is generally recognized that detection of overlapping intrinsic symmetry is a more challenging problem due to the larger search spaces involved in the detection of symmetric regions ( in comparison to global symmetry analysis ) and the determination of symmetry revealing transforms ( in comparison to extrinsic symmetry detection ) .also , overlapping intrinsic symmetry detection and characterization is more important because it is a more generalized problem in nature than extrinsic symmetry detection problem which can be considered as a special case of overlapping intrinsic symmetry detection .we present a formal definition of overlapping intrinsic symmetry .an intrinsic symmetry over a shape is a subregion with associated self - homeomorphisms that preserve all pairwise intrinsic distances . in this paper, we address the problem of intrinsic symmetry detection and characterization of shapes based on their intrinsic symmetries .complex shapes often exhibit multiple symmetries that overlap spatially and vary in form as depicted in figure [ fig : teaser ] . overlapping symmetry analysis enables the construction of high - level representations that enhance the understanding of the underlying shape and facilitate solutions to such problems as shape correspondence , shape editing , and shape synthesis . however ,analysis of overlapping symmetry poses additional challenges as described below .existing approaches to intrinsic symmetry detection , including those based on region growing , partial matching , and symmetry correspondence are not able to extract physically overlapping symmetries .lipman et al . cluster sample surface points from an input shape and use a symmetry correspondence matrix ( scm ) to identify intrinsic symmetry properties of groups of surface points . in their approach , each scm entry measures how symmetric two surface points are based on some measure of intrinsic geometric similarity between the local neighborhoods of the points .xu et al . let surface point pairs vote for their partial intrinsic symmetry and perform intrinsic symmetry grouping using a 2-step spectral clustering procedure .however , their approach lacks the ability to retrieve the final symmetry map which makes characterization of the specific intrinsic symmetry a difficult problem .our key idea behind our approach to intrinsic symmetry detection and characterization is to approach the problem from a shape correspondence perspective and generate the transformation map which can be further used to describe the symmetry space . 
to this end, we perform two stages of processing where in the first stage , representative symmetric point pairs are identified based on their local geometry and a global distance representation and in the second stage the original transformation is retrieved as a _ map _ to facilitate further characterization of the underlying symmetry .the detected intrinsic symmetries are quite general , as shown in figure [ fig : teaser ] .a fundamental question in symmetry detection is quantifying the extent of symmetry between a pair of points .the primary challenge in identifying potentially symmetric point pairs is to come up with conditions strong enough to adequately constrain the symmetry search space such that the symmetry detection procedure is computationally tractable .we rely on a local criterion , such as geometric similarity , and a global criterion , such as the distance - based symmetry support received by a point pair , to detect and quantify the extent of symmetry between points and .we refer to a point pair which satisfies the local geometric similarity criterion as a _good voter_. the point pairs within the population of _ good voters _ , that enjoy sufficiently strong global distance - based symmetry support are deemed to be _ symmetric _ point pairs .the global symmetry support for point pair is quantified by the number of other point pairs which potentially share the intrinsic symmetry properties of .the global symmetry support is computed using a simple , distance - based symmetry criterion defined over two point pairs within the population of _ good voters _ as shown in figure [ fig : overview ] .the input to the proposed symmetry detection and characterization algorithm is a 3d shape that is approximated by a 2-manifold triangular mesh .the local intrinsic geometry is quantified using the _ wave kernel signature _ ( wks ) whereas the global intrinsic geometric distance measure chosen is the _ biharmonic distance measure _ ( bdm ) .the proposed algorithm consists of a voting procedure on the correspondence space defined over a set of locally symmetric surface point pairs sampled from the input 3d shape , followed by the generation of a functional map ( see figure [ fig : overview ] ) . [ [ correspondence - space - voting ] ] correspondence space voting + + + + + + + + + + + + + + + + + + + + + + + + + + + the input to the correspondence space voting procedure comprises of point pairs that can be considered as candidates for symmetry .prior to performing the transformation space mapping procedure , we sample a set of locally symmetric point pairs from the input shape based on the similarity of their wks s . to estimate the symmetry support received by a point pair , we perform a voting procedure by counting the number of _ good voters _ which potentially share the same intrinsic symmetry as the source point pair .the voting procedure ensures that we have a set of good point pair initializations from which we can create an initial map of the symmetry transformation that can then be extrapolated to other surface point pairs on the 3d shape .[ [ computation - of - the - functional - map ] ] computation of the functional map + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a functional map provides an elegant representation for the maps between surfaces , allowing for efficient inference and manipulation . 
in the functional map approach, the concept of a map is generalized to incorporate correspondences between real valued functions rather than simply between surface points on the 3d shapes. our choice of the multi-scale eigenbasis of the laplace-beltrami operator makes the functional map representation both very compact and informative. we show that our functional map formulation not only results in a compact description of the underlying map of the symmetry transformation, but also enables meaningful characterization of the symmetry transformation. the primary contributions of our paper can be summarized as follows: 1. robust and meaningful characterization of the symmetry transformation via formulation of a symmetry space to quantitatively distinguish between instances of simple and complex symmetry. 2. leveraging of the functional map representation to successfully represent the map of the underlying symmetry transformation regardless of its complexity. 3. providing a solution to the symmetric flipping issue via interpolation of the correspondence from the base surface point pairs using functional maps. 4. enabling recovery of the symmetry groups via clustering on the functional maps. the research literature on symmetry detection has grown substantially in recent years as shown in the excellent survey by mitra et al. in this paper, we do not attempt to provide an _ exhaustive _ exposition of the state of the art in symmetry detection; rather we focus on discussing existing works that are most closely related to our proposed approach. several recent approaches to detect approximate and partial extrinsic symmetries have focused on algorithms that cluster votes for symmetries in a parameterized `` transformation space ''. mitra et al. generate `` votes '' in a transformation space to align pairs of similar points and then cluster them in a space spanned by the parameters of the potential symmetry transformations. regardless of how good the shape descriptors are, the aforementioned methods are not effective at finding correspondences between points in complex symmetry orbits that are spread across multiple distinct clusters in the transformation space. since the dimensionality of the transformation space increases with the complexity of the symmetry, the voting procedure in transformation space becomes increasingly intractable when dealing with complex symmetries. there exists a body of published research literature that characterizes shape representations based on the extent of symmetry displayed by an object with respect to multiple transformations. kazhdan et al. have proposed an extension of zabrodsky's symmetry distance to characteristic functions, resulting in a _ symmetry descriptor _ that measures the symmetries of an object with respect to all planes and rotations through its center of mass. podolak et al. have extended the symmetry descriptor to define a planar reflective symmetry transform (prst) that measures reflectional symmetries with respect to all planes through space. rustamov et al.
have extended the prst to consider surface point - pair correlations at multiple radii .although the above representations provide a measure of symmetry for a regularly sampled set of transformations within a group , they are practical only for transformation groups of low dimensionality ( for example , rigid body transformations would require one to store a six - dimensional matrix ) and break down when faced with groups of higher dimensionality .the exists a class of techniques that exploits the redundancy in repeating structures to robustly detect symmetries ; ; ; ; . the transformation space voting method proposed by mitra et al . is extended in by fitting parameters of a transformation generator to optimally register the clusters in transformation space .berner et al . and bokeloh et al . have taken a similar approach using subgraph matching of feature points and feature lines , respectively , to establish potential correspondences between repeated structures .this is followed by an iterative closest points ( icp ) algorithm to simultaneously grow corresponding regions and refine matches over all detected patterns , allowing the detection of repeated patterns even in noisy data , but at the cost of requiring a - priori knowledge of the commutative group expected in the data . also , the non - linear local optimization procedure within the icp algorithm could cause it to get trapped in a local minimum if the initialization is not good enough .lipman et al . have proposed an eigen - analysis technique for symmetry detection that relies on spectral clustering .the top eigenvectors of their geometric similarity - based scm characterize the symmetry - defining orbits , where each orbit includes all points that are symmetric with one another .however , their work is not suited for multi - scale partial symmetry detection .first , expressing local point similarities as symmetry invariants is only appropriate for global intrinsic symmetry detection . in the case of partial symmetry detection, it is not always possible to reliably judge if two surface points are symmetric by comparing only their point ( i.e. , local ) signatures , especially when one point lies on the boundary of symmetric regions .moreover , their single - stage clustering procedure is unable to identify overlapping symmetries .xu et al . have extended the eigen - analysis approach of lipman et al . by incorporating the concept of global intrinsic distance - based symmetry support accompanied by a 2-stage spectral clustering procedure to distinguish between scale detection and symmetry detection .although they showed some interesting results , the 2-stage spectral clustering procedure made their method extremely slow .furthermore , the absence of transformation map retrieval meant that further processing of the detected symmetries , which are represented as point pairs , was extremely inefficient .the proposed scheme is decoupled into two steps of _ correspondence space voting _ and _ transformation space mapping_. the _ correspondence space voting _ technique is inspired from the work of xu et al . 
, but in our technique , we bypassed the 2 particularly lengthy steps of spectral clustering and the all - pair geodesic distance calculation to improve the running time quite significantly .moreover , our introduction of _ transformation space mapping _ in symmetry detection is quite novel in the sense that this not only provides a concise description of the underlying symmetry transformation , but according to our knowledge this is one of the first works which has the unique ability of characterizing the symmetric transformation .in this section , we present a formal description of the problem being addressed in this paper .in particular , we define the input to and output of the proposed algorithm .we also define the type of intrinsic symmetries the proposed algorithm is designed to detect and the formal characterization of these symmetries .there are two primary aspects to the theoretical framework for the proposed algorithm , i.e. , _ correspondence space voting _ ( csv ) and _ functional map retrieval _( fmr ) . in the case of csv , a joint criterion , that combines local intrinsic surface geometry and global intrinsic distance - based symmetry ,is proposed and shown to result in a provably necessary condition for intrinsic symmetry . in the case of fmr ,a formal scheme for the characterization of the detected symmetries , based on their complexity is proposed .we have mainly restricted our study of intrinsic symmetries to isometric involutions for two reasons ; first , having a provably necessary condition for intrinsic symmetry provides theoretical soundness and second , the proposed symmetry criterion bounds the search space sufficiently , ensuring that the solution is computationally tractable .following the initial detection of good symmetric correspondences , we proceed to retrieve the map of the symmetry transformation by leveraging the functional map framework .moreover , the intrinsic symmetry ensures that the retrieved maps exhibit a diagonality characteristic .a cost matrix is designed to exploit this characteristic such that the inner product of functional map with the cost matrix results in a _ quantitative _ evaluation of the complexity of the detected symmetry .our problem domain is a compact , connected 2-manifold , , with or without a boundary .the manifold is a _3d shape _ ,i.e. , . distances on the manifold are expressed in terms of an intrinsic distance measure . in particular , the biharmonic distance measure ( bdm ) is used on account of its ease of calculation and greater robustness to local surface perturbation when compared to the geodesic distance measure . in the remainder of the paper ,we use the term intrinsic distance and biharmonic distance interchangeably . in the proposed algorithm each symmetry transformationis represented by a map .consequently , the output of the proposed algorithm consists of maps represented as matrices .the output can be regarded as a complete description of all the overlapping intrinsic symmetries represented in a compact and informative manner .suppose we are given a compact manifold without boundary .following we call intrinsically symmetric if there exists a homeomorphism on the manifold that preserves all geodesic distances .that is : where is the intrinsic distance between two points on the manifold . 
in this case, we call the mapping an intrinsic symmetry .we propose two simple criteria to test whether two surface point pairs on the manifold potentially share the same intrinsic symmetry .specifically , given two surface point pairs and on manifold , the first criterion , which is based on local intrinsic geometry , determines the symmetry potential of the two surface point pairs by comparing their corresponding wave kernel signatures ( wks s ) as follows : where is a scale parameter .the second criterion is based on intrinsic distance as follows : the above two criteria are necessarily satisfied if the surface point pairs under consideration correspond to the same intrinsic symmetry .the proposed framework fails to detect intrinsic symmetry in cases where the second symmetry criterion ( in equation ) is not satified .for example , figure [ fig : failure ] , depicts two human figures that form what may be perceived as a translational symmetry , although they do not possess intrinsic symmetry . the second symmetry criterion ( equation ) fails to hold in this situation . generally speaking ,the proposed algorithm is not designed to detect all forms symmetry resulting from repeated patterns , especially if the patterns are not connected .in contrast to the approach of xu et al . , the use of biharmonic distance instead of geodesic distance as the intrinsic distance measure ensures that the proposed algorithm is capable of detecting intrinsic symmetry even in the presence of small perturbations ( such as small bumpy regions ) on the 3d surface .the first step in the proposed symmetry detection and characterization framework is the correspondence space voting ( csv ) procedure .although a voting procedure has been previously incorporated in an earlier symmetry detection technique , it was carried out primarily in transformation space .the importance of correspondence space for the detection of symmetry was explained more recently by lipman et al . . in our case , although the detected symmetry is finally represented in functional space , it is critical to have good initialization to ensure the success of the final map generation .we have designed and implemented a csv algorithm to facilitate good initial guesses . the csv algorithmcomprises of three stages described in the following subsections : a subset of points with adequate discriminative power needs to be selected prior to the generation of surface point pairs .a subset consisting of sample points is chosen from the surface of the given input 3d shape using the _ farthest point sampling _ strategy .although originally designed to operate on geodesic distances generated using the marching cubes algorithm , in our case we have employed the farthest point random sampling strategy in biharmonic distance space .the results of the sampling procedure are depicted in figure [ fig : sampling ] . the _ biharmonic distance measure _ ( bdm ) is similar in form to the _ diffusion distance measure _ ( ddm ) .the bdm kernel is based on the green s function of the biharmonic differential equation . 
in the continuous case, the ( squared ) biharmonic distance between two points and can be defined using the eigenvectors ( ) and eigenvalues ( ) of the laplace- beltrami operator as follows : the above definition of the bdm is slightly different from that of the ddm where the denominator in the case of the ddm is .however , this subtle change ensures greater control over the characterization of the global and local properties of the underlying manifold in the case of the bdm .consequently , the bdm is fundamentally different from the ddm with significantly different properties .the bdm , as expressed in equation , captures the rate of decay of the normalized eigenvalues of the laplace- beltrami operator ; if the decay is too slow , it produces a logarithmic singularity along the diagonal of the green s function .alternatively , too fast a decay basically ignores eigenvectors associated with higher frequencies , resulting in the bdm being global in nature ( i.e. , the local surface details are ignored ) .et al . _ , demonstrated that performing quadratic normalization provides a good balance , ensuring that the decay is slow enough to capture the local surface properties around the source point and yet rapid enough to encapsulate global shape information .in particular , lipman _ et al . _ have theoretically proved two important properties of the bdm , i.e. , that it is ( i ) a metric , and ( ii ) smooth everywhere except at the source point where it is continuous .the key observation is that for 3d surfaces , the eigenvalues , , of the laplacian are an increasing function of resulting in the continuity of the bdm everywhere and also smoothness of the bdm everywhere except at the source point , where it has only a derivative discontinuity . in our implementation of the farthest pointrandom sampling strategy , a single point is selected randomly at first and the remaining points are chosen iteratively from remainder set by selecting the farthest point in the biharmonic distance space at each iteration .this strategy generates a set of points located mostly in the vicinity of the shape extrema which can then be used in the subsequent surface point pair generation procedure . from the chosen subset consisting of sample points , the surface point pairsare generated by exploiting the similarity of their local intrinsic geometric structure .the similarity of local intrinsic geometric structure of two surface points is determined by comparing their corresponding wave kernel signatures ( wks s ) . to determine the wks of a surface point ,one evaluates the probability of a quantum particle with a certain energy distribution to be located at point .the behavior of the quantum particle on the surface is governed by the schrodinger equation .assuming that the quantum particle has an initial energy distributed around some nominal energy with a probability density function , the solution of the schrodinger equation can then be expressed in the spectral domain as : aubry et al . 
considered a family of log-normal energy distributions centered around some mean log energy with variance . this particular choice of distributions is motivated by a perturbation analysis of the laplacian spectrum. having fixed the family of energy distributions, each point on the surface is associated with a wks of the form: where is the probability of measuring a quantum particle with the initial energy distribution at point . aubry et al. use logarithmic sampling to generate the values . the wks can be shown to exhibit a band-pass characteristic. this reduces the influence of low frequencies and allows better separation of frequency bands across the descriptor dimensions. as a result, the wave kernel descriptor exhibits superior feature localization compared to the heat kernel signature (hks). although the wks is invariant under isometric deformation, in most practical cases where the underlying surface is represented by a discrete triangular mesh, it is not possible to strictly satisfy the invariance criterion in equation . consequently, we have considered the 2-norm (simply the squared euclidean distance in the wks space) of the wks and chosen surface point pairs from the subset which satisfy the wks invariance criterion in equation to within a prespecified threshold instead of requiring strict equality. the surface point pairs that satisfy the wks invariance criterion in equation are considered for the next step of global distance-based voting. this relaxation of the wks invariance criterion ensures that even the overlapping intrinsic symmetries are considered for voting. we have coined the term _ good voters _ to denote the subset of surface point pairs that satisfy the wks invariance criterion (to within the prespecified threshold). the global distance-based voting step in the proposed symmetry detection technique is inspired by the work of xu et al. a subset of symmetric point pairs is extracted from the set of _ good voters _ using the global distance-based voting procedure prior to functional map generation. the goal of the voting procedure is to accumulate symmetry support for the _ good voters _ and extract the symmetric point pairs based on the level of symmetry support received. a point pair in the set of _ good voters _ is deemed to be symmetric if it has a sufficiently large global symmetry support, which is measured by the number of point pairs that satisfy the intrinsic distance criterion in equation . xu et al. have presented a voting technique based on two point pairs followed by spectral clustering on the symmetric point pairs which enables one to distinguish whether a point pair supports one particular symmetry or more than one type of symmetry. however, since we are not interested, at this stage, in making the above distinction between the point pairs, we have chosen to adopt a straightforward approach of choosing a point pair and letting the set of _ good voters _ vote and decide whether or not the particular point pair satisfies the intrinsic distance criterion in equation . another modification to the previous voting procedure of xu et al.
is our use of the biharmonic distance as the intrinsic distance instead of the geodesic distance .this modification provides two advantages over the previous voting procedure , first , since the computation of geodesic distances between all surface point pairs is more intensive than the computation of biharmonic distances , our procedure is much faster .second , biharmonic distances are less sensitive to noise and surface perturbations compared to geodesic distances , making our procedure more robust .before describing our functional map approach to symmetry extraction , we first provide an overview of the functional map framework proposed by ovsjanikov et al .a functional map is a novel approach for inference and manipulation of maps between shapes that tries to resolve the issues of correspondences in a fundamentally different manner . rather than plotting the corresponding points on the shapes ,the mappings between functions defined on the shapes are considered .this notion of correspondence generalizes the standard point - to - point map since every pointwise correspondence induces a mapping between function spaces , while the opposite , in general , is not true .the new framework described above provides an elegant way to avoid direct representation of correspondences as mappings between shapes using a functional representation .ovsjanikov et al . have noted that when two shapes and are related by a bijective correspondence , then for any real function , one can construct a corresponding function as . in other words ,the correspondence uniquely defines a mapping between the two function spaces , where denotes the space of real functions on . equipping and with harmonic bases , and , respectively, one can represent a function using the set of ( generalized ) fourier coefficients as . translating this representation into the other harmonic basis ,one obtains a simple representation of the correspondence between the shapes given by where are fourier coefficients of the basis functions of expressed in the basis of , defined as .the correspondence between the shapes can thus be approximated using basis functions and encoded using a matrix of these fourier coefficients , referred to as the functional matrix . in this representation, the computation of the shape correspondence is translated into a simpler task of determining the functional matrix from a set of correspondence constraints .the matrix has a diagonal structure if the harmonic bases and are compatible , which is a crucial property for the efficient computation of the correspondence . 
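the determination of the functional matrix from correspondence constraints can be sketched in a few lines. the following generic least-squares version is our own schematic with hypothetical array names; it is not necessarily the exact formulation used here, but it shows how coefficients of corresponding functions in one truncated eigenbasis are mapped to the other.

```python
# generic sketch: estimate a functional matrix c from pairs of corresponding
# functions expressed in truncated eigenbases of the two (sub)shapes.
import numpy as np

def fit_functional_map(A, B):
    """A: (k, q) coefficients of q descriptor functions on the source region,
    B: (k, q) coefficients of the corresponding functions on the target region.
    solves min_C ||C A - B||_F^2, i.e. C maps source coefficients to target ones."""
    Ct, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)   # least squares for C^T
    return Ct.T

def project(functions, basis, mass=None):
    """generalized fourier coefficients of the columns of `functions` in `basis`
    (columns are eigenvectors); `mass` is an optional lumped mass vector."""
    if mass is None:
        return basis.T @ functions
    return basis.T @ (mass[:, None] * functions)

# usage sketch (array names are hypothetical): with k basis functions per region and
# q probe functions (e.g. wks bands or indicators around matched sample points),
# C = fit_functional_map(project(F_src, Phi_src, m_src), project(F_dst, Phi_dst, m_dst))
```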
in our symmetry extraction algorithm, instead of comparing two different shapes, we propose to compare two symmetric regions within the same shape. in particular, based on the previously detected set of symmetric point pairs, we leverage the functional map representation in the following manner: for each pair of symmetric points, we deem one point as the source and the other as the destination and choose a local region around each point. the ordering of the source and destination points within the pair is the same as originally chosen during the voting procedure. the corresponding eigenbases for the points in the source and destination regions are computed. these eigenbases are ordered based on their similarity with each other and the final functional map for that particular symmetry is extracted. the functional map representation ensures that (a) the problem of symmetry extraction is tractable and (b) the resulting symmetry can be represented, not by a large matrix of point correspondences, but rather as a more compact map which can be further manipulated for other applications as well. in general, the characterization of a specific transformation based on its functional map is a challenging task. however, in our case, the proposed csv framework ensures that the point pairs used in the generation of the functional map are intrinsically symmetric to a reasonable extent. this property of intrinsic symmetry ensures that the resulting functional map is diagonal or close to diagonal. however, in reality, there are several cases where the actual transformation deviates substantially from an isometric deformation, resulting in a densely populated functional matrix. the diagonality property of the matrix was first exploited successfully in an intrinsic correspondence framework using a sparse modeling technique . in their paper, the authors introduced a weight matrix of the same size as the functional matrix, in which lower weights are assigned to elements close to the diagonal and larger weights to the farther off-diagonal elements, using an inverted gaussian model with zero weight on the diagonal and increasingly large weights off the diagonal. the element-wise multiplication of the weight matrix with the functional matrix is a determining factor in the assessment of diagonality, since the resulting matrix inner product accumulates large contributions from the off-diagonal elements and small contributions from the diagonal elements, thereby yielding a measure of the off-diagonality of the matrix. (figure: transformations arranged in increasing order of the inner product of the functional matrix and the weight matrix.) in the context of the symmetry characterization problem, we assume that the off-diagonality of the functional matrix corresponds to the complexity of the symmetry transformation and is therefore penalized during multiplication with the elements of the weight matrix, i.e., more non-zero off-diagonal elements in the functional matrix represent a more complex symmetry transformation. thus, we can not only determine the complexity of the symmetry transformation but can also successfully formulate a 1d semi-metric symmetry space, wherein each symmetry transformation is represented as a point in the symmetry space with a value given by the inner product between the functional matrix and the weight matrix. the euclidean distance between the points in the 1d symmetry space represents the complexity distance between the transformations. in the symmetry space, any perfectly isometric transformation will result in a point at the origin of the line.
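a small numerical sketch of this diagonality-based score is given below; it is our own illustration, and the width of the inverted gaussian as well as the normalization by the total mass of the matrix are assumptions rather than values from the paper. a near-diagonal functional matrix scores close to zero, while a dense one scores higher.

```python
# sketch of the diagonality-based complexity score: an inverted-gaussian weight matrix w
# (zero on the diagonal, growing off it) is contracted with the functional matrix c;
# near-isometric symmetries give scores close to zero.
import numpy as np

def weight_matrix(k, sigma=None):
    if sigma is None:
        sigma = k / 4.0                                # assumed width, not from the paper
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    return 1.0 - np.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))

def symmetry_complexity(C, W=None):
    """inner product <w, |c|>, normalized by the total mass of c (an added assumption);
    0 for a perfectly diagonal (isometric) map, larger for off-diagonal energy."""
    if W is None:
        W = weight_matrix(C.shape[0])
    return float(np.sum(W * np.abs(C)) / np.sum(np.abs(C)))

k = 20
C_iso = np.diag(np.sign(np.random.randn(k)))           # ideal near-isometric map
C_complex = np.linalg.qr(np.random.randn(k, k))[0]     # dense orthogonal map
print("near-diagonal:", round(symmetry_complexity(C_iso), 3),
      "dense:", round(symmetry_complexity(C_complex), 3))
```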
more complex symmetric transformations, on the other hand, would result in points farther away from the origin, signifying more complex transformations. this space is called a semi-metric space because it follows two of the main properties of a distance definition but not the third one. for two points and in the symmetry space, the distance between them always satisfies 1. non-negativity and 2. symmetry, but not 3. the triangle inequality. it is also possible to cluster the points in the 1d symmetry space to identify intrinsic symmetries which are potentially similar in nature, as shown in the experimental results section to follow. in this section, we present and discuss the results obtained by the proposed intrinsic symmetry detection algorithm on 3d shapes. we also provide comparisons of our results with those obtained from the most closely related approaches . we also show some applications where the detected symmetries can be further analyzed for symmetry characterization and clustering, potentially revealing greater semantic information about the underlying 3d shape. most of the 3d shape models used in our experiments are from the _ non-rigid world _ dataset unless mentioned otherwise. several discrete schemes have been proposed in recent years to approximate the laplace-beltrami operator on triangular meshes. among these, the one used most widely for computing the discrete laplace operator is the cotangent (cot) scheme, originally proposed by pinkall and polthier. recently, belkin et al. proposed a discrete scheme based on the heat equation, which has been proven to possess the point-wise convergence property for arbitrary meshes. although it is well known that no discrete laplace operator can share all of the properties of its continuous counterpart, in our experiments the aforementioned discrete schemes produce eigenfunctions that approximately preserve the convergence property of the continuous laplace operator for a reasonably well sampled triangular mesh.
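for completeness, the following compact construction of the cotangent-weight laplacian with a lumped (barycentric) mass vector illustrates the discretization just described; it is a standard formulation written by us for illustration, not the authors' implementation, and it omits boundary handling and numerical-robustness details.

```python
# compact cotangent-weight laplacian with a lumped (barycentric) mass vector,
# as commonly used to discretize the laplace-beltrami operator on a triangle mesh.
import numpy as np
import scipy.sparse as sp

def cotangent_laplacian(verts, faces):
    """verts: (n, 3) float array, faces: (m, 3) int array.
    returns (L, mass): L is the symmetric cotangent matrix (positive semi-definite),
    mass the lumped per-vertex areas; eigenpairs solve L x = lambda * diag(mass) x."""
    n = len(verts)
    I, J, V = [], [], []
    mass = np.zeros(n)
    for a, b, c in faces:
        pa, pb, pc = verts[a], verts[b], verts[c]
        area = 0.5 * np.linalg.norm(np.cross(pb - pa, pc - pa))
        mass[[a, b, c]] += area / 3.0
        # for each edge (i, j) of the face, the weight is half the cotangent of the
        # angle at the opposite vertex k
        for (i, j, k) in ((a, b, c), (b, c, a), (c, a, b)):
            u, v = verts[i] - verts[k], verts[j] - verts[k]
            cot = np.dot(u, v) / (np.linalg.norm(np.cross(u, v)) + 1e-12)
            w = 0.5 * cot
            I += [i, j, i, j]; J += [j, i, i, j]; V += [-w, -w, w, w]
    L = sp.coo_matrix((V, (I, J)), shape=(n, n)).tocsr()   # duplicates are summed
    return L, mass
```

the eigenpairs used by the wks, the biharmonic distance, and the functional maps can then be obtained from the generalized eigenproblem, e.g. with scipy.sparse.linalg.eigsh(L, k, M=sp.diags(mass), sigma=0).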
in all of the experiments described below ,we have used the cotangent scheme for the computation of the discrete laplace - beltrami operator .the results of the proposed symmetry detection algorithm are depicted in figures [ fig : teaser ] , [ fig : overlap ] , [ fig : noise ] and [ fig : clustering ] .several important properties of the proposed algorithm are highlighted in these results .the ability of the proposed algorithm to identify multiple intrinsic symmetries is evident from the results shown in figures [ fig : overlap ] and [ fig : clustering ] .the extracted symmetries are seen to cover the global symmetry of the underlying 3d shape which has undergone approximate isometric deformations .additionally , the proposed algorithm is also observed to be capable of detecting symmetry transformations that cover individual components of a 3d object that possess various forms of self - symmetry .one particularly important aspect of the proposed algorithm is its ability to detect instances of overlapping symmetry .an instance of overlapping symmetry is deemed to occur when a specific region on the surface of the 3d shape is simultaneously subjected to more than one symmetry transformation and , as a result , is symmetric to more than one region on the 3d shape surface .for example , figure [ fig : overlap ] shows that the overlapping symmetry between _ all _ paws of the _ cat _ shape model is succesfully detected by the proposed technique .six different combinations of symmetry transformation between the four paws are depicted in figure [ fig : overlap ] .although the csv procedure is statistical in nature , the proposed biharmonic distance - based voting scheme ensures its robustness to noise .in particular , figure [ fig : noise ] demonstrates the robustness of the proposed symmetry detection technique for different levels of synthetic gaussian noise added to the _ human _ shape model .all of the experiments reported in this paper were performed on an intel core 3.4 ghz machine with 8 gb ram .for all the example models , the number of sample points were in the range [ 20 , 100 ] .table [ tab : timing ] reports the timing results for the various steps in the proposed symmetry detection algorithm .in particular , unlike , wherein the most time consuming step of all - pairs geodesic distance computation is not reported , we also report the timing results for the equivalent step in our formulation i.e. , the all - pairs biharmonic distance computation .the most time consuming step in the proposed algorithm , i.e. , the all - pairs biharmonic distance computation , accounts for around 80% of the execution time of the proposed algorithm .more importantly , bypassing the two - step spectral clustering procedure described in reduces significantly the computation time of the proposed algorithm .[ tab : timing ] we have compared the proposed symmetry detection algorithm primarily with methods that could be deemed sufficiently similar , . the symmetry detection technique of xu et al . can detect overlapping partial intrinsic symmetries whereas that of lipman et al . is designed to deal with partial extrinsic symmetries . however , if the symmetric sub - shapes do not undergo significant pose variations , the global alignment component of may allow it to detect certain partial intrinsic symmetries as well . 
however , whereas both methods , are capable of detecting instances of partial intrinsic symmetry , neither is able to characterize the underlying symmetry .in contrast , the proposed algorithm , not only detects overlapping intrinsic symmetries , but it also has the ability to characterize and cluster the detected symmetries in symmetry space . since their entire formulation is based solely on global intrinsic distance - based voting , the technique of xu et al . suffers from the shortcoming of not being able to reliably detect symmetry flips .the symmetry flip phenomenon is deemed to occur when for the same symmetric transform , the point correspondences interchange their relative positions due to ill - posed symmetry detection criteria .the proposed algorithm , on the other hand , interpolates the functional map of symmetry transformations from the chosen directed point pair , to the remaining point pairs , i.e. , once the source and destination within a point pair are identified , the remaining correspondences are obtained via interpolation using the functional map .the functional map ensures that the interpolated point pairs observe the same direction of symmetry as the initial directed pair , i.e. , the functional map preserves the relative positions of points within a point pair that are subject to the same symmetry transformation .the functional map thus provides a natural solution to the symmetry flip problem as evidenced in figure [ fig : flip ] .the symmetry - factored embedding ( sfe ) technique of lipman et al . , though designed for extrinsic symmetry detection , in simpler cases , is able to detect intrinsic symmetries as well . however , it fails completely in cases of overlapping symmetries whereas the proposed csv procedure ensures the detection of instances of overlapping intrinsic symmetry as shown in figure [ fig : sfe ] . in this section, we present quantitative evaluation of the results obtained by the proposed approach . in order to evaluate the proposed algorithm , we used the shrec 2010 feature detection and description benchmark .this dataset comprises of three shapes ( null shapes ) and a set of shapes obtained by applying a set of transformations on the null shapes .shapes have approximately 10,000 to 50,000 vertices and their surfaces are represented by triangular meshes .we have specifically considered shapes which have undergone changes characterized by isometry , topology , micro - holes , scale and noise . for each transformation , in the initial phase, a total 50 sample points are generated using the farthest random sampling strategy as shown in figure [ fig : quant ] .almost all transformations maintain a high repeatability ( greater than 80% ) at overlap values except for changes in topology .the diagonality property of the functional map could be used to characterize the underlying symmetry transformation and classify it as a simple transformation or as one that is more complex in nature .our assumption is that , greater the complexity of symmetry transformation , greater the deviation of the shape deformation from intrinsic isometry , resulting in a deformation characterized by a functional matrix with higher off - diagonal element values .the resulting characteriztion of the isometric deformation is depicted in figure [ fig : charac ] . 
as discussed in , symmetry has a group structure .consequently , the retrieval of symmetry transformation can be cast as a clustering problem from an algorithmic perspective .we exploit the functional maps generated previously to cluster the detected instances of symmetry in transformation space and retrieve the symmetry groups . due to the symmetric flip problem inherent in the csv procedure, we adjusted the relative positions of source and destination points within each point pair by finding the minimum distance between the source and destination points in the biharmonic distance space .this ensures that the functional maps generated from potentially similar transformations will have similar structure since the symmetry flip problem between the maps is resolved .in particular , we have used a simple -means clustering algorithm to cluster the functional maps based on their symmetry groups as depicted in figure [ fig : clustering ] .we have presented an algorithm for detection and characterization of intrinsic symmetry in 3d shapes .while the results obtained are encouraging , we regard our work as an initial attempt towards complete understanding of and a verifiable solution to the general problem of symmetry detection and characterization . we have identified some limitations of our approach and we hope to address these in our future work . to the best of our knowledge ,this is one of the first attempts to formalize the symmetry analysis problem not only as one of symmetry detection , but as one that can be extended to include symmetry characterization and symmetry clustering in the transformation space .in particular , the introduction of the functional map formalism in symmetry detection enables us to come up with a novel representation of the symmetry transformation as a map . in future , we aim to formulate operations , such as addition and subtraction , on these generated maps that would potentially provide a deeper and more comprehensive understanding of intrinsic symmetry in general . the incorporation of _ transformation space mapping _ in symmetry characterization is a completely new idea and the full potential of it can only be realized after more extensive experimentations .in particular , we plan to study the possibility of map based exploration of similar symmetric transformations across shapes in near future .another important direction that can be considered is the possibility of incorporating this technique in computer aided geometric design for urban architecture . in urban architecture ,symmetric repetition of same pattern is a common thing and during the design phase , if the basic structure is stored only once and the symmetric repetitions are saved as _functional maps _, it can possibly solve both the space complexity problem and the design efficiency problem .bronstein , a. m. , bronstein , m. m. , bustos , b. , castellani , u. , crisani , m. , falcidieno , b. , guibas , l. j. , kokkinos , i. , murino , v. , sipiran , i. , ovsjanikov , m. , patane , g. , spagnuolo , m. , and sun , j. ( 2010 ) shrec 2010 : robust feature detection and description benchmark .workshop on 3d object retrieval ( 3dor10)_. bronstein , a. m. , bronstein , m. m. , and kimmel , r. ( 2007 ) calculus of non - rigid surfaces for geometry and texture manipulation ._ ieee trans . visualization and computer graphics _ , vol 13(5 ) , september - october , pp . 902913 .berner , a. , bokeloh , m. , wand , m. , schilling , a. 
, and seidel , h .-( 2008 ) a graph - based approach to symmetry detection .fifth eurographics / ieee vgtc conf .point - based graphics _ , eurographics association , pp. 18 .li , w. , zhang , a. , and kleeman , l. ( 2005 ) fast global reflectional symmetry detection for robotic grasping and visual tracking ._ proc . australasian conference on robotics and automation ( acra05 ) _ , december .podolak , j. , shilane , p. , golovinskiy , a. , rusinkiewicz , s. , and funkhouser , t. ( 2006 ) a planar - reflective symmetry transform for 3d shapes ._ acm transactions on graphics ( tog ) _25(3 ) , july , pp . 549559 .porkass , j. , bronstein , a. m. , bronstein , m. m. , sprechmann , p. , and sapiro , g. ( 2013 ) sparse modeling of intrinsic correspondences ._ computer graphics forum _ , 32(2.4 ) , blackwell publishing ltd . , pp .459468 .wang , y. , xu , k. , li , j. , zhang , h. , shamir , a. , liu , l. , cheng , z. , and xiong , y. ( 2011 ) symmetry hierarchy of man - made objects ._ computer graphics forum _ , 30(2 ) ,blackwell publishing ltd . , pp .287-296 yen , l. , fouss , f. , decaestecker , c. , francq , p. , and saerens , m. ( 2007 ) graph nodes clustering based on the commute - time kernel .11th pacific - asia conference on knowledge discovery and data mining ( pakdd 2007 ) _ , springer lecture notes in computer science , lncs . | a comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3d shapes is proposed . to identify prominent symmetric regions which overlap in space and vary in form , the proposed framework is decoupled into a _ correspondence space voting _ procedure followed by a _ transformation space mapping _ procedure . in the correspondence space voting procedure , significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit _ local _ similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a _ global _ level . since different point pairs can share a common point , the detected symmetric shape regions can potentially overlap . to this end , a _ global intrinsic distance - based voting _ technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry . in the transformation space mapping procedure , the _ functional map _ framework is employed to generate the final map of symmetries between point pairs . the transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3d shape that follows a particular symmetry . additionally , the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation . the proposed transformation space mapping procedure is shown to result in the formulation of a semi - metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations . experimental results show that the proposed framework can successfully process complex 3d shapes that possess rich symmetries . |
on numerous occasions during our research in social media , resource sharing , intention analysis , and dissemination patterns , an interesting question emerged : when did a certain resource first appear on the public web ? upon examining a resource , one could find a publishing timestamp indicating when this resource was created or first made available to the public . for those select few pages ,the timestamp format varies largely along with the time granularity .some forum posts could deliver solely the month and the year of publishing , while in other news sites one can extract the timestamp to the second .time zones could be problematic too : if not clearly stated on the page , the time zone could be that of the webserver , crawler / archive , or gmt .ideally , each resource should be accompanied by a creation date timestamp but this is not true in most cases .a second resort would be to ask the hosting web server to return the last modified http response header .unfortunately , a large number of servers deliberately return more current last modified dates to persuade the search engine crawlers to continuously crawl the hosted pages .this renders the dates obtained from the resource or its server highly unreliable . in our prior work ,some of the social media resources we were investigating , ceased to exist .we needed to investigate the time line of this resource from creation , to sharing , to deletion . depending on the hosting server to provide historic information abouta missing resource is unachievable in most cases .this places a limitation to services that attempt to parse the resource textual representation or even its uri looking for timestamps. the following step would be to search the public archives for the first existence of the resource . as we show belowthat using this method solely has significant limitations .thus there is a need for a tool that can estimate the creation date of any resource investigated without relying on the infrastructure of the hosting web server or the state of the resource itself .some pages are associated with apis or tools to extract its metadata , but unfortunately they are non - unified , extremely specific , and what works on one page would not necessarily work on the other . due to the speed of web content creation and the ease of publishing ,a certain assumption could be established . in some cases , like in blogs ,a page could be created and edited before it is published to the public . to facilitate our analysis, we will assume that the creation and publishing of a resource coincide .if the creation date of the resource is unattainable , then the timestamp of its publishing or release could suffice as a fairly accurate estimate of the creation date of the resource . asfire leaves traces of smoke and ashes , web resources leave traces in references , likes , and backlinks . the events associated with creating those shares , links , likes , and interaction with the uri could act as an estimate as well . if we have access to these events , the timestamp of the first event could act as a sufficient estimate of the resource s creation date . 
in this paper , we investigate using those traces on the web to estimate the creation date of the published resource .finally , we propose an implementation to this tool based on our analysis to be utilized by researchers .the problem of estimating the age of web resources has been visited before , but from a different angle .jatowt et al .investigated the age of web content posted in dynamic pages .they utilized a multiple binary search algorithm to extract the first time the content of a certain dom component of the page started to appear within the page in the archives .they analyzed multiple versions of the web page provided by the public archives .after breaking down the page to multiple dom components , the archived versions were explored using binary search for the first existence of each of these components .the timestamp of this first appearance is recorded indicating an estimate for when the enclosed web content , within each component , was created .this approach , relies on the archiving coverage of the web provided by the public archives , and the temporal difference between when the content s creation date and the time it was crawled and archived .this period of time could range from a few hours in heavily archived pages , up to more than a year in other cases . to access and analyze the public archives we utilized the memento framework which facilitated the navigation between the current and the past web .we investigated web archival coverage while estimating how much of the web is archived . in our experiment , we sampled the web forming four different data sets extracted from four sources .we found that the amount of the web currently archived ( or having at least one accessible past version in the public web archives ) is highly correlated to where the web resource resides .accordingly , the percentage of coverage ranges from 16% to 79%. 
this would be the case for long standing resources that exist on the web at the time of archiving .our recent study , investigating resources related to multiple historical events since 2009 , showed that the published resources are at continuous risk of disappearance and within the first year of publishing about 11% disappear .this is important if the resource whose age we wish to estimate existed on the web only briefly .this disappearance event might occur prior to the first archival crawl , resulting in complete unattainability of the resource .an investigation on the web resource itself mining for timestamps in the published content was conducted by inoue and tajima .they analyzed web pages for timestamp embedded by content management systems ( cms ) .this approach supports the most popular date formats but could suffer from ambiguity due to dates the mix in the month versus day order in the uk format versus in the us one .the authors applied different techniques in attempts to solve this ambiguity .as accurate the results of this approach could be , it still remains specific to cmss and highly reliant on the content itself , reducing its generality .we propose analyzing different other sources and services to mine for the first appearance of the resource .these services vary in reliability and the results they provide which demanded that we conduct an evaluation of each of the services we used and investigate the amount of accuracy lost upon the failure of each service .it is worth noting that mccown and nelson conducted an experiment to gauge the difference between what some services like google search might provide from both their api versus the web interface .they found a significant difference in the results from both sources .similarly , klein conducted a study analyzing the results from using the delicious.com api vs. screen scraping the web interface .he proved that screen scraping provided better results than utilizing the api , which we considered in our analysis .there are three reasons we can not use just the web archives to estimate the creation date .first , not all pages are archived .second , there is often a considerable delay between when the page first appeared and when the page was crawled and archived .third , web archives often quarantine the release of their holdings until after a certain amount of time has passed ( sometimes 612 months ) .these three major deficiencies discourage the use of the web archives solely in estimating an accurate creation date timestamp for web resources . in the following sections , we investigate several other sources that explore different areas to uncover the traces of the web resources . 
utilizing the best of a range of methodssince we can not rely on one method alone , we build a module that gathers this information and provides a collectively estimation of the creation date of the resource .figure [ fig : timeline ] illustrates the methodology of the age estimation process with respect to the timeline of the resource .prior to investigating any of the web traces we return back to the basics , to the resource itself .we send a request for headers to the hosting web server and parse the output .we search for the existence of last modified date response header and parse the timestamp associated if it exists .we use the ` curl ` command to request the headers as shown in figure [ fig:0 ] .we also note the timestamp obtained from the headers can have errors as demonstrated in a study of the quality of etags and last - modified datestamps by clausen . * ` curl -i http://ws-dl.blogspot.com/2012/02/2012-02-11-losing-my-revolution-year.html ` * -5 mm .... http/1.1 200 ok content - type : text / html ; charset = utf-8 expires : sat , 02 mar 2013 04:04:09 gmt date : sat , 02 mar 2013 04:04:09 gmt cache - control : private , max - age=0 last - modified : we d , 27 feb 2013 17:27:20 gmt etag : " 473ba56b - fd4a-4778-b721 - 3eabdd34154e " x - content - type - options : nosniff x - xss - protection : 1 ; mode = block content - length : 0 server : gse .... typically , we think of backlinks as discoverable through search engines . in the next sections we explore the different forms of backlinks and how we can utilize them in our investigation .firstly , a backlink refers to the link created on a page _ a _ referring to the intended page _b_. page _ a _ is considered a backlink of _ b_. if page _ a _ is static and never changed this means that it was created at point in time following the creation of _ b _ , could be by minutes or years . if page _ a _ was change - prone and had several versions , the first appearance of the link to page _ b _ on _ a _ could trigger the same event indicating that that it happened also at a point in time following the creation of _b_. if we can search the different versions of a throughout time we can estimate this backlink timestamp . to accomplish this , we utilized google api in extracting the backlinks of the uri . note that google api is known to under - report backlinks as shown by mccown and nelson . to explore the multiple versions of each of the backlinks we utilize the memento framework in accessing the multiple public archives available each backlink we extract its corresponding timemaps .we use binary search to discover in the time maps the first appearance of the link to the investigated resource in the backlink pages . using binary searchensures the speedy performance of this section of the age estimating module . 
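the per-backlink search itself is a standard binary search over a chronologically sorted timemap. the sketch below assumes the timemap has already been retrieved as a list of (memento_datetime, memento_uri) pairs, and that a helper memento_links_to(memento_uri, target_uri) is available to fetch an archived copy and scan it for the target link; that helper is a hypothetical stand-in, not part of the memento api.

....
def first_memento_with_link(timemap, target_uri, memento_links_to):
    """return the datetime of the earliest memento that links to target_uri.

    timemap: list of (memento_datetime, memento_uri), sorted by datetime.
    the search assumes that once the link appears in the backlink page it
    stays present in later mementos, which is what makes binary search valid.
    """
    lo, hi, found = 0, len(timemap) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        dt, memento_uri = timemap[mid]
        if memento_links_to(memento_uri, target_uri):
            found = dt   # candidate answer; keep looking for an earlier one
            hi = mid - 1
        else:
            lo = mid + 1
    return found
....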
with the backlink having the most archived snapshots ( cnn.com > 23,000 mementos ) , the process took less than 15 iterations accessing the web archives .the minimal of the first appearance timestamps from all the backlinks is selected as the estimated backlink creation date .similarly , this date can act as a good estimation to the creation date of the resource .twitter enables users to associate a link with their tweeted text , technically creating a backlink to the shared resource .when a user creates a web resource and publicizes it on their social network , by tweeting a link to it or posting it on their facebook account , they create backlinks to their resource .typically , these backlinks are not accessible via a search engine .the more popular the user and the more the resource gets retweeted or shared , the more backlinks the original resource gains increasing its rank and discoverability in search engines .to elaborate , we examine the following scenario .a resource has been created at time , as shown in fig [ fig:1 ] and shortly after a social media post , or a tweet , has been published referring to the resource at time as shown in fig [ fig:2 ] .this new time , could act as a fairly close estimate to the creation date of the post with a tolerable margin of error of minutes in some cases between the original and .given this scenario , tweets inherently are published with a creation / posting date which makes it easier to extract . the task remaining is to find the tweets that were published with the targeted resource embedded in the text with incorporating all the shortened versions of the uri as well .twitter s timeline search facility and its api both provide results of a maximum of 9 days from the current day .accordingly , we utilize another service , topsy.com , that enables the user to search for a certain uri and get the latest tweets that incorporated it and the influential users sharing it .topsy s otter api provides up to 500 of the most recent tweets published embedded a link to the resource and the total number of tweets ever published . except for the highly popular resources ,the 500 tweets limit is often sufficient for most resources .the tweets are collected and the corresponding posting timestamps are extracted .the minimum of these timestamps acts as an indication of the first time the resource was tweeted .this timestamp in turn signifies the intended mentioned earlier .another form backlinks could take is uri shortnening .currently , there are hundreds of services that enables the user to create a short uri that references another longer uri and acts as an alias to it for easier dissemination on the web .shortened uris could be used for the purposes of customizing the uri or for monitoring the resource by logging the amount of times the short uri have been dereferenced or clicked . some services , like bitly ,can provide the users with a lookup capability for long uris .when a uri is shortened for the first time by a non logged - in user , it creates an aggregate public short uri that is public to everyone .when other unauthenticated users attempt to shorten the same uri it provides the original first aggregated short uri .for every logged - in user , the service provides the possibility to create another personal shortened uri . 
for our purposes we lookup the aggregated short uri indicating the first time the resource s uri have been shortened by this service and from that we query the service once more for the short uri creation timestamp .bitly has been used as the official automatic shortener for years by twitter before they replaced it with their own shortener . similarly to the previous backlinks method we mine bitly for those creation timestamps and use them as an estimate of the creation date of the resource , assuming the author shortens and shares the resource s uri shortly after publishing it .the most straightforward approach used in the age estimation module is the web archives analysis .we utilize the memento framework to obtain the timemap of the resource and from which we obtain the memento datetime for each and then extract the least one indicating the first memento captured .note that memento datetime is the time of capture at the web archive and is not equivalent to last modified or creation dates . in some cases, the original headers in some mementos include the original last modified dates , but all of them have the memento date time fields .we extract each of those fields , parse the corresponding dates , and pick the lowest of which .an extra date range filter was added to avoid dates prior to 1995 , before the internet archive began archiving , or more than the current timestamp .the final approach is to investigate the search engines and extract the last crawled date . except for the highly active and dynamic web pages ,the resources get crawled once and get marked as such to prevent unnecessary re - crawling .news sites article pages , blogs , and videos are the most encountered examples of this .the idea is to use the search engines apis to extract this last crawled date and utilize it as an estimate of the creation date .this approach is effective due to the relatively short period of time between publishing a resource and its discovery by search engine crawlers .we use google s search api and modify it to show the results from the last 15 years accompanied by the first crawl date .unfortunately this approach does not give time granularity ( hh : mm : ss ) , just dates ( yyyy : mm : dd ) .to validate an implementation of the methods described above , we collect a gold standard dataset from different sources which we can extract the real publishing timestamps .this could be done by parsing feeds , parsing web templates , and other methods . in the next sections we illustrate each of the sourcesutilized and explain the extraction process .two important factors were crucial in the data collection process : the quality of the timestamps extracted , and the variety of the sources to reduce any bias in the experiment .thus , we divide data into four categories .table [ tab:1 ] summarizes the four categories .each article is associated with a timestamp in a known template that can be parsed and extracted .the articles are also usually easily accessible through rss and atom feeds or xml sitemaps . for each of the news sites under investigation we extracted as many resources as possible then randomly downsized the sample . to increase the variety of the gold standard dataset we investigate five different social media sources .these selected sources are highly popular , and it is possible to extract accurate publishing timestamps . 
as those sources are tightly coupled with the degree of popularity and to avoid the bias resulting from this popularity we randomly extract as many resources as possible from the indexes , feeds , and sitemaps and do not rely solely on the most famous blogs or most shared tumblr posts .furthermore , we randomly and uniformly sample each collection to reduce its size for our experiment . so as not to limit our gold standard dataset to low level articles , blogs , or posts only , we incorporated top level , long - standing domains . to extract a list of those domains we mined alexa.com for the list of the top 500 sites .this list of sites was in turn investigated for the dns registry dates using one of the dns lookup tools available online . a final set of 100 was randomly selected from the resolved sites and added to the gold standard dataset .finally , we randomly select a set of 100 uris that we can visually identify the timestamp somewhere on the page itself .these uris were selected empirically using random walks on the web .the 10 uris analyzed in included within these 100 uris as well .the corresponding true value of the creation timestamp for each of the 10 uris is the one provided in their analysis .the collected dataset of 1,200 data points is tested against the developed implementation of the carbon dating methods and the results are recorded .since the data points are collected from different sources , the granularity varies in some cases , as well as the corresponding time zones . to be consistent , each real creation date timestamp is transformed from the corresponding extracted timestamp to coordinated universal time ( utc ) and the granularity for all the timestamped have been set to be a day .each data point has a real creation date in the iso 8601 date format without the time portion ( e.g. , yyyy : mm : dd ) .similarly , the extracted estimations were processed in the same manner and recorded . for each method ,we record the estimated timestamp and the temporal delta between the estimated timestamp and the actual one as shown in equation [ eq:1 ] .collectively , we calculate the best estimated timestamp as in equation [ eq:2 ] , the closest delta between all the methods and the real timestamp as shown in equation [ eq:3 ] , and the method that provided this best estimate . table [ tab : auc ] shows the outcomes of the experiment .the numbers indicate how many times a resource provided the closest timestamp to the real one .it also shows that for 290 resources , the module failed to provide a single creation date estimate ( 24.90% ) .as our age estimation module relies on other services to function ( e.g. , bitly , topsy , google , web archives ) ; the next step is to measure the effect of each of the six different age estimation methods and to gauge the consequences resulting in failure to obtain results from each . for each resourcewe get the resulting best estimation and calculate the distance between it and the real creation date .we set the granularity of the delta to be in days to match the real dates in the gold standard dataset .to elaborate , if the resource was created on a certain date and the estimation module returned a timestamp on the same day we declare a match and in this case = 0 . 
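the per-method comparison reduces to a difference in whole days. a small sketch follows, assuming each method's estimate has already been parsed into a datetime.date; the method names and dates used here are only illustrative.

....
from datetime import date

def deltas_in_days(real_date, estimates):
    """estimates: dict mapping method name -> date, or None if the method failed."""
    return {m: (est - real_date).days
            for m, est in estimates.items() if est is not None}

def best_estimate(real_date, estimates):
    """return (method, delta) with the smallest absolute delta, or None."""
    d = deltas_in_days(real_date, estimates)
    if not d:
        return None
    best = min(d, key=lambda m: abs(d[m]))
    return best, d[best]

real = date(2012, 2, 11)
ests = {"bitly": date(2012, 2, 11), "topsy": date(2012, 2, 12),
        "archives": date(2012, 7, 3), "last_modified": None}
print(best_estimate(real, ests))  # ('bitly', 0), i.e. a perfect match
....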
to measure the accuracy of estimation, 393 resources out of 1200 ( 32.78% ) returned = 0 indicating a perfect estimation .for all the resources , we sort the resulting deltas and plot them .we calculate the area under the curve using the composite trapezoidal rule and the composite simpon s rule with x - axis spacing of 0.0001 units .we take the average of both approximations to represent the area under the curve ( auc ) .semantically , this area signifies the error resulting from the estimation process .ideally , if the module produced a perfect match to the real dates , auc = 0 .table [ tab : auc ] shows that the auc using the best lowest estimate of all the six methods is 762.64 .disabling each method one by one and measuring the auc indicates the resultant error corresponding to the absence of the disabled method accordingly .the table shows that using or disabling the use of backlinks barely affected the results .disabling the bitly services or the google search index query affected the results slightly ( 0.51% and 2.64% respectively ) . while disabling any of the public archives query , or the social backlinks in topsy and the extraction of the last modified date if exists hugely affects the results increasing the error tremendously .we utilized polynomial fitting functions to fit the values corresponding to the age estimations corresponding to each uri .figure [ fig : fitted ] shows the polynomial curve of the second degree used in fitting the real creation times stamps of the gold standard dataset .figure [ allfitted ] shows the fitted curve resulting from removing each of the methods one by one .each of the curves signifies an estimate of the best the other methods could provide .the further the estimated curve is from the real one the less accurate this estimation would be .+ + * ` curl -i http://cd.cs.odu.edu/cd/http://www.mementoweb.org ` * -5 mm .... http/1.0 200 ok date : fri , 01 mar 2013 04:44:47 gmt server : wsgiserver/0.1 python/2.6.5 content - length : 550 content - type : application / json ; charset = utf-8 { " uri " : " http://www.mementoweb.org " , " estimated creation date " : " 2009 - 09 - 30t11:58:25 " , " last modified " : " 2012 - 04 - 20t21:52:07 " , " bitly " : " 2011 - 03 - 24t10:44:12 " , " topsy.com " : " 2009 - 11 - 09t20:53:20 " , " backlinks " : " 2011 - 01 - 16t21:42:12 " , " google.com " : " 2009 - 11 - 16 " , " archives " : { " earliest " : " 2009 - 09 - 30t11:58:25 " , " by archive " : { " wayback.archive-it.org " : " 2009 - 09 - 30t11:58:25 " , " api.wayback.archive.org " : " 2009 - 09 - 30t11:58:25 " , " webarchive.nationalarchives.gov.uk " : " 2010 - 04 - 02t00:00:00 " } } } ....after validating the accuracy of the developed module the next step was to openly provide age estimation as a web service .to fulfill this goal , we created `` * * _ carbon date _ * * '' , a web based age estimation api . to use the api , simply concatenate the uri of the desired resource to the following path : + _ http://cd.cs.odu.edu/cd/ _ and the resulting json object would be similar to the one illustrated in figure [ fig : json ] .estimating the age of web resources is essential for many areas of research .previous research investigated the use of public archives as a point of reference to when the content of a certain page appeared . 
in this study , we investigated several other possibilities in estimating the accurate age of a resource including social backlinks ( social posts and shortened uris ) , search engine backlinks , search engine last crawl date , the resource last modifed date , the first appearance of the link to the resource in its backlinks sites , and the archival first crawl timestamp .we also incorporated the minimum of the original headers last modified date , and the memento - datetime http response header .all of these methods combined , where we select the oldest resulting timestamp , proved to provide an accurate estimation to the creation date upon evaluating it against a gold standard dataset of 1200 web pages of known publishing / posting dates .we succeeded in obtaining an estimated creation date to 910 resources out of the 1200 in the dataset ( 75.90% ) .40% of the closest estimated dates were obtained from google , topsy came in second with 26% , followed by the public archives , bitly , and last modified header with 17% , 11% , and 6% respectively . usingthe backlinks yielded only 3 closest creation dates proving its insignificance .we also simulate the failure of each of the six services one at a time and calculated the resulting loss in accuracy .we show that the social media existence ( topsy ) , the archival existence ( archives ) and the last modified date if it exists , are the strongest contributers to the age estimation module respectively .this work was supported in part by the library of congress and nsf iis-1009392 .s. g. ainsworth , a. alsum , h. salaheldeen , m. c. weigle , and m. l. nelson .how much of the web is archived ? in _ proceeding of the 11th annual international acm / ieee joint conference on digital libraries _, jcdl 11 , 2011 .d. antoniades , i. polakis , g. kontaxis , e. athanasopoulos , s. ioannidis , e. p. markatos , and t. karagiannis .we.b : the web of short urls . in _ proceedings of the 20th international conference on world wide web _ , www 11 , pages 715724 , new york , ny , usa , 2011 .m. inoue and k. tajima .noise robust detection of the emergence and spread of topics on the web . in _ proceedings of the 2nd temporal web analytics workshop _ , tempweb 12 , pages 916 , new york , ny , usa , 2012 .a. jatowt , y. kawai , and k. tanaka . detecting age of page content . in _ proceedings of the 9th annual acm international workshop on web information and data management _ , widm 07 , pages 137144 , new york , ny , usa , 2007 .acm .h. m. salaheldeen and m. l. nelson .losing my revolution : how many resources shared on social media have been lost ? in _ proceedings of the second international conference on theory and practice of digital libraries _ , tpdl12 , pages 125137 , berlin , heidelberg , 2012 .springer - verlag . | in the course of web research it is often necessary to estimate the creation datetime for web resources ( in the general case , this value can only be estimated ) . while it is feasible to manually establish likely datetime values for small numbers of resources , this becomes infeasible if the collection is large . we present `` carbon date '' , a simple web application that estimates the creation date for a uri by polling a number of sources of evidence and returning a machine - readable structure with their respective values . 
to establish a likely datetime , we poll bitly for the first time someone shortened the uri , topsy for the first time someone tweeted the uri , a memento aggregator for the first time it appeared in a public web archive , google s time of last crawl , and the last - modified http response header of the resource itself . we also examine the backlinks of the uri as reported by google and apply the same techniques for the resources that link to the uri . we evaluated our tool on a gold standard data set of 1200 uris in which the creation date was manually verified . we were able to estimate a creation date for 75.90% of the resources , with 32.78% having the correct value . given the different nature of the uris , the union of the various methods produces the best results . while the google last crawl date and topsy account for nearly 66% of the closest answers , eliminating the web archives or last - modified from the results produces the largest overall negative impact on the results . the carbon date application is available for download or use via a web api .
the national basketball association ( nba ) launched a public database in september 2013 containing over 80 new statistics captured by stats llc through their innovative sportvu player tracking camera systems .the cameras capture and record the location of players on the court as well as the location of the ball , and the data are used to derive many different interesting and useful stats that expand greatly upon the traditional stats available for analysis of basketball performance .we can now break down shot attempts and points by shot selection ( e.g. driving shots , catch and shoot shots , pull up shots ) , assess rebounding ability for contested and uncontested boards , and even look at completely new statistics like average speed and distance and opponent field goal percentage at the rim .the availability of such data enables fans and analysts to dig into the data and uncover insights previously not possible due to the limited nature of the data at hand .+ for example , techniques to uncover different positions based on grouping statistical profiles have become increasingly popular .it reflects the mindset of current nba coaches and general managers who are very much aware of the different types of players beyond the five traditional roles , but a recent proposal has received criticism for its unintuitive groupings and inability to separate out the impact of player talent .the nba player tracking data has the ability to differentiate player performance across more dimensions than before ( e.g. shot selection , possession time , physical activity , etc . ) which can provide better ways to evaluate the uniqueness and similarities across nba player abilities and playing styles .additionally , many research methods for basketball analysis rely on estimation of possessions and other stats to produce offensive and defensive ratings . 
with the ability to track players time of possession and proximity to players in possession of the ball through player tracking, we can develop more accurate representations of possessions and better player offensive and defensive efficiency metrics .+ however , the high dimensionality of this new data source can be troublesome as it demands more computational resources and reduces the ability to easily analyze and interpret findings .we must find a way to reduce the dimensionality of the data set while retaining the ability to differentiate and compare player performance .one method that is particularly well - suited for this application is principal component analysis ( pca ) which identifies the dimensions of the data containing the maximum variance in the data set .this article applies pca to the nba player tracking data to discover four principal components that account for 68% of the variability in the data for the 2013 - 2014 regular season .these components are explored in detail by examining the player tracking statistics that influence them the most and where players and teams fall along these new dimensions .+ in addition to exploring player and team performances through the principal components , a simple measure of similarity in statistical profiles between players and teams based on the principal components is proposed .the statistical diversity index ( sdi ) can be calculated for any pairwise player combination and provides a fast and intuitive method for finding players with similar statistical performances along any or all of the principal component dimensions .this approach is also advantageous from the standpoint of scalability .the possibilities to derive new statistics from the player tracking data are endless , so as new statistics emerge , this approach can again be applied using the new and existing data to reconstruct the principal components and sdi for improved player evaluation and comparisons .+ numerous applications in personnel management exist for the use of sdi and the principal component scores in evaluating and comparing player and team statistical performances .two specific case studies are presented to show how these tools can be used to quickly identify players with similar statistical profiles to a certain player of interest for the purpose of identifying less expensive , similarly skilled players or finding suitable replacement options for a key role player within the organization . 
+this article is organized as follows .section [ sec : data ] describes the player tracking data and data processing .section [ sec : pca ] provides the analysis and interpretations for the four principal components in detail , showing how players and teams can be compared across these new dimensions .section [ sec : sdi ] introduces the calculation for sdi and two case studies where principal component scores and sdi are used to find players with similar statistical profiles to all - star tony parker and role player anthony morrow for personnel management purposes .section [ sec : conclusion ] concludes the article with final remarks .currently , there are over 90 new player tracking statistics , and data for all 482 nba players from the 2013 - 2014 regular season are available .separate records exist for players who played for different teams throughout the season , including a record for overall performance across all teams .brief descriptions of the newly available statistics adapted from the nba player tracking statistics website are provided for reference and will be helpful in better understanding the analysis going forward .+ * shooting * + traditional shooting statistics are now available for different shot types : * _ pull up shots _ - shots taken 10 feet away from the basket where player takes 1 or more dribbles prior to shooting * _ driving shots _ - shots taken where player starts 20 or more feet away from the basket and dribbles less than 10 feet away from the basket prior to shooting * _ catch and shoot shots _ - shots taken at least 10 feet away from the basket where player possessed the ball for less than 2 seconds and took no dribbles prior to shooting * assists * + new assist categories are available that enhance understanding of offensive contribution : * _ assist opportunities _ - passes by a player to another player who attempts a shot and if made would be an assist * _ secondary assists _ - passes by a player to another player who receives an assist * _ free throw assists _ - passes by a player to another player who was fouled , missed the shot if shooting , and made at least one free throw * _ points created by assists _ - points created by a player through his assists * touches * + location of possessions provides insight into style of play and scoring efficiency : * _ front court touches _ - touches on his team s offensive half of the court * _ close touches _ - touches that originate within 12 feet of the basket excluding drives * _ elbow touches _ - touches that originate within 5 feet of the edge of the lane and the free throw line inside the 3-point line * _ points per touch _ - points scored by a player per touch * rebounding * + new rebounding statistics incorporate location and proximity of opponents : * _ contested rebound _ - rebounds where an opponent is within 3.5 feet of the rebound * _ rebounding opportunity _ - when a player is within 3.5 feet of a rebound * _ rebounding percentage _ - rebounds over rebounding opportunities * rim protection * + when a player is within 5 feet of the basket and within 5 feet of the offensive shooter , opponents shooting statistics are available to measure how well a player can protect the basket . *speed and distance * + players average speeds and distances traveled per game are also captured and broken out by offensive and defensive plays. 
only players who played at least half the 2013 - 2014 regular season , 41 games , are included in this analysis .this restriction is made to reduce the influence of player statistics derived from only a few games played .also fields containing season total statistics and per game statistics are dropped from the analysis since they could be influenced by number of games and minutes played throughout the season . instead , per 48 minutes , per touch , and per shot statistics are used .the final data set contains 360 player records each containing 66 different player tracking statistics .with numerous player tracking statistics already available and the potential to develop infinitely many more , it is increasingly difficult to extract meaningful and intuitive insights on player comparisons .now that the data are available for more granular and detailed comparisons , a methodology is needed that can analyze the entirety of the data set to extract a handful of dimensions for comparisons .these dimensions should be constructed in a way that ensures optimality in differentiating players ( i.e. dimensions should retain the maximum amount of player separability possible from the original data ) and can be understood in terms of the original statistics .+ principal component analysis ( pca ) , developed by karl pearson in 1901 and later by hotelling in 1933 , is a particularly well - suited statistical tool that can accomplish this task through identifying uncorrelated linear combinations of player tracking statistics that contain maximum variance .interested readers can find a brief technical introduction to pca in appendix [ app : pcaintro ] .components of high variance help us to better differentiate player performance in these directions in hopes that the majority of the variance will be contained in a small subset of components .this simple and intuitive approach to dimension reduction provides a platform for player comparisons across dimensions that best separate players by statistical performance and can be implemented without expensive proprietary solutions , providing more visibility into how the method works at little to no additional cost .pca is sensitive to different variable scalings in the original data set such that variables with larger variances may dominate the principal components if not adjusted . 
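a minimal sketch of the preprocessing and decomposition steps follows, assuming the 360-player by 66-statistic table is held in a numpy array; scikit-learn is used here for convenience, and the text does not specify the original tooling.

....
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# X: shape (360, 66) -- one row per player, one column per tracking statistic.
# a random placeholder stands in for the real data matrix.
X = np.random.rand(360, 66)

Z = StandardScaler().fit_transform(X)   # each statistic: mean 0, variance 1

pca = PCA().fit(Z)
explained = pca.explained_variance_ratio_
print(explained[:4].sum())              # variance captured by the first 4 pcs

loadings = pca.components_[:4]          # (4, 66) loading coefficients
scores = pca.transform(Z)[:, :4]        # (360, 4) player scores on pc 1..4
....

with the real data matrix in place of the placeholder, the printed value should sit near the 68% figure reported for these four components.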
to set every statistic on equal footing, all statistics are standardized with mean 0 and variance 1 prior to conducting the analysis .+ pca is most useful when the majority of the total variance across variables are captured by only a few of the principal components , thus the dimension reduction .if this is the case for the nba player tracking data set , it means that we are able to retain the ability to differentiate player performance without having to operate in such a high dimensional space .figure [ fig : scree ] shows the variance captured by each principal component .note the variance captured in the first principal component is very high and decreases drastically through the first four components .after that , the change in variance is relatively flat , forming an elbow shape in the plot .this means that the variances captured by the fifth component onward are very similar and much smaller than the first four components .moreover , the first four components capture 68% of the variance across the original variables , so we utilize these four components going forward to analyze and compare player performance and playing styles .each principal component ( pc ) is a linear combination of the original variables in the dataset .there is a vector for each principal component containing the coefficients associated with each of the variables in the original data set and are called _ loading vectors_. these describe the influence of each variable for each principal component and are used to interpret these new dimensions in terms of the original variables .figure [ fig : pcload ] plots the categorized loading coefficients for the four principal components and is explored in detail in the following sections .variables can have a positive or negative contribution to the principal component . while the sign is arbitrary , understanding which variables contribute positively or negatively can help with interpreting the principal components .the most important statistics for each component are presented in the following sections , but tables containing all loading coefficients for variables contributing significantly to the principal components are also available in appendix [ app : loadings ] for more details .+ each player can then be given a set of pc scores by multiplying the standardized statistics by their corresponding loading coefficients and then taking a sum ( see appendix [ app : pcaintro ] for details ) .figure [ fig : pcscore ] contains plots of the pc scores for all players with a select few noted for illustration . using the loading and score plots here, we can begin to understand and interpret what these new dimensions are capturing and use them for player comparisons .the first principal component accounts for the most variation , 42% of the total variance .table [ tab : pc1top ] lists statistics with highly positive and negative loadings for pc 1 , and refer back to figure [ fig : pcload ] for categorized pc 1 loadings for all statistics .these are used to better understand the meaning of the scores along this dimension .players with positive scores for pc 1 are able to secure rebounds of all kinds and are responsible for defending the rim and close shots .notable examples are andre drummond , deandre jordan , and omer asik . 
while players with negative scores for pc 1 drive the basketball to the hoop and take pull up and catch and shoot shots often , which implies they tend to be outside players .these players also tend to possess the ball more often and generate additional offense through assists .examples here are stephen curry , tony parker , and chris paul .pc 2 accounts for another 12% of the total variance and table [ tab : pc2top ] lists statistics with highly positive and negative loadings . also refer to figure [ fig : pcload ] for categorized pc 2 loading coefficients .players with positive pc 2 scores generate offense mainly through assists and driving shots .these players tend to possess the ball often and either kick the ball to teammates for shot attempts or drive the ball to the basket .many point guards fall into this category with examples like ricky rubio , tony parker , and chris paul .players with negative pc 2 scores provide offense primarily through catch and shoot shots and are very efficient scorers , especially from behind the 3-point arc .primary examples are klay thompson , kyle korver , and anthony morrow .table [ tab : pc3top ] lists statistics with highly positive and negative loadings for pc 3 which explains 9% of the total variance .also refer to figure [ fig : pcload ] for categorized pc 3 loading coefficients .players with positive pc 3 scores are extremely quick on both sides of the ball and cover a lot of ground while on the court .some examples are ish smith , shane larkin , and dennis schroder .players with negative pc 3 scores are largely responsible for scoring when on the court and provide a significant amount of offensive production per 48 minutes .scoring and rebounding efficiency characterize many of the superstars in the nba with players like kevin durant , carmelo anthony , and lebron james touting highly negative pc 3 scores .table [ tab : pc4top ] lists statistics with highly positive and negative loadings for pc 4 which accounts for another 4% of the total variance .also refer to figure [ fig : pcload ] for categorized loading coefficients for pc 4 .this component is characterized by players tendencies when they receive possession of the ball .players with positive pc 4 scores tend to pass or convert catch and shoot shots when the ball goes their way ( e.g. kevin love , spencer hawes , and patty mills ) while players with negative pc 4 scores tend to drive the ball and score efficiently when they get touches ( e.g. tyreke evans , rodney stuckey , and tony wroten ) . not only can we characterize players by principal components , but teams can also be profiled along these new dimensions as well .there are numerous ways to aggregate the player pc scores to form a team - level score , but here a simple weighted average is used .the team pc score , , can be found by taking an average of the pc scores across the players weighted by the minutes played throughout the season , .figure [ fig : pcscoreteam ] shows the distribution of all nba teams across these dimensions as well as their corresponding 2013 - 2014 regular season winning percentage .this view is useful in seeing the differences and similarities in team playing styles and how they impact success .+ + for example , the new york knicks had an extremely negative pc 2 score .further investigation shows it is partially the result of catch and shoot offense from j.r .smith , andrea bargnani , and tim hardaway jr . 
who were all in the top 50 in catch and shoot points per 48 minuteshowever , another major factor is that 8 of the 12 new york players were below average in passes per 48 minutes ( average was 58 passes per 48 minutes ) which is indicative of poor ball movement .all - star carmelo anthony is in this group and has long been labeled a `` ball hog'' which is supported by his below average passing and above average number of touches and scoring .in fact , anthony s top 10 performance in points per 48 minutes , points per touch , and rebounding efficiency helped earn his team the most negative pc 3 team average score .note that team average pc 3 score is negatively correlated with winning percentage , yet the knicks won only 37 games and failed to make the playoffs . to better understand how these team average pc scores impact winning , table [ tab : reg ] contains the results from a multiple linear regression analysis on winning percentage .note that negative pc 3 scores are highly correlated with winning while positive pc 2 and pc 4 scores are also highly correlated with winning .negative pc 3 scores are associated with high average scoring and rebounding efficiency .referring back to tables [ tab : pc2top ] and [ tab : pc4top ] , passes , touches , and assists contribute positively to pc 2 and pc 4 scores .regarding new york s lackluster season , it seems that anthony s great offensive production was not enough to offset the negative impact of extremely poor passing and ball movement . | the release of nba player tracking data greatly enhances the granularity and dimensionality of basketball statistics used to evaluate and compare player performance . however , the high dimensionality of this new data source can be troublesome as it demands more computational resources and reduces the ability to easily analyze and interpret findings . therefore , we must find a way to reduce the dimensionality of the data set while retaining the ability to differentiate and compare player performance . in this paper , principal component analysis ( pca ) is used to identify four principal components that account for 68% of the variation in player tracking data from the 2013 - 2014 regular season and intuitive interpretations of these new dimensions are developed by examining the statistics that influence them the most . in this new high variance , low dimensional space , you can easily compare statistical profiles across any or all of the principal component dimensions to evaluate characteristics that make certain players and teams similar or unique . a simple measure of similarity between two player or team statistical profiles based on the four principal component scores is also constructed . the statistical diversity index ( sdi ) allows for quick and intuitive comparisons using the entirety of the player tracking data . as new statistics emerge , this framework is scalable as it can incorporate existing and new data sources by reconstructing the principal component dimensions and sdi for improved comparisons . using principal component scores and sdi , several use cases are presented for improved personnel management . team principal component scores are used to quickly profile and evaluate team performances across the nba and specifically to understand how new york s lack of ball movement negatively impacted success despite high average scoring efficiency as a team . sdi is used to identify players across the nba with the most similar statistical performances to specific players . 
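a sketch of the team-level aggregation and of the regression on winning percentage is given below; the per-player pc scores, minutes played, team labels and team winning percentages are assumed to be available in pandas dataframes, and all column names are hypothetical.

....
import numpy as np
import pandas as pd
import statsmodels.api as sm

def team_scores(players):
    """minutes-weighted average of each pc score within every team.

    players: dataframe with columns team, minutes, pc1, pc2, pc3, pc4.
    """
    cols = ["pc1", "pc2", "pc3", "pc4"]
    rows = {}
    for team, g in players.groupby("team"):
        w = g["minutes"].to_numpy()
        rows[team] = [np.average(g[c], weights=w) for c in cols]
    return pd.DataFrame.from_dict(rows, orient="index", columns=cols)

def fit_win_pct(players, teams):
    """ols of regular-season winning percentage on the four team pc scores.

    teams: dataframe indexed by team with a win_pct column.
    """
    X = sm.add_constant(team_scores(players))
    y = teams.loc[X.index, "win_pct"]
    return sm.OLS(y, X).fit()

# model = fit_win_pct(players, teams); print(model.summary())
....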
all - star tony parker and shooting specialist anthony morrow are used as two examples and presented with in - depth comparisons to similar players using principal component scores and player tracking statistics . this approach can be used in salary negotiations , free agency acquisitions and trades , role player replacement , and more . * keywords : * principal component analysis , nba player tracking data , statistical diversity index , dimension reduction , personnel management , national basketball association |
correlated percolation is a useful theoretic model in statistical physics .it provides us with fundamental understanding of spread processes of message , disease and matter in nature and society . linking probability between any two nodes in it takes the form of , where is -dimensional distance between the nodes , and is a positive real number , namely , distance - decay exponent of links .weinrib and halperin analytically studied whether the correlations change the percolation behavior or not .weinrib pointed out , for , the correlations are relevant if , where is the percolation - length exponent for uncorrelated percolation ; while for the correlations are relevant if .it is a generalization of the harris criterion appears earlier .recently , network models referring to correlated percolation have gradually appeared .achlioptas process(ap) for link - adding networks , which is an attractive topic at present , could be viewed as a new kind of correlated percolation if we put all nodes uniformly on a two - dimensional(2d ) plane .starting from a set of isolated nodes , two candidate links are put to nodes randomly at every time step , but only the link with smaller product is retained , where (or ) is the mass ( the number of nodes ) of the cluster that node i(or j ) belongs to , which is called product rule(pr ) . a link chosen with pr from both inter - cluster candidates is called a type - i link .when two candidate - links are of different types , i.e. , one is an inter - cluster link , the other is a intra - cluster one , always the later is retained , and it is called a type - ii link . while for both intra - cluster ones , the retained link is arbitrarily chosen no matter they are in the same or different clusters , and it is called a type - iii link . generally speaking , in the way of ap , network percolationis inhibited , which postpones the appearance of the threshold at which a giant component g starts to grow , and results in a sharp growth of g called an explosive percolation . in our point of view , if we put ap on a 2d plane , it gives rise to a new mechanism of long - range correlation for the nodes based on co - evolutionary growing masses of components they connected .the selective rule for topological links relies on mass - product instead of 2d geometric length of them , which prevents the property exhibited in the previous correlated percolation . andcorrelation feature in ap - type of percolation has not been revealed up till now . a recent model of ours based on the observation of phenomena in different real systems describes another kind of correlated percolation in growing networks , which could be viewed as a overlapping of traditional correlated percolation with ap in a 2d space .the link - occupation function in the model takes the form which looks like newton s gravity rule .it resumes the classical erdos - renyi(er ) random graph model when exponent , and it gives another extreme of ap when .different properties of such kind of new correlated percolation are expected , since er random graph grows without any bias , ap takes a strong bias to inhibit network percolation independent of geometric distance , while the new model with gravity - like rule have some distance - related relax on such bias , which produces a new type of correlated percolation . 
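one achlioptas step with the product rule and the type-i/ii/iii bookkeeping described above can be written compactly with a union-find structure that tracks cluster masses. the following is a simplified illustration, not the authors' code.

....
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.mass = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
            self.mass[ra] += self.mass[rb]

def achlioptas_step(uf, n):
    """pick two candidate links, keep one by the rules above, return its type."""
    e1 = random.sample(range(n), 2)
    e2 = random.sample(range(n), 2)
    inter = [uf.find(e[0]) != uf.find(e[1]) for e in (e1, e2)]
    if all(inter):                       # type-i: product rule decides
        prod = [uf.mass[uf.find(e[0])] * uf.mass[uf.find(e[1])]
                for e in (e1, e2)]
        keep, kind = (e1 if prod[0] <= prod[1] else e2), "I"
    elif any(inter):                     # type-ii: the intra-cluster link wins
        keep, kind = (e1 if not inter[0] else e2), "II"
    else:                                # type-iii: arbitrary choice
        keep, kind = random.choice((e1, e2)), "III"
    uf.union(keep[0], keep[1])
    return kind
....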
in this paper , we report the simulation results on objective competition between type - i , ii and iii links in both gravity model and ap model , and we point out a new mechanism to support critical points , which bears the scaling relations revealed in our recent work .different saturation effect is manifested , which distinguishes it from the traditional correlated percolations .suppose isolated nodes are uniformly scattered on a 2d plane . for convenience of calculating distance ,the plane is discretized with a triangular lattice , each minimal edge with the length of two units for the convenience of algorithm .each vertex of the triangles is occupied by a node so that we exclude all possible biases except link - adding rules . for any two pairs of nodes and possibly with the same product , a type - i link connects the the pair with longer distance if both the links ends hit the nodes belonging to different clusters ; while a type - ii link connects the nodes inside the same cluster if the other one is an inter - cluster link ; a type - iii one connects arbitrarily chosen pair of nodes if both candidate links are intra - cluster ones .parallel to pr , we pick randomly two pairs [ and of nodes in the plane at every time step . for the pair ( and for likewise ) , we calculate the generalized gravity defined by , where is the geometric distance between and , and is an adjustable decay exponent .once we have and , we have two choices in selecting which pair to connect . for the case of the maximum gravity strategy ( we call it ) we connect the pair with the larger value of the gravity , e.g. , the link is made if and the link otherwise .we also use the minimum gravity strategy ( ) in which we favor the smaller gravity pair to make connection .the two strategies , and , lead the link - adding networks to evolve along the opposite paths of percolation processes . generally speaking , facilitates the percolation process , whereas inhibits it .all such generalized gravity values are calculated inside the circular transmission range with the radius centered at one of nodes and as the speaking node in a mobile ad hoc network . for the different limits of parameters and , we have three cases in the model .case i : with the transmission range , we have a generalized gravitation rule which is an extension of widely used gravitation model with the tunable decaying exponent .case ii : with the exponent , we assume that node pairs can be linked with pr topologically inside the transmission range with a limited radius .case iii : with both limited values of radius and exponent , we have the gravity rule inside the transmission range .it can describe the communication or traffics with constrained power or resources . for casei and case iii in the model , three scaling relations have been found with large scale simulations .when strategy is adopted in 2d free space(case i ) , we have where is dimensionless time - step with , is the decay exponent of connection probability . , , and is a universal function . when strategy is adopted inside the transmission range with radius (case iii ), we have \ ] ] for certain parameter ranges of and , where , , , , , and is a universal function .in addition , when strategy for case iii is adopted inside transmission range defined by , we have another scaling relation for , where , , , , and is a universal function . to understand three scaling relations above , we should look into the mechanism of the evolution processes underlying . 
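the gravity selection itself is a small modification of the same step: each candidate pair is weighted by its geometric distance as well as by the cluster masses. a rough sketch follows, reusing the union-find structure sketched above and assuming node coordinates on the plane are stored in pos; the transmission-range restriction of cases ii and iii is omitted for brevity.

....
import math
import random

def gravity(uf, pos, pair, a):
    """g = m_i * m_j / r**a for one candidate pair of nodes."""
    i, j = pair
    mi = uf.mass[uf.find(i)]
    mj = uf.mass[uf.find(j)]
    r = math.dist(pos[i], pos[j])
    return mi * mj / r ** a

def gravity_step(uf, pos, a, strategy="Gmax"):
    """one link-adding step: compare two random candidate pairs by gravity."""
    n = len(pos)
    e1 = random.sample(range(n), 2)
    e2 = random.sample(range(n), 2)
    g1, g2 = gravity(uf, pos, e1, a), gravity(uf, pos, e2, a)
    if strategy == "Gmax":
        keep = e1 if g1 >= g2 else e2    # favour the larger gravity
    else:                                # "Gmin": favour the smaller gravity
        keep = e1 if g1 <= g2 else e2
    uf.union(keep[0], keep[1])
    return keep
....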
to see what happens in such critical points , and what are particular of them in certain link - adding processes , we count the temporal link fractions , and calculate the average lengths of links which is defined as the summation of all lengths of links for a certain type over its number in a window time - steps . by observation of the time - dependent behaviors of fractions of type - i , ii and iii links ,new properties were found out for our gravity - like model together with ap producing explosive percolations .all simulations are carried out on the triangular lattice of the size with and , respectively .we simulate either of strategy or for either case i or iii .the total number of links equating to that of time - steps is divided by , which is defined as .the mass of the largest component divided by makes up the observable , the node fraction of the largest component .all results presented in this work are obtained from 5000 different realizations of network configurations with if not specially indicated . inspired by cho and kahng s work and a referee of ref., we have gone further by calculating fractions of three types of links and arithmetic average lengths of their links .our attention was pointed at ap first . in fig.1we illustrated the evolution of fractions of 3 types of links .just at the threshold the fraction of type - i links has a sharp drop - down , meanwhile that of type - ii shoots up , crossing at . a little after it , crosses with growing fraction of type - iii links ( ) at the level , while gets its summit( ) at the same point , which has not been concerned by previous works . however , it is this property that pervades all cases in the present correlated percolation . in fig.2 , the average lengths of type - ii( )merges that of type - iii( ) at after an abrupt growth , starts to grow earlier than .the level of for both of them keep invariant for , while starts to decrease from .we see from both the figures that in explosive percolation the system undergoes a sharp transition from a type - i link dominant phase into a type - ii and iii dominant phase at .besides , average lengths undergo a parallel transition at the same point .actually , 3 levels of go to infinity in dynamic limit from finite size scaling transformation(not shown ) . and probability decay exponent in case i ( ).,width=336 ] of type - i , type - ii and type - iii links with and probability decay exponent in case i.,width=336 ] of the largest component with strategy and the same parameters in fig.3.,width=336 ]now we turn to the fraction of 3 types of links in case i of the present gravity model . with strategy in free 2d space , we have scaling relation ( 1 ) for distance - decay exponent ] collapse into the universal function very well , with that for barely collapsing onto it .but those for and do not behave well in collapse .the separation from others at the turning middle part indicates the deviation of their from with which others share . in the description of the average lengths for case i with , simulated results in fig.4 with all values demonstrate the same steady level ( for ) .variation of parameter only shifts starting points of up - growing and as increases .the saturation effect of large decay exponents appears clearly and is shown by dash lines , which demonstrates the inheritance from traditional correlated percolation . in this case with , special level of fractions at cross point of and keeps 0.25 just as in ap without any distance - decay included , so does at hiking its summit . 
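The time-resolved curves discussed here, namely the fractions f_I, f_II, f_III and the arithmetic average lengths of the three link types within a window of Delta-t time steps, amount to simple bookkeeping over the recorded sequence of retained links. A minimal sketch follows; the record format (one (type, geometric length) tuple per time step) and the window size are our own conventions.

from collections import Counter

def windowed_link_stats(record, dt):
    """record: list of (link_type, link_length) tuples, one per time step.
    Yields, for each window of dt steps, the fractions and average lengths per type."""
    for start in range(0, len(record) - dt + 1, dt):
        window = record[start:start + dt]
        counts = Counter(t for t, _ in window)
        fracs = {t: counts.get(t, 0) / dt for t in ("I", "II", "III")}
        avg_len = {}
        for t in ("I", "II", "III"):
            lengths = [l for typ, l in window if typ == t]
            avg_len[t] = sum(lengths) / len(lengths) if lengths else float("nan")
        yield start + dt, fracs, avg_len

toy_record = [("I", 2.0), ("II", 4.0), ("I", 2.0), ("III", 6.0)] * 50
for step, fracs, lengths in windowed_link_stats(toy_record, dt=100):
    print(step, fracs, lengths)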
of the system with in case i. and .(b)percolation thresholds for and with in case i.,width=336 ] of type - i , type - ii and type - iii links with strategy in case i. parameters are the same as in fig.6,width=336 ] as in a usual way , we determine the critical point of percolation by observation of tips of susceptibility ( fig.6 ) . comparing in fig.6 with in fig.4, we find that these approximately hit the horizontal coordinates of middle point of growing fraction of type - ii links , which means that is the transition point from the inter - cluster - link dominant phase to the intra - cluster - link dominant phase .besides , we have another ( sub)-critical point which is in certain range independent of decay exponent in gravity model , and indicates the balance between the fractions of type - i and type - ii links , yielding a new scaling behavior of in formula ( 1 ) not revealed by previous works .moreover , the steady level is always , type - independent which takes the inherited value of that in ap . actually , 131.5 is the value for l=128 only .we have size effect since a free boundary condition instead of a periodic one is adopted .the finite size effect is shown in fig.7 which gives that , i.e. , where is the number of nodes on the 2d plane .hopefully it goes towards infinity in thermodynamic limit .however , the finite size effect of ( fig.8 for an example ) is not strong enough for us to identify the scaling exponents as usual .therefore , we can check the validation of scaling laws presented by weinrib for correlated percolation in the present model only by rescaling susceptibility . fig . 9illustrates the results of it for examples and , respectively .we have scaling relation of the largest component with strategy in case i. ( a) ; ( b) . and .,width=336 ] in case i for ( a ) ; ( b ) . and .,width=336 ] where , , and is a universal function . with these values of scaling exponents , the scaling law is not applicable to the present model , where is correlation - length exponent in the long - range case .with power - law form for the correlation function g(r ) , weinrib had derived the extended harris criterion : the long - range nature of the correlations is relevant if , which means the correlations change the percolation critical behavior .it has been violated since now they all behave differently from traditional short range percolation in a 2d triangular lattice ( ) and the correlations are relevant no matter is less(fig.9(a ) ) or larger(fig.9b ) than zero .this is because in strategy we have overlapped the power - law correlation function g(r ) with ap which is another kind of autocorrelation process with positive feed back effect of mass - growing .and probability decay exponent and in case i. .,width=336 ] and probability decay exponent and in case i. .,width=336 ] for the strategy which prefers smaller gravity , it tends to retain a longer link under the comparison of the same product of masses .that is to say , long range links have predominance .evolution of and links do not cross at any common point . 
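For reference, the susceptibility-based location of the threshold used throughout these figures can be sketched as follows. The explicit estimator of the susceptibility is not reproduced in the text, so the snippet assumes one common choice, the fluctuation of the largest-component fraction G over realizations, chi(t) = N (<G^2> - <G>^2), and simply reads off the position of its peak; any other estimator can be substituted without changing the peak-finding step.

import numpy as np

def susceptibility(G_samples, N):
    """G_samples: array of shape (realizations, recorded_steps) with the largest-
    component fraction G; returns chi(t) = N * Var[G(t)] (assumed estimator)."""
    return N * G_samples.var(axis=0)

def threshold_from_peak(G_samples, t_values, N):
    chi = susceptibility(G_samples, N)
    return t_values[int(np.argmax(chi))], chi

# toy data mimicking a sharp transition near t = 1 with fluctuations peaked there
rng = np.random.default_rng(1)
t = np.linspace(0.5, 1.5, 201)
mean_G = 1.0 / (1.0 + np.exp(-40.0 * (t - 1.0)))
samples = mean_G + 0.02 * np.exp(-80.0 * (t - 1.0) ** 2) * rng.standard_normal((300, t.size))
t_c, chi = threshold_from_peak(samples, t, N=128 * 128)
print("estimated threshold:", round(float(t_c), 3))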
in fig.10 ,the cross points for and shift leftward from of ap as decay exponent increases , which means that as a correlated percolation mechanism weaken the explosive effect caused by ap .but the starting position of fraction - ii still provides hints of thresholds .however , it is hard to locate a common cross point for and in a range of .therefore , we have no scaling relation for them .the steady average lengths of links ( fig.11 ) have the same level of and ap cases , and size - effect ( not shown ) also tells the divergence of , but all of them do not tell any possible hint for critical points. generally speaking , strategy facilitates longer links for certain geometric distribution of clusters or nodes . in the evolution ,strategy emphasis the assignments for different types of links , encourage longer and intra - cluster links .humps above the steady and implies out - of - pace growing of link lengths of .that is , surpasses the growing speed of the giant component . here, it is the geometric distance - dependent strategy that makes alleviate effect of ap . with smaller ( e.g. , a=0.2 in fig.11)the strategy has the opportunity to exhaust long links before ; while with middle values of ( e.g. , and ) it may take longer time to exhaust them . however , with too large ( e.g. , ), fall off quicker than the natural dimension , we can only see the ap - type short - range effect of saturation . in this limit ,i.e. , , percolations are no longer relevant , which causes saturation of curves in fig.1b .however , we should not expect the short - range percolation exponent of correlation length here for 2d triangular lattice , since ap has been included in the strategy . and probability decay exponent and in case iii ..,width=336 ] and probability decay exponent and in case iii ..,width=336 ] with strategy , and in case iii ..,width=336 ] with strategy , and in case iii ..,width=336 ] and probability decay exponent and in case iii ..,width=336 ] the distinct feature in case iii for both and is that the candidate links are selected not only by comparing gravities , but also constrained inside a transmission range ( in geometric distance ) , which ruins the effect comes from the divergence of average link lengths .it is well known that all possible singularities at critical points come from the singularity of correlation length .however , here no length could goes to infinity in any way , which ruins possible common cross point relies on the balance between and as in the 2d free space ( case i of the present model)and induces the possibility to yield novel scaling relation other than any previous ones . for possible critical point , we seek help from the evolution of link fractions of 3 types .fig.12 shows the behaviors of , and with for all simulated distance - decay exponents and .correspondingly , fig.13 shows the behaviors of , and the the same set of parameters .critical point ( to be precise ) distinguishes itself from others by intuitive observation . the cross point for and go rightward from which others share , which means they could not share the same hence the same scaling relation with other decay exponents . 
besides , in rescaling process for , curves for and failed in collapse , because smaller transmission range inhibits the effect of slower(long - range ) decay for connection probability .the rescaled function for is shown in fig.14 .it seems that we could go further with to include more exponents in the scaling as shown in fig.15 .however , it is meaningless in physics due to above mentioned reasons . actually , scaling behaviors are r - dependent , but exponents and need not to vary .the variation of only shifts as illustrated in fig.16 for . therefore , we keep , for all values of with , but take for , for , and so on .the humps above the steady level of and (all independent of exponents ) come from similar mechanism as in free 2d space but now at much lower level constrained by transmission radius , and they are independent of size of the system . and probability decay exponent for and in case iii . .500 realizations of network configurations.,width=336 ] and probability decay exponent for and in case iii.,width=336 ] and probability decay exponent for and in case iii .500 realizations of network configurations.,width=336 ] and probability decay exponent for and in case iii.,width=336 ] with strategy , for and in case iii. .500 realizations of network configurations.,width=336 ] with strategy , for and in case iii.,width=336 ] scaling relation ( 3 ) for case iii with strategy inside transmission range with radius is checked for various decay exponents and for different sizes ( l=32 , 64 , 128 and 256 ) .its validation is independent of size simulated . in fig.17 and fig.18 ,the evolution of , and for both and behave much similarly . and cross at the level a little bit lower than , while crosses at the level , which keeps the same as in all previous cases .the changes of exponent and only shift fractions along horizontal direction of figures , i.e. , to change starting points and growing / dropping speed instead of levels of them .however , keeps as their common fixed point for and to cross . in fig.19 and fig.20 , and for both and at the same level hitting , which distinguishes this point from totally 3 cross points , and makes up a candidate of critical point for scaling relations .the scaling exponents and have been checked for lower values of parameter ( , fig.21 ). however , for , we have to choose a new set of exponents : and (fig.22 ) .it is not strange that steady levels of keep unchanged for certain , independent of or , just as that with in case iii . inside a circle defined by can not go to infinity under any circumstance , but still support a critical point , which distinguishes correlated percolation in case iii from case i and traditional models .it deserves further investigation .in this paper , we have proposed a new network model of correlated percolation in which geometric distance - dependent power - law decay connection probability overlaps achlioptas process to form a gravity model .it can be tuned to facilitate or inhibit percolation with strategy or , cover a wide range of thresholds , yield a set of new scaling relations . and it provides a scheme for better description of practical processes in complex systems . we have developed a new approach to find out candidate critical points with physical meanings other than that of traditional ones .there are objective competition and balance between type - i and type - ii , type - i and type - iii links , meanwhile , competition of average lengths between type - ii and type - iii links . 
along this line threshold found to overlap the balance point between factions and in the explosive percolation of achlioptas process , and the steady average lengths of three types of links are all divergent to infinity in thermodynamic limit .the percolation is indeed a transition from type - i link dominant phase to type - ii and type - iii dominant phase . by observing evolutions of fractions of type - i , type - ii and type - iii links ,a candidate critical point can be chosen combined with the message on evolutions of average lengths of them . with strategy in 2d triangular lattice ,fraction get balance with , makes up a critical point which supports scaling relation ( 1 ) in case i of the model . with strategy inside certain transmission range with radius , a duet balance exists for and meanwhile and , makes up another critical point which supports scaling relation ( 2 ) in case iii . with strategy and certain range of decay exponent ,again a duet balance exists for and meanwhile and , makes up another critical point which supports scaling relation ( 3 ) for a mini - scale of in case iii .this approach serves an assistant tool in seeking critical points of order parameter which is usually not easy to determine in an intuitive way . in numerical calculations , besides percolation threshold , two fixed points , and emerge as distinct points not only for special temporal crux but also for unchanged levels of , and inherited from ap , which is expected to be further proved in analytical ways .however , they have different physical meanings .the former corresponds to a divergent average length of links , while the later corresponds to confined average lengths by transmission range , which distinguishes itself from traditional critical points in percolations .correlated percolations are relevant since long - range correlation drastically changes the critical properties .the validation ranges of decay exponents with various strategies in different cases define the relevance of correlation .they have demonstrated novel scaling relations different from traditional 2d short - range percolation in triangular lattice .the intervention of distance - dependent power - law decay ingredients alleviates the explosive effect of percolation transition by horizontal adjustment of evolutions along temporal axis , separates from , while the overlapped ap included in the present gravity model always conquers the vertical levels of three fractions and average lengths , which are found neither in traditional correlated percolations of continuities in 2d space nor in complex networks .moreover , the node fraction of the largest component , fractions , and average lengths of three types of links all show saturation phenomena as pointed out by weinrib but with different values of exponent of since now ap overlaps in the present gravity model . and scaling law of weinribis no longer obeyed according to the evidence of numerical results of average - length exponents .we are indebt to anonymous referees for stimulating comments .zhu thanks h. park , p. holm , x .- s .chen and z .- m .gu for useful discussion .we acknowledge financial support from national natural science foundation of china ( nnsfc ) under grants no .11175086 , 10775071 and 10635040 .kim was supported by the national research foundation of korea ( nrf ) funded by the korea government ( mest ) under grant no .2011 - 0015731 . c. e. perkins , _ ad hoc networking_. ( addison - wesley , new york , 2000 ) . l. wang _et al_. 
, Phys. Rev. E 78, 066107 (2008). J. Tinbergen, Twentieth Century Fund, New York (1962); P. Poyhonen, Weltwirtschaftliches Archiv, 93 (1963); J. E. Anderson, The American Economic Review 69, 106 (1979). J. H. Bergstrand, The Review of Economics and Statistics (1985). A. V. Deardorff, The Regionalization of the World Economy, Chicago Univ. Press, Chicago, USA (1998). E. Helpman, J. Econ. Lit. 44, 589 (2006). | Motivated by the importance of geometric information in real systems, a new model for long-range correlated percolation in link-adding networks is proposed, with the connection probability decaying as a power law of the distance on the two-dimensional (2D) plane. By overlapping it with the Achlioptas process, it serves as a gravity model that can be tuned to facilitate or inhibit network percolation in a generic way, covering a broad range of thresholds; moreover, it yields a set of new scaling relations. In the present work, we develop an approach to determine critical points for these relations by simulating the temporal evolution of type-I, type-II and type-III links (chosen from two inter-cluster candidates, an intra-cluster candidate compared with an inter-cluster one, and two intra-cluster candidates, respectively) and of the corresponding average link lengths. Numerical results reveal an objective competition between the fractions and the average lengths of the three types of links, and verify that the balance between them occurs at the critical points. Varying the decay exponent or the transmission radius only shifts the temporal pace of the evolution, while the steady average lengths and the link fractions keep the same values as in the Achlioptas process. The strategy with maximum gravity keeps a steady average length, while that with minimum gravity can surpass it. Without the confinement of a transmission range, the average link lengths diverge in the thermodynamic limit, whereas they remain bounded when the range is imposed; nevertheless, both mechanisms support critical points. In two-dimensional free space, the relevance of correlated percolation in the link-adding process is verified by the validation of new scaling relations for various exponents, which violates Weinrib's scaling law.
modern theoretical studies of the influence of dissipation on the propagation of sound on the basis of the navier - stokes equations may be said to have begun with the work of kirchhoff .a principal aim of that and subsequent studies is to determine how the propagation speed and the rate of dissipation of the waves depend on their frequencies . for this problem, the predictions from the standard navier - stokes equations of fluid dynamics do not agree well with experiments when the periods of the sound waves become as short as the mean flight times of the particles of the gas , that is , when we enter the ultrasound regime .there are two directions from which to enter that regime .we can begin with a gas of freely steaming particles and introduce weak interactions among them . in that case, we may with uhlenbeck ask , `` how is it possible to impose on the random motion of the molecules the ordered motion ... which a sound wave represents ? '' in the modern language of dynamical systems theory , this could be seen as a problem of synchronization in which we witness increasing numbers of particles going into cooperative motion until all are engulfed . on the other hand , we may start from the case of continuum mechanics and attempt to extend the validity of that description to the case of longer and longer mean free paths .it is unlikely that in either case we can successfully traverse the full range of possible conditions , but we may expect to encounter an interesting transition between the two regimes . in this paper, we examine how well the fluid dynamical description of paper i of this series extends into the domain where the particle mean free paths are comparable to the characteristic macroscopic length scale of the medium . in paperi , we derived an extension of the fluid dynamical equations that we hope may offer an improvement of this kind and , in the present paper , we study their linear form and the resultant dispersion relation for sound waves . in this first section ,we restate the equations given in i before going on to the straightforward determination of the dispersion relation they imply for the linear theory of sound waves .the basic form of the macroscopic equations derived from kinetic theory , are where is mass density , is the velocity field , is the temperature , is the pressure tensor , is the heat flux vector and the colon stands for a double dot product .we have not included an external force .these equations express newton s laws of motion for a continuum in phenomenological theories and they are a formal consequence of most kinetic theories . where approaches to the derivation of these equations from kinetic theory may differ is in the expressions for the higher moments , and .the derivations from kinetic theory are important since they provide formulas for the transport coefficients that appear in the specific expressions for the pressure tensor and the heat flux . however , not all treatments of the kinetic theory give the same explicit formulas for and , there being differences of degree and style of the approximations used . 
of course , when the mean free path of the constituent particles is sufficiently shortcompared to all macroscopic lengths in the problem , there is no real disagreement , since the standard navier - stokes forms work well enough for most purposes .but when the macroscopic lengths become short and are comparable to the mean free paths of the particles , those standard results do not agree with experiment , as we shall see .therefore we must ask whether there is a continuum approximation that may provide improved treatments of such problems . to test whether the expressions for and derived in paper i from the relaxation model of kinetic theory may fulfill this need , we here apply them to study of the propagation of ultrasound . in the relaxation model , the relaxation time , , may be taken to be of order of the mean flight time of particles , where the mean speed is of the order of the speed of sound . then , we have where is a constant that depends on the collision cross section and the gas constant and we have ignored a possible dependence of the particle cross - section time on particle speed .the results in i are based on an expansion in , up to first order .those expansions led to a pressure tensor , \ , \mathbb{i } - \mu \ , \mathbb{e } \label{x39}\ ] ] where is the gas constant , is the viscosity and .the result for , together with ( [ tau ] ) , implies maxwell s conclusion that viscosity does not depend on density for simple gases . for the heat flux, we obtained where is the conductivity .both ( [ x39 ] ) and ( [ x40 ] ) carry errors of order that are not indicated explicitly .these formulae for and are not expressed explicitly in terms of the fluid fields .rather , their expressions involve some of the same time derivatives of these fields that appear in the fluid equations . here is the central difference between our results and those obtained in the chapman - enskog approach . in the latter, partial derivatives with respect to time are eliminated by the use of lower order results .though we do not use that elimination procedure in deriving the closure relations , we can nevertheless readily recover the navier - stokes results from ours , when , as we described in i. however , even though both theories formally have first - order accuracy in , the results from them are significantly different at knudsen numbers of order unity , as we shall see in what follows .we consider the evolution of perturbations on a uniform medium and define perturbation variables and through the relations where , and are the constant background values of the thermodynamic fields .the perturbation quantities all have small amplitudes with , for example , . from the linearization of ( [ press ] ) we obtainwe further assume that there is no background motion so that is small and needs no subscripts . in analogy with ( [ 1.1 ] )we write for the pressure tensor , the linearized form where where is evaluated for the state variables of the background medium and is small . 
similarly ,since there is no zeroth order heat flux , we get for the linearized heat flux , the quantities and are the viscosity and conductivity evaluated in terms of the state variables for the constant background medium .the linearization of ( [ cont ] ) is a compact form of the linearized ( [ mom ] ) is when we take the divergence of ( [ lmom ] ) and use ( [ lcont ] ) , find that from ( [ ip1 ] ) we find that we see from ( [ e ] ) that then , with the help of ( [ lcont ] ) , we find on using ( [ lcont ] ) again , we find that which we may introduce into ( [ 2mom ] ) .next we define the laplacian speed of sound , , and the kinematic viscosity , , as in we then obtain the dissipative wave equation to complete this discussion , it is useful to introduce the thermal diffusivity where . thus , .this is used in the linearized heat equation , where we may write since , we then find where is the prandtl number of the undisturbed medium .finally , to further simplify the appearance of these formulae , we let the unit of time be and the unit of length be .then our linearized equations for sound waves are for comparison we note that the analogous linear equations for the navier - stokes case ( with zero bulk viscosity ) are these : may seek solutions to the linear equations ( [ 1])-([2 ] ) in which and vary like . since the mean free path is the unit of length , the wave number , which is nondimensional , is effectively the knudsen number for this problem .the dispersion relation is for comparison , we report that the dispersion relation for the navier - stokes equations is to get a feeling for what these results mean , we look at free modes for which is real. then we set where and are also real .when we introduce this into ( [ disp ] ) we find that there is a ( thermal ) mode with and a pair of ( acoustic ) modes whose frequencies satisfy which gives the frequencies of sound waves .as we may confirm , is of order unity for large and it grows in proportion to for small .hence , for both very large and very small , the last term on the right of ( [ freq ] ) is the largest one on that side .so we may write the uniform approximation for small , this gives the phase speed , which is the usual speed of sound for an adiabatic sound wave , as is to be expected for very long wave lengths .for large , we obtain the phase speed . for the n - s equations with zero bulk viscositythere is the same number of modes : a thermal mode with zero frequency and sound waves with as expected , the two sets of equations agree in the limit of very small , where the n - s equations return the phase speed .but for large , the differences between the two theories become qualitative . with the navier - stokes equations, we find that at large , instead of reaching a finite limit , the phase speed is proportional to for large .as we shall see when we look at the experimental results , the n - s prediction is qualitatively wrong ; the phase speed goes to a finite value at large .the equation for the damping rate is \alpha + ( { 10\over \sigma}-1)k^2 \omega^2 - { 5\over \sigma } k^4 = 0 . 
\label{damp}\ ] ] for the thermal mode , for which , we find the damping rates for small and for large .thus , there is very little damping for long waves while short waves are damped on the collisional time scale .moreover , on examination of these two limits , we see that they each emerge from the balance of the same two terms in ( [ damp ] ) .hence we may write the approximate formula as a reasonable approximation to the damping rate for all , in the thermal mode .similarly , in the case of sound waves , we see that is also the result of the balance between the same two terms in ( [ damp ] ) in the limits of for large and small .hence , we find that for sound waves , to good approximation , the damping rate is given by where is given in ( [ freaq ] ) .for long wave lengths , the damping is again slight since it goes to zero like ^2 ] , so that we get the same wave number dependence , but with a different coefficient than is obtained from our equations in the small limit .however , for increasing , the n - s damping rates _grow _ like for sound waves , which is in disagreement with experiment .though the study of free modes in the previous section is intuitively clear , it does not directly represent the way experiments on sound propagation are usually carried out . in the experiments , it is more typical that one drives the fluid at a real , fixed frequency and then studies the propagation of waves in space .the forcing may be accomplished by vibrating the end wall of a tube containing gas at a fixed ( real ) frequency and observing the propagation down the tube . to model this procedure in full detailwould involve a careful treatment of the forcing procedure , which usually requires attention to boundary conditions .however , in this first reconnaissance of the way our equations describe sound waves , we shall adopt a standard theoretical practice and simply fix the wave frequency , , in the dispersion relation and compute the resulting , which will typically be complex .thus , in ( [ disp ] ) we let and we find that the equation for becomes ^2 + 3i\omega^3 = 0 .\label{spatial}\ ] ] we may similarly obtain such an equation for the n - s case , ( [ dispa ] ) . in order to emphasize the results for large knudsen number we plot the results in the manner used , for example , by cercignani .that is , we introduce the quantity where is a normalized inverse propagation speed .the factor is included so that the phase speed is nondimensionalized on the laplacian ( or adiabatic ) speed of sound , rather than the newtonian ( or isothermal ) speed of sound as above. then we find that ^2 + { 5 \sigma i\over 3\omega } = 0 .\label{k4}\ ] ] to see how this representation contains the results for free modes , we note that , in the limit , ( [ k4 ] ) reduces to .that is , for low frequencies , the usual adiabatic sound speed is recovered . in the ( more interesting ) opposite limit , , we obtain for the propagative modes .thus we see that , in the limit of forcing at high frequency , sound waves propagate with phase speeds , independently of frequency .the data shown in the accompanying figure ( fig . [ thefig ] ) confirm this independence of frequency ( or wave number ) of the speed of propagation of ultrasound . the observed nondimensional phase speed is .the prandtl number found from the relaxation model of kinetic theory , either by the methods of chapman and enskog or those described in paper i , is unity . 
with this value ,we obtain for the limiting phase speed , so this represents a small quantitative error . however , the value of found in kinetic theory depends on the atomic model used , that is , on the nature of the collision term .though the relaxation model gives the explicit value unity for , the value found with the traditional boltzmann equation for hard spheres is .this difference has nothing to do with the approximation method ( our procedure gives when applied to the boltzmann equation ) but is a consequence of the nature of the form of the atomic interactions that is adopted .we therefore follow a common practice put the empirical prandtl number into the theoretical results when comparing with experiments .since the experimental data we shall refer to are for noble gases whose values of are or we shall here adopt the value suggested by the boltzmann equation .when we use that value of the prandtl number in evaluating the phase speed , we obtain . even without this adjustment ,the results for the propagation of ultrasound are good , but we would propose to anyone thinking of using our equations from this first - order development from the relaxation equation to introduce this phenomenological improvement of the theory . in the accompanying figure ( fig .[ thefig ] ) , we show the variation of as a function of from a number of sources .the experimental values ( ) are indicated as individual points ( the diamonds ) and they appear to be tending toward a nonzero constant value at high frequency .this is qualitatively in accord with our results , here shown as a solid line for the case of , and it is in stark disagreement with the prediction from the navier - stokes equations ( long dashes with double dots ) , which predict that goes to zero like . since our limiting value for found to be , the remarkable agreement of our results with experiment owes something to our using the experimental value of for for limiting value .nevertheless , even without this choice , the results would be adequate and comparable to those shown for the moment method ( ) with moments ( short dashes ) .other theoretical studies of ultrasound are based on direct solution of the boltzmann equation and we show the results of sirovich and thurber obtained in this way ( medium dashes ) , for which the prandtl number automatically has the value .in the study of the thermal damping of sound waves by electromagnetic radiation , one finds that , for thermal times much less than the acoustic period , sound propagates at the isothermal speed of sound with negligible dissipation . in the opposite limit of long thermal times , there is also little dissipation , but propagation is at the adiabatic speed of sound .the experiments , and the solution of the boltzmann equation show similar behavior when the relevant parameter is the ratio of the collisional relaxation time to the acoustic period .our equations , as well as those of the moment method ( with tens of thousands of moments ) , reproduce this behavior but the navier - stokes equations do not .moreover , when the prandtl number is chosen to be that of the experimental gas , the quantitative agreement becomes very good . in the next installation of this series , we shall compute the profile of a stationary shock wave . as we shall see , the agreement with the experiments is good in that case too . | the equations of fluid dynamics developed in paper i are applied to the study of the propagation of ultrasound waves . 
There is good agreement between the predicted propagation speed and experimental results for a wide range of Knudsen numbers.
In recent years, different methods have been proposed to map the structure and underlying dynamics of a given time series into an associated graph representation, with the aim of exploiting the modern tools of network science in the traditional task of time series analysis, thereby building a bridge between the two fields. In this context, visibility graphs have been proposed as a tool to extract a graph from the relative positions of the samples of an ordered series, from which several graph features can be extracted and used for description and classification problems. Very recently we advanced the concept of sequential visibility graph motifs, building on the idea of network motifs to explore the decomposition of visibility graphs into sequentially restricted subgraphs. These motifs induce a graph-theoretical symbolization of a given time series into a sequence of subgraphs. We have shown that the marginal distribution of the motif sequence, the so-called motif profile, is an informative feature for describing different types of dynamics and is useful in the task of classifying empirical time series. For large classes of dynamical systems, we were able to develop a theory to analytically compute the frequency of each motif when motifs are extracted from a so-called horizontal visibility graph (HVG), a modified and simpler version of the original (natural) visibility graph (VG) which has often shown analytical tractability. As a matter of fact, obtaining analytical insight in the case of VGs has proved to be a challenging task and, with few exceptions, most of the works that make use of this statistic are computational. In this paper we bridge this gap and advance a theory to analytically compute the complete motif profile in the natural case (VG motifs). We focus on motifs of size n = 4, as this was shown to be the simplest case that gives nontrivial results. We validate this theory by deriving explicit motif profiles for several classes of dynamics, which we show to be in good agreement with numerical simulations. We also study the robustness of this feature when the time series is short and polluted with measurement noise, and compare its performance with that of HVG motifs. The rest of the paper is organized as follows: after recalling the definitions of natural and horizontal visibility graphs, in Section II we present the concept and main properties of sequential visibility graph motifs, and recall the theoretical framework within which the motif profile of the horizontal version was derived. In Section III we focus on natural visibility and develop the theory to compute analytically the motif profile associated with processes whose dynamics are either bounded or unbounded. We test this theory by assessing its predictions for different dynamical systems, and we also show that white noise with different marginals can be distinguished using the natural version instead of the horizontal one. In Section IV we show that the visibility graph motif profile is a robust feature in the sense of (i) having a fast convergence to its asymptotic values for short series and (ii) being robust against contamination with measurement noise (white and colored).
in section v we conclude .let be a real - valued time series of data .the natural visibility graph ( vg ) extracted from the series is the graph where each datum in the series is associated to a node ( thus and is a totally ordered set ) and an edge between node and node exists if ] is defined as the set of all the possible sub - graphs with consecutive vertices along the hamiltonian path of a vg ( similarly , the set of hvg motifs of size is the set of all the admissible sub - graphs with consecutive vertices along the hamiltonian path of a hvg ) . accordingly ,sequential vg motifs are also visibility graphs . for , there are in principle a total of possible motifs ( see table [ tab:1 ] for an enumeration ) , although as we will show below the number of admissible ones is just 6 . given a vg , its sequential motifs can be detected using a sliding window of size which slides along the hamiltonian path of the graph with consecutive overlapping steps . at each step a particular motifis detected inside the window .we can accordingly estimate , the frequency of appearance of a certain motif , and define the _n - motif profile _ .the process of extracting a vg / hvg and its sequential visibility motif set is illustrated in figure [ fig:1 ] ( the concept is analogous for hvg , although the set of admissible motifs is different in both cases ) . note that since can be understood as a discrete probability distribution and is therefore a vector with unit norm ( we use the norm here ) , the number of degrees of freedom of is ( again , as we will see below , in the case considered here it is even more reduced as the number of admissible motifs will be less than 8) .+ in a recent work we introduced the concept of sequential hvg motifs and advanced a theory to compute in an exact way in the case of the hvg .it was shown that the motif statistic was useful to discriminate across different types of dynamics .the case of uncorrelated noise was shown to yield a universal motif profile , independent of the marginal distribution of the i.i.d .process and this enabled the definition of randomness test .we also found for some deterministic dynamics some _ forbidden motifs _ , which represented a persistent characteristic to test the randomness of a process ( note that if a motif of size does nt occur , then also all the motifs of size which incorporate that motif wo nt occur either ) .since vg and hvg motif profiles are a temporally constrained feature ( they are evaluated along consecutive nodes on the hamiltonian path ) its extraction can be seen as a process of dynamic symbolization . 
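To fix ideas, the extraction of the size-4 VG motif profile can be sketched in a few lines. The snippet below uses the fact that, for natural visibility, the subgraph induced by four consecutive nodes depends only on those four data (all intermediate points lie inside the window), so each window is classified by testing the three optional edges (1,3), (2,4) and (1,4) on top of the always-present path edges. The 0-7 pattern codes follow our own ordering of those three edges and are not meant to match the motif numbering of the tables in the text.

import numpy as np

def vg4_code(a, b, c, d):
    """Edge pattern of the natural visibility graph on four consecutive data:
    (1,3) exists if b lies below the segment joining a and c,
    (2,4) if c lies below the segment joining b and d,
    (1,4) if both b and c lie below the segment joining a and d."""
    e13 = b < (a + c) / 2.0
    e24 = c < (b + d) / 2.0
    e14 = (b < a + (d - a) / 3.0) and (c < a + 2.0 * (d - a) / 3.0)
    return 4 * int(e13) + 2 * int(e24) + int(e14)

def vg4_profile(x):
    """Empirical frequency of each of the 8 possible 4-node patterns along the series."""
    counts = np.zeros(8)
    for i in range(len(x) - 3):
        counts[vg4_code(*x[i:i + 4])] += 1
    return counts / counts.sum()

rng = np.random.default_rng(0)
print(np.round(vg4_profile(rng.random(200_000)), 4))            # uniform white noise
print(np.round(vg4_profile(rng.standard_normal(200_000)), 4))   # Gaussian white noise

The two printed profiles differ, which anticipates the point made later that, unlike its horizontal counterpart, the VG motif profile is sensitive to the marginal distribution of the series.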
under this perspective , the relation between hvg motifs of size and the so - called _ ordinal patterns _ ( ops ) was acknowledged in .-ops are symbols extracted from a time series representing the possible ranking output of consecutive data and are extracted from a specific time series by comparing the value of all the set of consecutive data along the series .it was not unexpected to find a link between hvg -motifs and -ops as hvg is known to be an order statistic , much as ops .indeed , in the particular case of a time series for which data do nt repeat there exists a mapping between each appearing hvg motif and a specific set of ordinal patterns ; in this scenario the forbidden motifs selected by the horizontal visibility are , in general , set of the so called _ forbidden ordinal patterns _ .of course both vg and hvg motifs analysis can be applied without requiring any further assumption to time series taking values from finite sets ( namely when ) , while the ordinal patterns approach -based uniquely on the ranking statistics- require further assumptions in that case . herewe focus in the _ natural _ version of the algorithm and explore vg motifs instead .as vgs are not invariant under monotonic transformations in the series , in general they depend on the marginal probability distribution of the time series and are not an order statistic . accordingly , there is no obvious correspondence between -ops and vg -motifs and both approaches in principle represent two independent symbolization methods that encode temporal information in a different way . in what follows we recall the theoretical framework for hvg motifs and in the next section we extend this theory to deal with vg motifs ..the set of size-4 hvg motifs are defined according to a set of relations between 4 arbitrary consecutive data , ] , with invariant density . asthis process is deterministic , it fulfils a trivial markov property such that .the hvg motif profile for this process was computed exactly in , here we compute the vg motif profile . before proceeding to compute each probability contribution , it is important to highlight a subtle point . since for this process ] .as we will see in a moment , this is already taken into account implicitly in the computation of each integral and therefore one can use the ( simpler ) inequality set for unbounded variables given in table [ tab:1 ] . 
+ we start by computing : which gives the following conditions : + + + which are satisfied for ] , and this is indeed the reason why we do nt need to use in this case the inequality set for bounded variables .we thus have }\left(\frac{1}{2},\frac{1}{2}\right)\simeq0.0591\ ] ] as by construction , we proceed by calculating : which gives the following conditions : + + + which are satisfied for \cup[0.75,0.929] ] , and therefore }\left(\frac{1}{2},\frac{1}{2}\right)\simeq0.3333\ ] ] for we have which gives the following conditions : + + + which are satisfied for ] , and thus }\left(\frac{1}{2},\frac{1}{2}\right)+b_{\left[0.95,1\right]}\left(\frac{1}{2},\frac{1}{2}\right))\simeq0.2741\ ] ] finally , by construction .altogether , we find the vg motif profile of a fully chaotic logistic map note that while the result is in this case an approximation , our theory allows for numerical estimates with arbitrary precision ( the result is not exact because the location of fixed points of the map is only approximate , although this approximation is arbitrarily close to the true values ) .for white uniform noise , ] ) , meaning that white , uniform noise has a vg motif profile which is invariant under various transformations in the original distribution of the time series .this is not a trivial property and is indeed a peculiarity of the uniform distribution , in other words the vg motif profile of white noise extracted a from bounded distribution _ generally _ depends on the bounds of the distribution .for standard white gaussian noise , the probability density and transition probability are given by the component of is given by where are now the top and bottom conditions for the variable in motif reported in table for unbounded variables [ tab:1 ] .the integrals can be evaluated numerically up to arbitrary precision and they give the following results at odds with what happens for hvg motifs , this result is different from the benchmark result for uniformly distributed white noise , thus there is not a universal vg motif profile for white noise as previously anticipated .gaussian colored ( red ) noise with exponentially decaying correlations can be simulated using an process : where is gaussian white , and is a parameter that tunes the correlation .the auto - correlation function decays exponentially , where the characteristic time .this model is markovian and stationary , with a probability density and transition probability given by }{\sqrt{2\pi ( 1-r^2)}}\ ] ] the component of is given by where , again , are the top and bottom conditions for the variable in motif reported in table table [ tab:1 ] .once set the parameter the profile can be evaluated numerically up to arbitrary precision ; here we give the profile for three possible values and in all these examples , theoretical results are in very good agreement with results obtained with numerical simulations reported in figure [ fig:1_2 ] . differently from the hvg motifs , vg motifs statisticsdoes not depend uniquely on the ranking statistics of the data and therefore the vg motif profile could be able in principle to discriminate white noises with different marginals . in the latter sections we have been able to distinguish between gaussian and uniform white noise . 
in figure [ fig:2 ]we summarize the motif frequencies of vg motifs forming , extracted from i.i.d .series with different marginals : ;\quad f(x_i)\sim1\\ \text{gaussian}\rightarrow x_i\in(-\infty,\infty);\quad f(x_i)\sim\frac{\exp(-x_i^2/2)}{\sqrt{2\pi}}\\ \text{power - law}\rightarrow x_i\in[1,\infty);\quad f(x_i)\sim x_i^{-k } , \quad k= 2.5\\ \text{exponential}\rightarrow x_i\in[0,\infty);\quad f(x_i)\sim \exp(-k x_i ) , \quad k= 2.5\\ \end{cases}\ ] ] in every case we extract series of data . the universal profile obtained for hvg is also plotted for comparison . as expected , motif profiles are different for different marginals .motifs which are symmetric to each other ( 3 and 4 , 5 and 6 ) occur with equal probabilities , something that does nt occur when the series is chaotic ( eq .[ log ] ) . according to the values obtained for the components of , one can extract some heuristic conclusions : * , encode information on the marginal distribution of the process as well as its autocorrelation structure .* is null as this motif is not a vg .this is at odds with the hvg case , where this is an admissible motif provided the probability of finding consecutive equal data in the series is finite ( e.g. for discrete - valued series ) .* the motifs associated to the pairs ( , ) ( , ) have chiral symmetry .in other words , the motifs associated to and are isomorphic , the correct permutation being ( the same holds for and ) .accordingly , for any process which is statistically time reversible , we expect these probabilities to be equal .reversible processes include linear stochastic processes ( and both white and red noise belong to this family ) , while non - invertible chaotic processes are usually time irreversible ( the fully chaotic logistic map is an example ) .time irreversibility of the process is therefore encoded in these terms . * as this is not a vg and therefore does not appear ( not admissible ) .when dealing with empirical time series , the practitioner usually faces two different but complementary challenges , namely ( i ) the size of the series and ( ii ) the possible sources of measurement noise .the first challenge can be a problem when the statistics to be extracted from the series are strongly affected by finite - size effects , whereas for the second one needs to evaluate the robustness of those statistics against noise contamination . for a statistic or feature extracted from a time series to be not just informative but useful one usually requires that statistic or feature to be robust against both problems: it needs to have fast finite - size convergence speed and to be robust against reasonably large amounts of additive noise . + in it has been already shown that the hvg motif profile has good convergence properties respect to the series size and it is also robust respect to noise contamination . herewe explore these very same problems for the case of the vg motif profile and we make a detailed comparison of its performance with the hvg motif profile in a range of situations . in general , due to finite size effects , the estimated value of any feature fluctuates and deviates with respect to its asymptotic , expected value . 
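For completeness, the stochastic series used in this comparison are straightforward to generate. The sketch below draws a stationary AR(1) series for a few illustrative values of r (the normalization by sqrt(1 - r^2), giving unit stationary variance, is one standard convention and an assumption here) and i.i.d. samples from the four marginals listed above, with the power-law case sampled by inverse transform; the resulting series can then be fed to a motif-profile estimator such as the one sketched earlier.

import numpy as np

def ar1(n, r, rng):
    """Stationary AR(1): x_t = r x_{t-1} + sqrt(1 - r**2) xi_t with xi ~ N(0, 1);
    the autocorrelation decays as r**lag, i.e. exponentially."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    noise = np.sqrt(1.0 - r * r) * rng.standard_normal(n)
    for t in range(1, n):
        x[t] = r * x[t - 1] + noise[t]
    return x

rng = np.random.default_rng(42)
for r in (0.3, 0.6, 0.9):
    x = ar1(100_000, r, rng)
    print(f"r = {r}: lag-1 autocorrelation = {np.corrcoef(x[:-1], x[1:])[0, 1]:.3f}")

k = 2.5
iid_series = {
    "uniform":     rng.random(100_000),
    "gaussian":    rng.standard_normal(100_000),
    "power-law":   (1.0 - rng.random(100_000)) ** (-1.0 / (k - 1.0)),   # pdf ~ x**(-k) on [1, inf)
    "exponential": rng.exponential(scale=1.0 / k, size=100_000),        # pdf ~ exp(-k x)
}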
for classical features such as the mean or the variance of a distribution ,these deviations are bounded and vanish with series size with a speed quantified by the central limit theorem .the estimation of the motif frequencies can be quantitative effected by finite - size fluctuations and one can even observe missing motifs ( motifs with estimated frequency ) which are not actually forbidden by the process but have not appeared by chance .this situation can be overemphasized in the presence of certain types of measurement noise . + following an approach analogous to the one followed for the forbidden ordinal patterns in ,we first perform a test to study the decay of missing motifs with the series size both in stochastic uncorrelated and correlated processes . in figure [ fig:3 ]panel a ) we plot , the average number of missing motifs in a series of size in the case of gaussian white noise and colored ( red ) gaussian noise ( for the red noise we consider the ar(1 ) process with correlation length discussed in section iii ) . for both types of noise decays exponentially to zero and already with a series of about 80 - 100 data points we can exclude the possibility of detecting missing motifs ( for both hvg and vg ) due to finite size fluctuations even in the case of correlated noise .+ as a second analysis , we explore the convergence speed of the estimated motif profile of uncorrelated and correlated stochastic series and of chaotic series ( fully chaotic logistic map ) of size to the asymptotic profile solution given in section iii . to do this we define the distance between the estimated -motif probabilities and the asymptotic value .we use norm and accordingly define in figure [ fig:3 ] panel b ) we show the trend of in log - log scale ( results are averaged over 300 realizations ) .the average distance decreases like a power - law for all the processes considered , in agreement with a central - limit - theorem - like argument . for a series of points is less than and the average distance for each of the single components is less than ( not shown ) .these results suggest that vg and hvg motif profiles have very good convergence properties and are thus robust against finite size fluctuations . to test and compare the robustness of vg and hvg motif profiles when the effect of noise contamination combines with the finite size fluctuations we consider the fully chaotic logistic map dynamics polluted with measurement ( additive ) noise in the two cases where is respectively white gaussian noise ( ) or colored gaussian noise ( ) . 
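The finite-size part of this analysis is easy to reproduce: estimate the profile from series of increasing length N, compare it with a long-series reference profile through the l1 distance, and average over realizations. A self-contained sketch for Gaussian white noise follows; note that the reference profile is itself a long-series estimate rather than the analytical one, and the series lengths and number of realizations are illustrative.

import numpy as np

def vg4_profile(x):
    def code(a, b, c, d):
        e13 = b < (a + c) / 2.0
        e24 = c < (b + d) / 2.0
        e14 = (b < a + (d - a) / 3.0) and (c < a + 2.0 * (d - a) / 3.0)
        return 4 * int(e13) + 2 * int(e24) + int(e14)
    counts = np.zeros(8)
    for i in range(len(x) - 3):
        counts[code(*x[i:i + 4])] += 1
    return counts / counts.sum()

rng = np.random.default_rng(7)
reference = vg4_profile(rng.standard_normal(1_000_000))      # stand-in for the asymptotic profile

for N in (100, 400, 1600, 6400):
    d = np.mean([np.abs(vg4_profile(rng.standard_normal(N)) - reference).sum()
                 for _ in range(100)])
    print(N, d)                                               # expected to decay roughly as a power law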
for both cases ] and ] ) .the robustness of the observed motif profile ] of the noise for the given .-\phi^4_m[\eta(\alpha)]|.\ ] ] with such definition we expect for low values of the nsr ( dominant signal , \simeq\phi^4_m[x] ] ) .furthermore , is affected by finite size effects : if we assume to have few realizations of the process of small series size , then we expect the variance calculated over the realizations to be high .in particular we have to consider that a resolution limit exists , such that when we can not say any more if the distance we measured is discriminating the signal from the noise or it is simply due to finite - size effects of the contamination noise .we define this threshold as the sum of the standard deviations of the estimated profile components ] at fixed size =6400= ( notice that for we missing motifs are not found anymore ) respectively for white gaussian noise and colored gaussian noise and for the hvg and the vg motif profile .the red solid line represents the resolution limit threshold for the process .we can see that the hvg and the vg motif profiles are more robust respect to noise contamination when this noise is correlated .in this situation the hvg motif profile seems to perform better than the vg motif profile , while in the case of uncorrelated gaussian noise the vg profile seems in turn slightly more robust than the hvg profile .+ the last step of this robustness analysis is to consider the usual situation where only very few realizations ( often a single one ) of the same process are available .our aim is to define a useful indicator which estimates for any given value of the size the maximum amount of noise contamination level for which a measure computed with only one realization of the process can be considered somewhat reliable .we define this to be the value of the nsr such that , and thus measures ( in units of noise - to - signal ratio ) the ( statistical ) reliability of the motif profile extracted form a single time series of size of the signal in the presence of measurement noise . + in figure[ fig:4 ] ( panel a ) we plot for white gaussian noise in the case of vg by considering the curve marked by orange squares and by taking the smallest value of nsr for which an orange error bar intersect the red line ( the blue box highlights the region ) .wee find , meaning that when working with a single time series of the process with size , the distance measured by using the vg motif profile is reliable up to a level of white gaussian noise contamination such that . in figure [ fig:4 ] ( panel b )we report the estimated value of for the vg and hvg motif profiles in the case of white gaussian noise and correlated gaussian noise in function of the series size = ( maximum noise contamination level considered was nsr()=8 ) .we can see that the motif profile is in general a robust measure respect to the combined effect of measurement noise and finite size : working with a single time series of only points of the process we can extract both the vg and the hvg motif profiles and expect those features to be informative respect to the underlying chaotic signal up to a level of measurement noise for which nsr=1.5 in the case of uncorrelated gaussian noise and nsr=3 in the case of correlated gaussian noise .+ also and as observed before ( figure [ fig:4]b ) ) , given the case of white gaussian noise contamination the vg motif profile ( orange squares ) seems to perform slightly better than the hvg motif profile ( green circles ) . 
for colored gaussian noisethe situation is the opposite and the hvg motif profile ( reversed gray triangles ) performs much better ( almost a gap of one unit of nsr for ) than the vg motif profile ( blue triangles ) . for both type of visibility graphs the motif profile is coherently more robust when polluted with colored noise than with white noise .this is probably due to the fact that white noise breaks up the correlation structure of the signal faster ( respect to the size ) than correlated noise .it is also interesting that both types of motif profiles are very sensible to the noise correlations although the different nature of the visibility algorithms .sequential visibility graph motifs are small subgraphs where nodes are in consecutive order within the hamiltonian path that appear with characteristic frequencies for different types of dynamics . this concept was introduced recently and a theory was developed to analytically compute the motif profiles in the case of horizontal visibility graphs ( hvgs ) . in this workwe have extended this theory to the realm of natural visibility graphs ( vgs ) , a family of graphs where the previous amount of known exact results was practically null .we have been able to give a closed form for the 4-node vg motif profile associated to general one dimensional deterministic and stochastic processes with a smooth invariant measure or continuous marginal distribution , for the cases where the variables belong to a bounded or unbounded interval . in the case where the time series is empirical and onedoes not have access to the underlying dynamics , the methodology still provides a linear time ( ) algorithm to estimate numerically such profile .we have shown that the theory is accurate and that vg motifs have similar robustness properties as hvg , yet they depend on the marginal distribution of the process and as such yield different profiles for different marginals .this is at odds with the results found for hvgs , where the motif profiles did not depend on the marginals as they behave as an order statistic .+ the detection of such motifs from a visibility graph extracted from a time series can be seen as a process of dynamic symbolization of the series itself , where the alphabet of symbols is composed by different subgraphs ( motifs ) which encode information about both data relations and their temporal ordering in their link structure . the deep similarity between hvg motifs and the so called ordinal pattern analysis -which holds mainly due to the fact that hvg is an order statistic- vanishes for vg motifs , which therefore stand as a complementary tool for time series analysis , specially relevant when the marginals play a role in the analysis . | the concept of sequential visibility graph motifs -subgraphs appearing with characteristic frequencies in the visibility graphs associated to time series- has been advanced recently along with a theoretical framework to compute analytically the motif profiles associated to horizontal visibility graphs ( hvgs ) . here we develop a theory to compute the profile of sequential visibility graph motifs in the context of natural visibility graphs ( vgs ) . this theory gives exact results for deterministic aperiodic processes with a smooth invariant density or stochastic processes that fulfil the markov property and have a continuous marginal distribution . the framework also allows for a linear time numerical estimation in the case of empirical time series . 
a comparison between the hvg and vg cases ( including an evaluation of their robustness for short series polluted with measurement noise ) is also presented . |
as an offspring of the wide interest in frame representations and sparsity promoting techniques for data recovery , proximal methods have become popular for solving large - size non - smooth convex optimization problems .the efficiency of these methods in the solution of inverse problems has been widely studied in the recent signal and image processing literature ( see for instance and references therein ) . even if proximal algorithms and the associated convergence properties have been deeply investigated , some questions persist in their use for solving inverse problems .a first question is : how can we set the parameters serving to enforce the regularity of the solution in an automatic way ?various strategies were proposed in order to address this question , but the computational cost of these methods is often high , especially when several regularization parameters have to be set .alternatively , it has been recognized for a long time that incorporating constraints directly on the solutions , instead of considering regularized functions , may often facilitate the choice of the involved parameters .indeed , in a constrained formulation , the constraint bounds are usually related to some physical properties of the target solution or some knowledge of the degradation process , e.g. the noise statistical properties .note also that there exist some conceptual lagrangian equivalences between regularized solutions to inverse problems and constrained ones , although some caution should be taken when the regularization functions are nonsmooth ( see where the case of a single regularization parameter is investigated ) .another question is related to the selection of the most appropriate algorithm within the class of proximal methods according to a given application .this also raises the question of the computation of the proximity operators associated with the different functions involved in the criterion . in this context , the objective of this paper is to propose an efficient splitting technique for solving some constrained convex optimization problems of the form : [ p : gen ] where is a real hilbert space , and 1 . for every , is a bounded linear operator from to , 2 . for every , -\infty,+\infty\right]}} ] .indeed , the projection onto the convex set defined in often does not have a closed form expression . in the present work, we will show that : 1 . when the function in corresponds to a _decomposable loss _ , i.e. it can be expressed as the sum of functions evaluated over different blocks of the vector , the problem of computing the projection onto the associated convex set can be addressed by resorting to a splitting approach that decomposes the set into a collection of epigraphs and a half - space ; 2 . the projection operator associated with an epigraph ( namely the _ epigraphical projection _ ) has a closed form for some functions of practical interest , such as the absolute value raised to a power , the distance to a convex set and the -norm with ; 3 . in the context of image restoration ,regularity constraints based on total variation and non - local total variation can be efficiently handled by the proposed epigraphical splitting , which significantly speeds up the convergence ( in terms of execution time ) with respect to standard iterative solutions .the paper is organized as follows . 
in section[ sec : rec ] , we review the algorithms which are applicable for solving large - size convex optimization problems , so motivating the choice of proximal methods , and we review the variable - splitting techniques commonly used with these methods . in order to deal with a constraint expressed under the form , we propose in section [ sec : proposed ] a novel splitting approach involving an epigraphical projection .in addition , closed form expressions for specific epigraphical projections are given .experiments in two different contexts are presented in section [ sec : exp ] .the first ones concern an image reconstruction problem , while the second ones are related to pulse shape design for digital communications .finally , some conclusions are drawn in section [ sec : con ] . + * notation * : let be a real hilbert space endowed with the norm and the scalar product . denotes the set of proper lower - semicontinuous convex functions from to -\infty,+\infty\right]}} ] , which has a closed - form expression for some specific choices of , such as circulant matrices involved in image restoration .the solution that we will propose in this work also introduces auxiliary variables . however , our objective is not to deal with linear transformations of the data but with a projection which does not have a closed - form expression .consequently , the proposed solution departs from the usual splitting methods , in the sense that our approach leads to a collection of epigraphs and a half - space constraint sets , while the usual splitting techniques yield linear constraints .[ sec : spl ] we now turn our attention to convex sets for which the associated projection does not have a closed form and we show that , under some appropriate assumptions , it is possible to circumvent this difficulty . in problem[ p : gen ] , assume that denotes such a constraint and that it can be modelled as : for every , where .hereabove , the generic vector has been decomposed into blocks of coordinates as follows ^\top\ ] ] and , for every , and is a function in such that .the idea underlying our approach consists of introducing an auxiliary vector , so that constraint can be equivalently rewritten as align & _= 1^l_1 _ 1^ ( ) _ 1,[e : const1 ] + & ( \{1, ,l_1})h_1^()(y^ ( ) ) _ 1^()[e : const2 ] .let us now introduce the closed half - space of defined as with , and the closed convex set then , constraint means that , while constraint is equivalent to . in other words ,the constraint can be split into the two constraints and , provided that an additional vector is introduced in problem [ p : gen ] .the resulting criterion takes the form : [ p : epi ] note that the additional constraints can be easily handled by proximal algorithms as far as the projections onto the associated constraint sets can be computed . in the present case , the projection onto is well - known , whereas the projection onto is given by where , vector is blockwise decomposed as ^\top ] .assume that if -\infty,0] ] , then , for every , where is the unique solution on of .since is an even function , is an odd function ( * ? ? ?* remark 4.1(ii ) ) . in the following , we thus focus on the case when ,+\infty\right[}} ] , then .when , and , from ( * ? ? ?* example 4.6 ) , it can be deduced that when , is differentiable and , according to , is uniquely defined as where , according to ( * ? ? ?* corollary 2.5 ) , .this allows us to deduce that .let us now focus on the case when ,{\ensuremath{{+\infty}}}[ ] , it can be deduced from ( * ? ? 
?* corollary 2.5 ) , that . since , yields .on the other hand if , as the proximity operator of a function from to is continuous and increasing ( * ? ? ?* proposition 2.4 ) , .since is differentiable in this case , and , allows us to deduce that is the unique value in satisfying .it can be concluded that , when ,{\ensuremath{{+\infty}}}[ ] , \zeta^{(\ell ) } < 0 \beta^{(\ell ) } = 1 ] .the resulting epigraphical projection is given below .+ [ ex : norm_l2 ] assume that is given by .+ then , for every , where .+ the epigraph of the euclidean norm is the so - called lorentz convex symmetric cone and the above result is actually known in the literature . as it will be shown in section [ sec : exp ] , this expression of the epigraphical projection is useful to deal with multivariate sparsity constraints or total variation bounds , since such constraints typically involve a sum of functions like composed with linear operators corresponding to analysis transforms or gradient operators . * * infinity norms defined as : for every and , where ,+\infty\right[}}^{m^{(\ell)}} ] , which is given by . then , the problem reduces to which is also equivalent to calculate , where is such that , for every , by using ( * ? ? ?* proposition 12 ) , we have . the function belongs to since for every , is finite convex and is finite convex and increasing on .in addition , is differentiable and it is such that , for every and every , for every , as is characterized by , there exists such that and this yields , hence , and we have : . the uniqueness of satisfying this inequality follows from the uniqueness of . + when , the function in reduces to the standard infinity norm for which the expression of the epigraphical projection has been recently given in .note that this proposition can be employed to efficient deal with regularization which has attracted much interest recently .[ sec : exp ] in this section , we provide numerical examples to illustrate the usefulness of the proposed epigraphical projection method .the first presented experiment focuses on applications in image restoration involving projections onto -balls where . the second experiment deals with a pulse shape design problem based on proposition [ ex : epidistl ] .set .denote by the signal of interest , and by an observation vector such that .it is assumed that is a linear operator , is a decimation operator, and is a realization of a zero - mean white gaussian noise vector .the recovery of from the degraded observations is performed by following a variational approach which aims at solving the following problem ^{\overline{n}}}}}{\operatorname{minimize}}\;\;\norm}}{dax - z}^2 \quad\operatorname{s.t.}\quad \sum_{\ell = 1}^l \norm{\omega_\ell \, b_\ell \ , f \ , x}_p \le \eta,\ ] ] where with , is a real positive constant , and is the linear operator associated with an analysis transform .furthermore , for every , is a _ block - selection linear operator _ which selects a block of data from its input vector. for every , denotes an diagonal matrix of real positive weights .the term is the _ data fidelity _ corresponding to the negative log - likelihood of .the bounds and allow us to take into account the _ value range _ of each component of .the second constraint involved in problem promotes solutions having a sparse analysis representation .indeed , it reduces to the weighted -norm criterion found in when each block reduces to a singleton ( i.e. 
, and , for every , and ) .it captures the criteria present in when .it matches the criterion proposed in when .note that overlapping blocks in constraint are dealt with by increasing the dimensionality of the problem ( through the linear transform ) and then using an usual non - overlapping block selection operator ( denoted ) .let us define and where and the same decomposition as in is performed .then , it can be observed that problem is a particular case of problem [ p : gen ] where , , , , , is the above -ball , and ^{\overline{n}} ] generated by m+lfbf or sdmm is guaranteed to converge to a ( global ) minimizer of problem . in the context of image restoration ,the quality of the results obtained through a variational approach strongly depends on the ability to model the regularity present in images . since natural images are often piecewise smooth , popular regularization models tend to penalize the image gradient . in this regard , _ total variation _ ( tv ) has emerged as a simple , yet successful , convex optimization tool .however , tv fails to preserve textures , details and fine structures , because they are hardly distinguishable from noise . to improve this behaviour ,the tv model has been extended by using a non - locality principle .another approach to overcome these limitations is to replace the gradient operator with a frame representation which yields a more suitable sparse representation of the image .the connections between these two different approaches have been studied in .it is still unclear which approach leads to the best results .however , there are some evidences that _ non - local _ ( nl ) tv may perform better in some image restoration tasks .we thus focus our attention on nltv - based constraints , although our proposed algorithm is quite general and it can also be adapted to frame - based approaches . by appropriately selecting the operators , and in problem , we can integrate the nltv measures in a constrained convex optimization approach . in our experiments , we propose to evaluate the performances of two nltv constraints that constitute particular cases of the one considered in when .they are described for 2d data in the following . *_ -nltv _ this constraint has the form where is the _ neighbourhood support _ at position and is the set of positions located into a window centered at , where is odd .this constraint is a particular case of the one considered in where and is a concatenation of discrete difference operators with .more precisely , for every , is a 2d filter with impulse response : for every , in addition , for every , , selects the components of corresponding to differences , and the positive weights are gathered in the diagonal matrix . * _ -nltv_ we consider the following constraint we proceed similarly to the previous constraint , except that the -norm is now substituted for the -norm .note that the classical isotropic tv constraint ( designated by -tv in the following ) constitutes a particular case of the -nltv one , where each neighbourhood only contains the horizontal / vertical neighbouring pixels ( ) and the weights are .similarly , the -tv constraint is a special case of the -nltv one . to set the weights , we got inspired from the non - local means approach originally described in . here , for every and , the weight depends on the similarity between patches built around the pixels and of the image . since our degradation process involves some missing data , a two - step approach has been adopted . 
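the euclidean - norm based constraints above are handled through the closed - form projection onto the epigraph of the euclidean norm recalled in the previous section . as a concrete illustration ( our own sketch with illustrative names , not the authors ' implementation ) , the projection of a vector - scalar pair onto that epigraph ( the second - order , or lorentz , cone ) can be written as follows .

import numpy as np

def proj_epi_euclidean_norm(y, zeta):
    # projection of (y, zeta) onto { (u, t) : ||u||_2 <= t }, the epigraph of
    # the euclidean norm (second-order cone); this is the closed form used for
    # the euclidean-norm blocks of the epigraphical splitting
    ny = np.linalg.norm(y)
    if ny <= zeta:                       # the pair already lies in the epigraph
        return y, zeta
    if ny <= -zeta:                      # the closest point is the apex of the cone
        return np.zeros_like(y), 0.0
    alpha = 0.5 * (1.0 + zeta / ny)      # standard second-order cone projection factor
    return alpha * y, alpha * ny

# small sanity check: the output is always feasible
rng = np.random.default_rng(1)
for _ in range(5):
    u, t = proj_epi_euclidean_norm(rng.normal(size=4), rng.normal())
    assert np.linalg.norm(u) <= t + 1e-10

within the splitting , one such projection is applied in parallel to every block of the constraint , while the auxiliary variables are further projected onto the closed half - space that bounds their sum .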
in the first step ,the -tv approach is used in order to obtain an estimate of the target image .this estimate is subsequently used in the second step to compute the weights through a _ self - similarity _ measure , yielding where , ,+\infty\right[}} ] . for the -ball projectors needed by the direct method, we used the software publicly available on - line .note that sdmm requires to invert the matrix , which we address by resorting to the solution proposed in . in order to make the operators and diagonalizable in the dft domain , a periodic extension of the image is performed . in practice, the constraint bound may not be known precisely . although it is out of the scope of this paper to devise an optimal strategy to set this bound , it is important to evaluate the impact of its choice on our method performance . in the following ,we compare the epigraphical approach with the direct computation of the projections ( via standard iterative solutions ) for different choices of regularization constraints and values of . * _ total variation _ tables [ tab : tv_eta ] and [ tab : tv_eta_inf ] report a comparison between the direct and epigraphical methods for -tv and -tv , respectively . for more readability ,the values of are expressed as a multiplicative factor of the -tv - semi - norm of the original image .the convergence times indicate that the epigraphical approach yields a faster convergence than the direct approach for sdmm and m+lfbf .moreover , the numerical results show that errors within from the optimal value for lead to snr variations within .[ fig : tv_prof : time ] and [ fig : tv_prof_inf : time ] show the relative error }-x^{[\infty]}}/\norm{x^{[\infty]}} ] denotes the solution computed after a large number of iterations ( typically , 5000 iterations ) .the dashed line presents the results for the direct method while the solid line refers to the epigraphical one .these plots show that the epigraphical approach is faster despite it requires more iterations in order to converge .this can be explained by the computational cost of the subiterations required by the direct projections onto the -ball .+ + c@^c @ ^c@^c ^c@^c @^c @^c@^c @ ^c@^c ^c@^c @^c @ ^c & & & & + ( r)10 - 15 & & & & & & & & & & + ( r)3 - 4(lr)5 - 6(r)10 - 11(lr)12 - 13 & & # iter . & sec .& sec . & & & & # iter . & sec . &# iter . & sec . & & + 0.45 & 19.90 0.733 & 107 & 6.07 & 174 & 2.03 & & 2.99 & & 113 & 6.15 & 182 & 3.49 & & 1.76 + 0.50 & 20.18 0.745 & 117 & 6.95 & 159 & 1.95 & & 3.57 & & 116 & 6.97 & 168 & 3.44 & & 2.03 + * * 0.56 & 20.23 0.745 & 129 & 8.36 & 153 & 1.90 & & 4.41 & & 124 & 8.17 & 159 & 3.01 & & 2.72 + 0.62 & 20.16 0.737 & 141 & 9.44 & 155 & 1.83 & & 5.16 & & 131 & 8.62 & 159 & 3.26 & & 2.65 + 0.67 & 20.00 0.724 & 154 & 10.20 & 162 & 2.17 & & 4.71 & & 140 & 10.00 & 164 & 2.84 & & 3.52 + + [ tab : tv_eta ] + + c@^c @ ^c@^c ^c@^c @^c @^c@^c @ ^c@^c ^c@^c @^c @ ^c & & & & + ( r)10 - 15 & & & & & & & & & & + ( r)3 - 4(lr)5 - 6(r)10 - 11(lr)12 - 13 & & # iter . & sec . &# iter . & sec . & & & & # iter . & sec& # iter . & sec . 
& & + 0.45 & 19.52 0.726 & 160 & 312.55 & 231 & 3.89 & & 80.43 & & 183 & 347.10 & 252 & 6.43 & & 53.96 + 0.50 & 19.71 0.734 & 168 & 342.01 & 215 & 3.75 & & 91.31 & & 185 & 368.24 & 236 & 5.83 & & 63.17 + * * 0.56 & 19.71 0.728 & 180 & 373.60 & 211 & 3.49 & & 106.93 & & 189 & 386.29 & 229 & 5.53 & & 69.91 + 0.62 & 19.59 0.715 & 196 & 412.68 & 216 & 3.67 & & 112.50 & & 198 & 411.04 & 229 & 5.86 & & 70.15 + 0.67 & 19.39 0.698 & 211 & 448.77 & 223 & 3.76 & & 119.27 & & 207 & 437.66 & 234 & 5.76 & & 75.96 + + [ tab : tv_eta_inf ] + + c@^c @ ^c@^c ^c@^c @^c @^c@^c @ ^c@^c ^c@^c @^c @ ^c & & & & + ( r)10 - 15 & & & & & & & & & & + ( r)3 - 4(lr)5 - 6(r)10 - 11(lr)12 - 13 & & # iter . & sec . &# iter . & sec . & & & & # iter . & sec . & # iter . & sec . & & + + 0.43 & 20.82 0.757 & 208 & 20.67 & 211 & 10.93 & & 1.89 & & 82 & 6.95 & 93 & 3.76 & & 1.85 + 0.49 & 20.97 0.765 & 167 & 16.84 & 177 & 9.01 & & 1.87 & & 75 & 6.61 & 83 & 3.47 & & 1.91 + 0.54 & 21.02 0.767 & 147 & 15.31 & 157 & 7.93 & & 1.93 & & 71 & 6.45 & 77 & 3.15 & & 2.04 + 0.59 & 20.98 0.764 & 134 & 14.44 & 148 & 7.67 & & 1.88 & & 72 & 6.58 & 77 & 3.24 & & 2.03 + 0.65 & 20.88 0.757 & 133 & 14.82 & 136 & 7.11 & & 2.08 & & 76 & 7.53 & 80 & 3.27 & & 2.30 + + 0.43 & 21.00 0.766 & 301 & 56.03 & 343 & 45.18 & & 1.24 & & 82 & 8.51 & 90 & 5.43 & & 1.57 + 0.49 & 21.15 0.773 & 260 & 49.03 & 302 & 39.64 & & 1.24 & & 75 & 7.90 & 81 & 4.90 & & 1.61 + * * 0.54 & 21.20 0.775 & 242 & 46.31 & 283 & 37.72 & & 1.23 & & 71 & 8.26 & 75 & 4.47 & & 1.85 + 0.59 & 21.17 0.773 & 231 & 46.20 & 268 & 36.56 & & 1.26 & & 70 & 7.94 & 74 & 4.49 & & 1.77 + 0.65 & 21.08 0.767 & 220 & 44.64 & 252 & 34.46 & & 1.30 & & 73 & 8.40 & 76 & 4.59 & & 1.83 + + [ tab : nltv3_2 ] + + c@^c @ ^c@^c ^c@^c @^c @^c@^c @ ^c@^c ^c@^c @^c @ ^c & & & & + ( r)10 - 15 & & & & & & & & & & + ( r)3 - 4(lr)5 - 6(r)10 - 11(lr)12 - 13 & & # iter . & sec . & # iter . & sec . & & & & # iter# iter . & sec . & & + + 0.43 & 20.78 0.762 & 434 & 1470.46 & 449 & 25.03 & & 58.76 & & 225 & 730.26 & 244 & 12.35 & & 59.15 + 0.49 & 20.86 0.764 & 395 & 1319.64 & 413 & 22.86 & & 57.72 & & 221 & 692.25 & 237 & 11.92 & & 58.08 + 0.54 & 20.83 0.760 & 363 & 1193.61 & 382 & 21.46 & & 55.62 & & 217 & 667.50 & 233 & 11.46 & & 58.22 + 0.59 & 20.73 0.752 & 340 & 1093.26 & 354 & 19.77 & & 55.30 & & 216 & 653.79 & 230 & 11.67 & & 56.01 + 0.65 & 20.58 0.740 & 322 & 1007.55 & 336 & 18.64 & & 54.06 & & 216 & 643.00 & 229 & 11.45 & & 56.18 + + 0.43 & 20.91 0.769 & 384 & 2069.62 & 452 & 64.42 & & 32.13 & & 233 & 863.01 & 252 & 18.47 & & 46.73 + 0.49 & 20.98 0.771 & 326 & 1700.34 & 412 & 58.66 & & 28.99 & & 231 & 822.06 & 247 & 18.36 & & 44.77 + * * 0.54 & 20.97 0.767 & 290 & 1476.98 & 389 & 55.35 & & 26.69 & & 229 & 787.61 & 245 & 17.90 & & 43.99 + 0.59 & 20.88 0.759 & 276 & 1336.16 & 374 & 52.64 & & 25.38 & & 230 & 772.42 & 245 & 17.57 & & 43.96 + 0.65 & 20.75 0.749 & 268 & 1220.14 & 362 & 51.45 & & 23.72 & & 231 & 760.86 & 245 & 17.81 & & 42.72 + + [ tab : nltv3_inf ] * _ -nltv _ table [ tab : nltv3_2 ] collects the results of -nltv for different values of neighbourhood size . to set the weights ,the first tv estimate is computed with .the convergence times show that the epigraphical approach is faster than the direct one for both considered algorithms .moreover , it can be noticed that errors within from the optimal bound value lead to snr variations within . 
in [ fig : nltv2_prof : time ] , a plot similar to those in figs .[ fig : tv_prof : time ] and [ fig : tv_prof_inf : time ] show the convergence profile .the epigraphical method requires about the same number of iterations as the direct one in order to converge .this results in a time reduction , as a single iteration of the epigraphical method is faster than one iteration of the direct method . * _ -nltv _ table [ tab : nltv3_inf ] and [ fig : nltv_inf_prof : time ] show the results obtained with the -nltv constraint . similarly to -tv , the epigraphical approach greatly speeds up the convergence times . in this section ,the quality of images reconstructed with our variational approach is evaluated for different choices of regularization constraints and comparisons are made with a state - of - the - art method .extensive tests have been carried out on several standard images of different sizes .the snr and ssim results obtained by using the various previously introduced tv - like constraints are collected in table [ tab : all ] .in addition , a comparison is performed between our method using an m+lfbf implementation and the gradient projection for sparse reconstruction ( gpsr ) method , which also relies on a variational approach .the constraint bound for both methods was hand - tuned in order to achieve the best snr values .the best results are highlighted in bold .a visual comparison is made in [ fig : tv_images ] , where two representative images are displayed .these results demonstrate the interest of considering non - local smoothness measures .indeed , nltv with -norm proves to be the most effective constraint with gains in snr and ssim ( up to 1.82 db and 0.042 ) with respect to -tv , which in turn outperforms gpsr. the better performance of nltv seems to be related to its ability to better preserve edges and thin structures present in images . in terms of computational time , gpsr is about twice faster than -nltv .our codes were developed in matlab , the operators and being implemented in c using mex files . in order to complete the analysis ,we report in [ fig : noise ] snr / ssim comparisons between -nltv and -tv for different blur and noise configurations .these plots show that -nltv provides better results regardless of the degradation conditions . [cols="<,^,>,>,>,>,^,>,>",options="header " , ]we have proposed a new epigraphical technique to deal with constrained convex variational formulations of inverse problems with the help of proximal algorithms . in this paper , our attention has been turned to constraints based on distance functions and weighted -norms with . in the context of 1d signals , we have shown that constraints based on distance functions are useful for pulse shape design . in the context of images ,we have used -norm constraints to promote block - sparsity of analysis representations .the obtained results demonstrate the better performance of non - local measures in terms of image quality .our results also show that the -norm has to be preferred over the -norm for image recovery problems . 
however , it would be interesting to consider alternative applications of -norms such as regression problems .furthermore , the experimental part indicates that the epigraphical method converges faster than the approach based on the direct computation of the projections via standard iterative solutions .parallelization of our codes should even allow us to accelerate them .note that , although the considered application involves two constraint sets , the proposed approach can handle an arbitrary number of convex constraints .the epigraphical approach could also be used to develop approximation methods for addressing more general convex constraints .10 p. l. combettes and j .- c .proximal splitting methods in signal processing . in h.h. bauschke , r. s. burachik , p. l. combettes , v. elser , d. r. luke , and h. wolkowicz , editors , _ fixed - point algorithms for inverse problems in science and engineering _ , pages 185212 .springer - verlag , new york , 2011 . c. chaux , m. el gheche , j. farah , j .- c .pesquet , and b. pesquet - popescu . a parallel proximal splitting method for disparity estimation from multicomponent images under illumination variation ., 47(3):167178 , november 2013 .s. ono and i. yamada .poisson image restoration with likelihood constraint via hybrid steepest descent method . in _ proc .acoust . , speech signal process ._ , pages x+5 , vancouver , canada , may 26 - 31 2013 . y. censor , w. chen , p. l. combettes , r. davidi , and g. t. herman . on the effectiveness of projection methods for convex feasibility problems with linear inequality constraints . , 51(3):10651088 , 2012 .i. yamada .the hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings . in_ inherently parallel algorithms for feasibility and optimization and their applications _ , pages 473504elsevier , 2001 .a. foi and g. boracchi .foveated self - similarity in nonlocal image filtering . in _ proc .spie electronic imaging 2012 , human vision and electronic imaging xvii _ ,volume 8291 , burlingame ( ca ) , usa , jan . 2012 .r. gaetano , g. chierchia , and b. pesquet - popescu .parallel implementations of a disparity estimation algorithm based on a proximal splitting method . in _ visual communication and image processing _ , san diego , usa , 2012 . | we propose a proximal approach to deal with a class of convex variational problems involving nonlinear constraints . a large family of constraints , proven to be effective in the solution of inverse problems , can be expressed as the lower level set of a sum of convex functions evaluated over different , but possibly overlapping , blocks of the signal . for such constraints , the associated projection operator generally does not have a simple form . we circumvent this difficulty by splitting the lower level set into as many epigraphs as functions involved in the sum . a closed half - space constraint is also enforced , in order to limit the sum of the introduced epigraphical variables to the upper bound of the original lower level set . in this paper , we focus on a family of constraints involving linear transforms of distance functions to a convex set or norms with . in these cases , the projection onto the epigraph of the involved function has a closed form expression . the proposed approach is validated in the context of image restoration with missing samples , by making use of constraints based on non - local total variation . 
experiments show that our method leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems . a second application to a pulse shape design problem is provided in order to illustrate the flexibility of the proposed approach . |
quantum key distribution can enable two authentic parties , the sender ( alice ) and the receiver ( bob ) , to obtain unconditional secret keys without restricting the power of the eavesdropper ( eve ) .on the premise of unconditional security , the higher key rate and the longer distance are constantly pursued . to enhance the tolerable excess noise of the continuous - variable quantum key distribution ( cv qkd ) ,the two - way cv qkd protocols are proposed , where bob initially sends a mode to alice , and alice encodes her information by applying a random displacement operator to the received mode and then sends it back to bob .bob detects both his original mode and received mode to decode alice s modulations .although the two - way cv qkd protocols can remarkably enhance the tolerable excess noise , it needs to implement the tomography of the quantum channels to analyze the security under general collective attack , which is complicated in practice .therefore , we proposed a feasible modified two - way protocol by replacing the displacement operation of the original two - way protocol with a passive operation on alice s side .however , the source noise and both detection efficiency and detection noise on bob s side are not considered in the modified protocol .it has been proved that adding a proper noise on bob s detection side in one - way cv qkd can enhance the tolerable excess noise and the secret key rate in reverse reconciliation .this idea has been applied to the original two - way protocol in , while the scheme did not consider the correlation between the two channels .the correlated noise affects the secret key rate . in this paper ,we apply the idea of adding noise to our modified two - way protocol to enhance the tolerable excess noise and the secret key rate . considering the correlation between the channels ,the security of the two - way cv qkd with added noise against entangling cloner collective attacks is analysed and numerically simulated .the entanglement - based ( eb ) scheme of the two - way cv - qkd protocol with added detection noise is shown in ( a ) , where the dashed box at is the added noise and the other part is our original two - way protocol . the added noise is equivalent to an einstein - podolsky - rosen ( epr ) pair with the variance of coupled into the channel by a beam splitter with the transmittance of . the protocol is described as follows ._ step one_. bob initially keeps one mode of an epr pair with the variance of while sending the other mode to alice through the forward channel . _step two_. alice measures one mode of her epr pair ( variance : ) to get the variables \{ , } with a heterodyne detection , and couples the other mode of her epr pair with the received mode from bob by a beam splitter ( transmittance : ) .one output mode of the beam splitter is measured with homodyne detection and the other output mode is sent back to bob through the backward channel . _step three_. with a beam splitter ( transmittance : ) , bob couples another epr pair ( variance : ) which is equivalent to the added noise with his received mode .the two modes and of this epr pair are measured .bob performs homodyne detections on both modes and to get the variables ( or ) and ( or ) , respectively ._ step four_. alice and bob implement the reconciliation and privacy amplification . 
in this step ,the measurement values of the modes , , , , and are used to estimate the channel s parameters and bob uses ( ) to construct the optimal estimation to alice s corresponding variables ( ) , where is the channel s total transmittance .the prepare - and - measure ( pm ) scheme of the two - way protocol can be equivalent to the eb scheme .in fact , alice heterodyning one half of the epr pair at is equivalent to remotely preparing a coherent state , and bob performing homodyne detection on one half of the epr pair at is equivalent to remotely preparing a squeezed state .the homodyne detection preceded by an epr pair coupled by a beam splitter at is equivalent to bob s real homodyne detection with efficiency and electronic noise .note that and quadratures are randomly measured in homodyne detection and only quadrature is analysed in the following .( a ) the eb scheme of protocol .bob keeps one half of the epr pair ( epr ) and sends the other half to alice .alice measures one mode of her epr pair ( ) and one mode from a beam splitter .the other mode from this beam splitter is returned back to bob .the letters ( e.g. ) beside arrows : the mode at the arrow ; e : eve s whole mode ; the dashed box at : the heterodyne detection ; the dashed box at : the added noise . ( b ) the equivalent scheme to ( a ) with postprocessing . bob uses a symplectic transformation to change the modes and into and .,width=396 ]first , we show that the gaussian attack is optimal to the two - way protocol in general collective attack . in ( a ) ,since all modes of alice and bob are measured , eve can get the purification of the state of alice and bob .in addition , the and quadratures of alice and bob s modes are not mixed via heterodyne or homodyne detection and alice and bob use the second - order moments of the quadratures to bound eve s information .therefore , the two - way protocol can satisfy the requirement of optimality of gaussian collective attack ( i.e. , continuity , invariance under local gaussian unitary and strong subadditivity ) . when the corresponding covariance matrix of the state is known for alice and bob, the gaussian attack is optimal .therefore , only eve s gaussian collective attack is needed to be considered in the following security analysis . in ( a ) , the secret key rate of the two - way protocol in reverse reconciliation is where is the reconciliation efficiency , [ is the mutual information between bob and alice ( eve ) , and are alice s variance and conditional variance , and are eve s von neumann entropy and conditional von neumann entropy on bob s data , respectively . in the following , and calculated by the methods in . for gaussian state , the entropy can be calculated from its corresponding covariance matrix . since the state is a pure state , then .the corresponding covariance matrix of the state is where is a identity matrix , the diagonal elements correspond to the variances of and quadratures of the modes , , , , and in turn , e.g. , and the nondiagonal elements correspond to the covariances between modes , e.g. .therefore , eve s entropy where and is the symplectic eigenvalue of which is the function of the element of , seen in [ appendixa ] .bob uses to estimate alice s variable , which is equivalent to that bob uses a symplectic transformation to change the modes and into the modes and where the quadrature of the mode is , as shown in ( b ) . 
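the entropies entering the key rate are functions of symplectic eigenvalues of covariance matrices such as the ones above . the following sketch ( illustrative only ; the mode ordering , the shot - noise units and the base-2 convention for the g function are our assumptions ) computes symplectic eigenvalues and the corresponding von neumann entropy , and checks that a pure epr state of variance v has zero entropy while its reduced single - mode state has entropy g ( v ) .

import numpy as np

def symplectic_eigenvalues(gamma):
    # symplectic eigenvalues of a 2n x 2n covariance matrix written in
    # shot-noise units with mode ordering (q1, p1, q2, p2, ...)
    n = gamma.shape[0] // 2
    omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    omega = np.kron(np.eye(n), omega1)            # symplectic form
    # the symplectic eigenvalues are the moduli of the eigenvalues of i*omega*gamma
    ev = np.linalg.eigvals(1j * omega @ gamma)
    return np.sort(np.abs(ev))[::2]               # each value appears twice

def g_entropy(nu):
    # von neumann entropy contribution of one symplectic eigenvalue (in bits)
    if nu <= 1.0 + 1e-12:
        return 0.0
    a, b = (nu + 1.0) / 2.0, (nu - 1.0) / 2.0
    return a * np.log2(a) - b * np.log2(b)

def von_neumann_entropy(gamma):
    return sum(g_entropy(nu) for nu in symplectic_eigenvalues(gamma))

# example: a two-mode squeezed vacuum (epr) state of variance v is pure, so its
# entropy vanishes, while its reduced single-mode thermal state has entropy g(v)
v = 4.0
c = np.sqrt(v**2 - 1.0)
gamma_epr = np.block([[v * np.eye(2), c * np.diag([1.0, -1.0])],
                      [c * np.diag([1.0, -1.0]), v * np.eye(2)]])
print(von_neumann_entropy(gamma_epr))                   # ~0 (pure state)
print(von_neumann_entropy(np.array([[v, 0.0], [0.0, v]])))  # g(v) for the reduced mode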
since ( b ) is equivalent to ( a ) with postprocessing , we use ( b ) to calculate in the following .after the symplectic transformation , the corresponding covariance matrix of the mode is \gamma_{b_2b_1n_2n_1a_2a_1}[\gamma_k\oplus\mathbb{i}_4]^t,\ ] ] where , is a continuous - variable c - not gate since the state is a pure state when bob gets by measuring the modes , then .the corresponding covariance matrix of the state conditioned on is ^{mp}c_{b_4},\ ] ] where and are the corresponding reduced matrixes of the states and in , respectively , is their correlation matrix , and denotes the inverse on the range .therefore , we have where is the symplectic eigenvalue of which is the function of the element of , seen in [ appendixa ] .by substituting equations ( [ se ] ) and ( [ s(e|xbpb ) ] ) into equation ( [ kr ] ) , the secret key rate is obtained in experiment , alice and bob can calculate the element and of equations ( [ rhoab ] ) and ( [ gammconditionxb ] ) by the measurement values of the modes , , , , and .therefore , according to equation ( [ kr(amn ) ] ) , the secret key rate in general collective attack is obtained without the assumption that the two channels are uncorrelated .the analytic representations of equation ( [ kr(amn ) ] ) is too complex to give here .we give a numerical simulation in the following .the eb scheme of protocol against entangling cloner attacks on correlated channels . , : the modes introduced into the channels ; : half beam splitter ; : beam splitter .alice and bob are the same as ( a).,width=396 ] ( a ) tolerable excess noise as a function of the transmission distance for , , and protocols , where .( b ) tolerable excess noise as a function of the transmission distance for protocol , where .the curves of ( a ) and ( b ) are plotted for , and .,width=396 ] ( a ) secret key rate as a function of the transmission distance for , , , and protocols , where , , , and .( b ) optimal choice of the added noise .,width=396 ] ( a ) secret key rate as a function of the transmission distance for protocol , where , , , and .( b ) optimal choice of the added noise .,width=396 ] ( a ) tolerable excess noise as a function of the transmission distance for high modulation for protocol .( b ) secret key rate as a function of the transmission distance for high modulation for protocol , where .the curves of ( a ) and ( b ) are plotted for , , and .,width=396 ]for simplicity in numerical simulation , when there is no eve , the forward and the backward channels are assumed to be independent with the identical transmittances and noises referred to the input , where is the channel excess noises referred to the input . it is equivalent to eve implementing two independent collective entangling cloner attacks which are a gaussian collective attack investigated in detail in . when eve implements more complicated two - mode attack , the correlation between the two channels is induced .shows that eve implements two correlated entangling cloner attacks . on conditionthat eve introduces the equivalent variances of the modes and into the two channels , the noise referred to the input of the backward channel is , where the second item on the right - hand side is induced extra by the correlation between the two channels , i.e. , the part of the mode introduced into the backward channel correlating with the forward channel interferes with the mode from alice , is the coefficient representing the degree of the correlation , e.g. 
, represents that the two channels are uncorrelated .the added noise is .we can calculate the elements of equation ( [ rhoab ] ) \ } \mathbb{i},\nonumber \\ & \gamma_{n_2}=\{t\!_n v\!_n + t ( 1 - t\!_n ) [ v\!_a - t\!_a v\!_a +t t\!_a ( v + \chi ) + \chi_2 ] \ } \mathbb{i},\nonumber \\ & \gamma_{a_2}= [ t\!_a v\!_a + t ( 1- t\!_a ) ( v + \chi ) ] \mathbb{i},\nonumber \\ & c_1\!= -\eta c_6=t\sqrt{t\!_a t\!_n ( v^2 - 1 ) } \sigma_{\!z } , \nonumber \\ & c_2\!=\!\sqrt{(1\!-\ ! t\!_n ) t\!_n } \{v\!_n - t [ v\!_a - t\!_a v\!_a + t t\!_a ( v + \chi ) + \chi_2]\ } \mathbb{i } , \nonumber \\ & c_3\!=\frac{1}{\eta}c_8=\sqrt{(1- t\!_n ) ( v_n^2 - 1 ) } \sigma_{\!z } , \nonumber \\ & c_4\!=-\eta c_9=\!\!\!\sqrt{t ( \!1\!-\ ! t\!_a\ ! ) t\!_n } \left[\!\!\sqrt{t\!_a}v\!_a\ ! -\! t\ ! \sqrt{t\!_a}(\!v \!+\ !\chi ) \!-\!n_c\!\sqrt{t}\varepsilon\right ] \mathbb{i } , \nonumber \\ & c_5\!=-\eta c_{10}=\sqrt{t ( 1- t\!_a ) t\!_n ( v^2\!_a-1 ) } \sigma_{\!z } , \nonumber \\ & c_7\!=-\sqrt{t ( 1 - t\!_a ) ( v^2 - 1 ) } \sigma_{\!z } , \nonumber \\ & c_{11}\!=\sqrt{t\!_a ( v^2\!_a-1 ) } \sigma_{\!z},\end{aligned}\ ] ] and where the typical fiber channel loss is assumed to be 0.2 db / km . and are in shot - noise units . substituting equations ( [ elementgmmaab ] ) and ( [ iba ] ) into equation ( [ kr(amn ) ] ) , the optimal secret key rate and the optimal tolerable excess noise of the two - way protocol can be obtained by adjusting the added noise .when , the two channels are uncorrelated , which is equivalent to eve implementing two independent gaussian cloner attacks . for comparison , the heterodyne protocol ( ) and the homodyne protocol ( ) of one - way cv - qkd protocol with coherent state and the original modified two - way protocols and are also given in figures [ fig4tolerablenoiseuncorrelated](a ) and [ fig5kuncorrelated](a ) .( a ) shows the tolerable excess noise as a function of the transmission distance , where , , and .the proper added noise is chosen to make of protocol optimal .the numerical simulation result indicates that the tolerable excess noise of the two - way protocol with added noise is more than that without added noise and surpasses that of the one - way cv - qkd protocol .therefore , it indicates that properly added noise is useful to enhance in the two - way protocol .( b ) shows the tolerable excess noise of protocol with different , which indicates that the tolerable excess noise increases with the increase of .\(a ) shows the secret key rate as a function of the transmission distance , where , , , and . to make the secret key rate of optimal , the proper added noise is chosen , as shown in ( b ) . in ( a ), the simulation result indicates that the two - way protocol with added noise has higher secret key rate than that without added noise .especially , the achievable transmission distance of the two - way protocol is over 60 km when is , which is much longer than that of the one - way protocol .the reason is that the added noise not only lowers the mutual information between alice and bob , but also lowers that between bob and eve .when the effect on eve is more than that on alice and bob , the secret key rate is enhanced . when , the two channels are correlated .( a ) shows the secret key rate as a function of the transmission distance for protocol with different . considering the practical experiment , we choose , , , and .to make optimal , the proper added noise is chosen , as shown in ( b ) . 
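for the distance curves discussed next , the only channel ingredients made explicit here are the 0.2 db / km fibre loss and the excess noise referred to the channel input . the short sketch below converts a fibre length into a transmittance and evaluates that noise ; the expression used for the noise referred to the input is the usual one - way cv - qkd convention and is our assumption , since the explicit formula is not reproduced in the text .

import numpy as np

def fiber_transmittance(distance_km, loss_db_per_km=0.2):
    # transmittance of a fibre of the given length with 0.2 db/km loss
    return 10.0 ** (-loss_db_per_km * distance_km / 10.0)

def channel_noise_referred_to_input(T, excess_noise):
    # channel-added noise referred to the input, in shot-noise units, following
    # the standard convention chi = (1 - T)/T + excess_noise; this convention is
    # an assumption, not taken from the text
    return (1.0 - T) / T + excess_noise

for d in (10.0, 30.0, 60.0):
    T = fiber_transmittance(d)
    print(d, round(T, 4), round(channel_noise_referred_to_input(T, 0.01), 3))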
in ( a ) , the simulation result indicates that the achievable distance of secret key distribution decreases with the increase of . the reason is that the correlation between the two channels changes the excess noise in the backward channel , which affects the secret key rate . ( a ) shows that the decrease of the secret key rate induced by this effect is small . in addition , compared with the one - way protocols in ( a ) , although the transmission distance of the two - way protocol decreases slightly due to the correlation , the performance of the two - way protocol is still far beyond that of the one - way protocols . ( b ) shows that the optimal added noise decreases with the decrease of . in the following , we compare the two - way protocol with the one - way protocols in high modulation . figures [ fig6ek](a ) and ( b ) show the tolerable excess noise and the secret key rate as a function of the transmission distance for high modulation , where , , and . the proper added noise is chosen to make the tolerable excess noise and the secret key rate of protocol optimal . the numerical simulation result indicates that both the tolerable excess noise and the secret key rate of the two - way protocol with added noise are much higher than those of the one - way cv - qkd protocols for high modulation . in conclusion , we improve the two - way cv - qkd protocol by adding proper noise on bob 's detection side . the security of the two - way cv - qkd protocol with added noise in homodyne detection against general collective attack is analysed . the numerical simulation under the collective entangling cloner attack is given for the correlated and the uncorrelated channels . the simulation result indicates that although the secret key rate for the correlated channels is slightly lower than that for the uncorrelated channels when eve introduces modes of equivalent variance into the two channels , the performance of the two - way protocol is still far beyond that of the one - way protocols . in addition , the properly added noise is beneficial for enhancing the secret key rate and the tolerable excess noise of the two - way cv qkd . the optimal tolerable excess noise of the two - way cv qkd with added noise is much higher than that of the one - way cv qkd . with the reasonable reconciliation efficiency of , the two - way cv qkd with added noise allows the distribution of secret keys over 60 km fibre distance , which is difficult to reach for the one - way cv - qkd protocols with gaussian modulation in experiment . this work is supported by the national science fund for distinguished young scholars of china ( grant no . 61225003 ) , national natural science foundation of china ( grant no . 61101081 ) , and the national hi - tech research and development ( 863 ) program . we use to denote the elements of the corresponding covariance matrix of a -mode state . the symplectic invariants of for are defined as where ( standing for the pauli matrix ) and is the principal minor of order of the matrix which is the sum of the determinants of all the submatrices of . the symplectic eigenvalues of the matrix corresponding to a four - mode state are the solutions of a fourth - order equation in the symplectic invariants , where . from equation ( [ rhoab ] ) , the covariance matrix of the modes can be obtained by permuting the corresponding elements of .
applying a unitary transformation to equation ( [ rhoab ] ) , we can obtain where and /t_n$ ] , for . therefore , the eigenvalues of are , where are the eigenvalues of given by equation ( [ lamda1234 ] ) . the symplectic invariants of are denoted as for j=1 ... 5 . it can be proved that . therefore , one of the eigenvalues of is 1 and the others have the same form as equation ( [ lamda1234 ] ) , which requires the replacement . scarani v , bechmann - pasquinucci h , cerf n j , dušek m , lütkenhaus n and peev m 2009 _ rev . mod . phys . _ * 81 * 1301 weedbrook c , pirandola s , lloyd s and ralph t c 2010 _ phys . rev . lett . _ * 105 * 110501 shen y , yang j and guo h 2009 _ j. phys . b : at . mol . opt . phys . _ * 42 * 235506 shen y , peng x , yang j and guo h 2011 _ phys . rev . a _ * 83 * 052304 | we propose an improved two - way continuous - variable quantum key distribution ( cv qkd ) protocol by adding proper random noise on the receiver 's homodyne detection , the security of which is analysed against general collective attacks . the simulation result under the collective entangling cloner attack indicates that although the correlation between the two channels slightly decreases the secret key rate relative to the uncorrelated case , the performance of the two - way protocol is still far beyond that of the one - way protocols . importantly , the added noise in detection is beneficial for the secret key rate and the tolerable excess noise of this two - way protocol . with the reasonable reconciliation efficiency of , the two - way cv qkd with added noise allows the distribution of secret keys over 60 km fibre distance . |
in classification problems , the goal is to predict output labels for given input vectors . for this purpose ,a decision function defined on the input space is estimated from training samples .the output value of the decision function is used for the label prediction . in binary classification problems ,the label is predicted by the sign of the decision function .many learning algorithms use loss functions to measure the penalty of misclassifications .the decision function minimizing the empirical mean of the loss function over training samples is employed as the estimator .for example , hinge loss , exponential loss and logistic loss are used for support vector machine ( svm ) , adaboost and logistic regression , respectively . especially in the binary classification tasks , statistical properties of learning algorithms based on loss functions are well - understood due to intensive recent works .see for details . as another approach, the maximum - margin criterion is also applied for the statistical learning . under the maximum - margin criterion , the best separating hyperplane between the two output labelsis employed as the decision function . in hard - margin svm , a convex - hull of input vectors for each binary labelis defined , and the maximum - margin between the two convex - hulls is considered . for the non - separable case , -svm provides a similar picture . in -svm , the so - called reduced convex - hull which is a subset of the original convex - hullis used for the learning .a reduced convex - hull is defined for each label , and the best separating hyperplane between the two reduced convex - hulls is employed as the decision function .not only polyhedral sets such as the convex - hull of finite input points but also ellipsoidal sets are applied for classification problems . in this paper ,the set used in the maximum - margin criterion is referred to as _ uncertainty set_. this term is borrowed from robust optimization in mathematical programming .there are some works in which the statistical properties of the learning based on the uncertainty set are studied .for example , proposed minimax probability machine ( mpm ) using the ellipsoidal uncertainty sets , and studied statistical properties under the worst - case setting . in the statistical learning using uncertainty set ,the main concern is to develop optimization algorithms under the maximum margin criterion .so far , statistical properties of the learning algorithm using uncertainty sets have not been intensively studied compared to the learning using loss functions .the main purpose of this paper is to study the learning algorithm using the uncertainty set .we focus on the relation between the loss function and the uncertainty set .we show that the uncertainty set is described by using the conjugate function of the loss function . 
for given uncertainty set ,we construct the corresponding loss function .we study the statistical properties of the learning algorithm using the uncertainty set by applying theoretical results on the loss function approach .then , we establish the statistical consistency of learning algorithms using the uncertainty set .we point out that in general the maximum margin criterion for a fixed uncertainty set does not provide accurate decision functions .we need to introduce a parametrized uncertainty set by the one - dimensional parameter which specifies the size of the uncertainty set .we show that a modified maximum margin criterion with the parametrized uncertainty set recovers the statistical consistency .the paper is organized as follows . in section [ sec : preliminaries ] , we introduce the existing method based on the uncertainty set . in section [ sec : loss_and_uncertainty ] , we investigate the relation between loss functions and uncertainty sets .section [ sec : revision_uncertainty_set ] is devoted to illustrate a way of revising the uncertainty set to recover nice statistical properties . in section[ sec : learning_algorithm ] , we present a kernel - based learning algorithm with uncertainty sets . in section [ sec : statistical_properties ] , we prove that the proposed algorithm has the statistical consistency .numerical experiments are shown in section [ sec : numerical_studies ] .we conclude in section [ sec : conclusion ] .some proofs are shown in appendix .we summarize some notations to be used throughout the paper .the indicator function is denoted as , i.e. , equals if is true , and otherwise .the column vector in the euclidean space is described in bold face .the transposition of is denoted as .the euclidean norm of the vector is expressed as .for a set in a linear space , the convex - hull of is denoted as or .the number of elements in the set is denoted as .the expectation of the random variable w.r.t .the probability distribution is described as ] , when it is clear from the context .the set of all measurable functions on the set is denoted by or for short .the supremum norm of is denoted as .for the reproducing kernel hilbert space , is the norm of defined from the inner product on .we define as the input space and as the set of binary labels .suppose that the training samples are drawn i.i.d . according to a probability distribution on .the goal is to estimate a decision function from a set of functions , such that the sign of provides an accurate prediction of the unknown binary label associated with the input under the probability distribution . in other word , for the estimated decision function , the probability of is expected to be as small as possible . in this article , the composite function of the sign function and the decision function , , is referred to as classifier . in binary classification problems ,the prediction accuracy of the decision function is measured by the 0 - 1 loss which equals when the sign of is different from and otherwise .the average prediction performance of the decision function is evaluated by the expected 0 - 1 loss , i.e. , .\end{aligned}\ ] ] the bayes risk is defined as the minimum value of the expected 0 - 1 loss over all the measurable functions on , bayes risk is the lowest achievable error rate under the probability .given the set of training samples , , the empirical 0 - 1 loss is denoted by the subscript in is dropped if it is clear from the context . 
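as a concrete reading of these definitions , the empirical 0 - 1 loss of a linear decision function on a toy data set can be computed as follows ( a minimal sketch with made - up data ; ties at zero are counted as errors ) .

import numpy as np

def empirical_01_loss(w, b, X, y):
    # fraction of samples whose label disagrees with the sign of w.x + b
    return float(np.mean(y * (X @ w + b) <= 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
print(empirical_01_loss(np.array([1.0, 0.0]), 0.0, X, y))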
in general , minimization of is considered as a hard problem .the main difficulty is considered to come from non - convexity of the 0 - 1 loss as the function of .hence , many learning algorithms use a surrogate loss of the 0 - 1 loss in order to make the computation tractable .for example , svm uses the hinge loss , , and adaboost uses the exponential loss , .both the hinge loss and the exponential loss are convex in , and they provide an upper bound of the 0 - 1 loss .thus , the minimizer under the surrogate loss is also expected to minimize the 0 - 1 loss .the quantitative relation between the 0 - 1 loss and the surrogate loss was studied by .to avoid overfitting of the estimated decision function to training samples , the regularization is considered . by adding the regularization term such as the squared norm of the decision function to the empirical surrogate loss, the complexity of the estimated classifier is restricted .the balance between the regularization term and the surrogate loss is adjusted by the regularization parameter .then , the deviation of the empirical 0 - 1 loss and the expected 0 - 1 loss is controlled by the regularization .when both the regularization term and the surrogate loss are convex , the computational tractability of the statistical learning is retained . besides statistical learning using loss functions , there is another approach to the classification problems , i.e. , statistical learning based on the so - called _uncertainty set_. we briefly introduce the basic idea of the uncertainty set .we assume that is a subset of euclidean space . in robust optimization problems , the uncertainty set describes uncertainties or ambiguities included in optimization problems .the parameter in the optimization problem may not be precisely determined . instead of the precise information, we have an uncertainty set which probably includes the parameter in the optimization problem .the worst - case setting is employed to solve the robust optimization problem with the uncertainty set .the statistical learning with uncertainty set is considered as an application of the robust optimization to classification problems . in classification problems ,the uncertainty set is designed such that most training samples are included in the uncertainty set with high probability .we prepare an uncertainty set for each binary label . for example , and are the confidence regions such that the conditional probabilities , and , are equal to . as the other example , the uncertainty set ( resp . ) consists of the convex - hull of input vectors in training samples having the positive ( resp .negative ) label .the convex - hull of data points is used in hard margin svm . the ellipsoidal uncertainty set is also used for the robust classification under the worst - case setting . based on the uncertainty set , we estimate the linear decision function . here , we consider the _ minimum distance problem _ let and be optimal solutions of .then , the normal vector of the decision function , , is estimated by , where is a positive real number .figure [ fig : uncertaintyset_approach ] illustrates the estimated decision boundary . when both and are compact subsets satisfying , the estimated normal vector can not be the null vector .the minimum distance problem appears in the hard margin svm , -svm and the learning algorithms proposed by . in section [ subsec : uncertainty_set_nusvm ] , we briefly introduce the relation between -svm and the minimum distance problem . 
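the minimum distance problem between the two uncertainty sets is , when each set is the convex hull of the corresponding training inputs , a small quadratic program over two probability vectors . the sketch below ( illustrative , using scipy ; not taken from the paper ) solves it on separable toy data and recovers the normal vector as the difference of the two nearest points .

import numpy as np
from scipy.optimize import minimize

def nearest_points_of_hulls(Xp, Xn):
    # minimize || Xp^T a - Xn^T b ||^2 over probability vectors a and b,
    # i.e. the minimum distance problem between the two convex hulls
    mp, mn = len(Xp), len(Xn)

    def objective(t):
        d = Xp.T @ t[:mp] - Xn.T @ t[mp:]
        return d @ d

    cons = [{"type": "eq", "fun": lambda t: np.sum(t[:mp]) - 1.0},
            {"type": "eq", "fun": lambda t: np.sum(t[mp:]) - 1.0}]
    t0 = np.concatenate([np.full(mp, 1.0 / mp), np.full(mn, 1.0 / mn)])
    res = minimize(objective, t0, bounds=[(0.0, 1.0)] * (mp + mn),
                   constraints=cons, method="SLSQP")
    return Xp.T @ res.x[:mp], Xn.T @ res.x[mp:]

# separable toy data: two well separated gaussian clouds
rng = np.random.default_rng(0)
Xp = rng.normal(loc=[+3.0, 0.0], size=(30, 2))
Xn = rng.normal(loc=[-3.0, 0.0], size=(30, 2))
zp, zn = nearest_points_of_hulls(Xp, Xn)
w = zp - zn                 # direction of the maximum-margin separating hyperplane
print(w, np.linalg.norm(zp - zn))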
in minimax probability machine ( mpm )proposed by , the other criterion is applied to estimate the linear decision function , though the ellipsoidal uncertainty set plays an important role also in their algorithm . and . ]the minimum distance problem is equivalent with the maximum margin principle .when the bias term in the linear decision function is estimated such that the decision boundary bisects the line segment connecting and , the estimated decision boundary achieves the maximum margin between the uncertainty sets , . according to ,we explain how the maximum margin is connected with the minimum distance .suppose that and are convex subsets and that holds .then , the margin of two uncertainty sets along the direction of is given as the maximum margin criterion is described as the equality above follows from the minimum norm duality .we study the relation between loss functions and uncertainty sets .first , we introduce the relation in -svm according to and .then , we present an extension of -svm to investigate a generalized relation between loss functions and uncertainty sets .suppose that the input space is a subset of euclidean space .we consider the linear decision function , , where the normal vector and the bias term are to be estimated based on observed training samples . by applying the kernel trick , we obtain rich statistical models for the decision function , while keeping the computational tractability . in -svm ,the classifier is estimated as the optimal solution of where is a prespecified constant which has the role of the regularization parameter . as pointed out, the parameter controls the margin errors and number of support vectors . in -svm ,a variant of the hinge loss , , is used as the surrogate loss . in the original formulation of -svm , the non - negativity constraint , ,is introduced . as shown by , we can confirm that the non - negativity constraint is redundant .indeed , for an optimal solution , we have where the last inequality comes from the fact that the parameter , , is a feasible solution of . as a result, we have for .we briefly show that the dual problem of yields the minimum distance problem in which the reduced convex - hulls of training samples are used as uncertainty sets .see for details .the problem is equivalent with then , the lagrangian function is defined as where are non - negative lagrange multipliers . for the observed training samples ,we define and as the set of sample indices for each label , i.e. , by applying min - max theorem , we have where the last equality is obtained by changing the variable from to . for the positive ( resp .negative ) label , we introduce the uncertainty set ( reps . ) defined by the reduced convex - hull , i.e. 
, when the upper limit of is less than one , the reduced convex - hull is a subset of the convex - hull of training samples .we find that solving the problem is identical to solving the minimum distance problem under the uncertainty set of the reduced convex - hulls , the representation based on the minimum distance problem provides an intuitive understanding of the learning algorithm .we consider general loss functions , and study the relation between the loss function and the corresponding uncertainty set .again , the decision function is defined as on .let be a convex and non - decreasing function .for the training samples , , we propose a learning method in which the decision function is estimated by solving the regularization effect is introduced by the constraint , where is the regularization parameter which may depend on the sample size .the statistical learning using is regarded as an extension of -svm . to see this , we define .let be an optimal solution of for a fixed . by comparing the optimality conditions of and, we can confirm that the problem with has the same optimal solution as -svm . in the similar way as -svm, we derive the uncertainty set associated with the loss function in .we introduce the slack variables satisfying the inequalities .then , the lagrangian function of is given as where and are the non - negative lagrange multipliers .the optimality conditions , and the non - negativity of lead to the constraint on lagrange multipliers , we define the conjugate function of as then , by applying min - max theorem , we have in section [ sec : statistical_properties ] , we present a rigorous proof that under some assumptions on , the min - max theorem works in the above lagrangian function , i.e. , there is no duality gap . for each binary label , we define the parametrized uncertainty sets , ] , by =\bigg\{\sum_{i\in{m_o}}\alpha_i\x_i \,:\ , \alpha_i\geq0 , \,\sum_{i\in{m_o}}\alpha_i=1,\ \frac{1}{m}\sum_{i\in{m_o}}\ell^*(m\alpha_i)\leq{c } \bigg\}. \label{eqn : uncertainty - set}\end{aligned}\ ] ] then , the optimization problem in is represented by ,\,\z_n\in\ucal_n[c_n],\ c_p,\,c_n\in\rbb .\label{eqn : rcm - representation}\end{aligned}\ ] ] let and be the optimal solution of and in .let be an optimal solution of in .the saddle point of the above min - max problem provides the relation between the , and . some calculation yields that , when holds , any vector such that satisfies the kkt condition of . on the other hand ,when holds , is given by .hence , an optimal solution of the normal vector in the linear decision function is given as we show a sufficient condition that the equality holds .suppose that \cap\ucal_n[c_n] ] and ] is the optimal choice of the objective function in . in -svm with a small , the reduced convex - hulls satisfy , and hence , and hold .the bias term in the linear decision function is not directly obtained from the optimal solution of without knowing the explicit form of the loss function . a simple way of estimating the bias term is to choose , which provides the decision boundary bisecting the line segment connecting and . 
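As an illustration of the minimum distance problem over reduced convex hulls that the dual derivation above leads to, the following sketch solves it directly with an off-the-shelf constrained optimizer. The weight cap 2/(m*nu) follows the reduced-convex-hull constraint quoted for the nu-SVM case; the point clouds and the value of nu are made up, and the bias is chosen so that the boundary bisects the segment connecting the two optimal points, as suggested in the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Two point clouds playing the role of the positive / negative training inputs.
Xp = rng.normal(loc=[+1.5, 0.0], size=(30, 2))
Xn = rng.normal(loc=[-1.5, 0.0], size=(30, 2))
m = len(Xp) + len(Xn)
nu = 0.4
cap = 2.0 / (m * nu)                      # upper bound on each weight in the reduced convex hull

def objective(a):
    ap, an = a[:len(Xp)], a[len(Xp):]
    d = Xp.T @ ap - Xn.T @ an             # z_p - z_n
    return d @ d

cons = [{"type": "eq", "fun": lambda a: a[:len(Xp)].sum() - 1.0},
        {"type": "eq", "fun": lambda a: a[len(Xp):].sum() - 1.0}]
bounds = [(0.0, cap)] * m
a0 = np.concatenate([np.full(len(Xp), 1.0 / len(Xp)),    # feasible start: uniform weights
                     np.full(len(Xn), 1.0 / len(Xn))])

res = minimize(objective, a0, method="SLSQP", bounds=bounds, constraints=cons)
ap, an = res.x[:len(Xp)], res.x[len(Xp):]
zp, zn = Xp.T @ ap, Xn.T @ an
w = zp - zn                               # (up to a positive scalar) the estimated normal vector
b = -0.5 * w @ (zp + zn)                  # boundary bisects the segment connecting z_n and z_p
print("minimum distance:", np.linalg.norm(w), "  normal:", w, "  bias:", b)
```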
in the learning algorithm proposed in section [ sec : learning_algorithm ], the bias term is estimated by minimizing the error rate since the estimated normal vector is substituted in the above objective function , the optimization is tractable .based on the argument above , we propose the learning algorithm using uncertainty sets in figure [ fig : simple_learning_algorithm ] .it is straightforward to apply the kernel method to the algorithm . in order to study statistical properties of the learning algorithm based on uncertainty sets , we need more elaborate description on the algorithm .details are presented in section [ sec : learning_algorithm ] .we show some examples of uncertainty sets associated with popular loss functions . in the following examples , the index sets , and , are defined by for the training samples , and let and be and , respectively .as explained above , the problem is reduced to -svm by defining .the conjugate function of is given as ,\\ \infty , & \alpha\not\in[0,2/\nu],\\ \end{cases } \end{aligned}\ ] ] and the associated uncertainty set is defined by &= \begin{cases } \displaystyle \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\,:\,\sum_{i\in{m_o}}\alpha_i=1,\ , 0\leq\alpha_i\leq\frac{2}{m\nu},\,i\in{m_o } \bigg\ } , & c\geq0,\\ \displaystyle \emptyset , & c<0 . \end{cases}\quad\end{aligned}\ ] ] for , the uncertainty set consists of the reduced convex - hull of training samples , and it does not depend on the parameter .in addition , the negative is infeasible .hence , in the problem , optimal solutions of and are given as , and the problem is reduced to the simple minimum distance problem .[ example : uncertainty_truncated_quadratic ] now consider .the conjugate function is for , we define and as the empirical mean and the empirical covariance matrix of the samples , i.e. , suppose that is invertible .then , the uncertainty set corresponding to the truncated quadratic loss is given as & = \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\,:\,\sum_{i\in{m_o}}\alpha_i=1,\ , \alpha_i\geq{0},\,i\in{m_o},\,\sum_{i\in{m_o}}\alpha_i^2\leq \frac{4(c+1)}{m } \bigg\}\\ & = \bigg\ { \z\in\conv\{\x_i : i\in{m_o}\}\,:\ , ( \z-\bar{\x}_o)^t\widehat{\sigma}_o^{-1}(\z-\bar{\x}_o)\leq\frac{4(c+1)m_o}{m } \bigg\}. \end{aligned}\ ] ] to prove the second equality , let us define the matrix . for satisfying the constraints , the equality holds , where .then , the singular value decomposition of the matrix and the constraint yield the second equality . a similar uncertainty setis used in minimax probability machine ( mpm ) and maximum margin mpm , though the constraint , , is not imposed in these learning methods .the loss function is used in adaboost .the conjugate function is equal to hence , the corresponding uncertainty set is defined as = \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\,:\,\sum_{i\in{m_o}}\alpha_i=1,\ , \alpha_i\geq{0},\,i\in{m_o},\ , \sum_{i\in{m_o}}\alpha_i\log\frac{\alpha_i}{1/m_o}\leq{}c+1+\log\frac{m_o}{m}\bigg\ } \end{aligned}\ ] ] for . in the uncertainty set , the kullback - leibler divergence from the weight to the uniform weight is bounded above . 
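Since the closed-form conjugates displayed above were damaged in extraction, the following sketch recovers conjugates numerically via the Legendre-Fenchel transform ell*(a) = sup_z (a z - ell(z)) on a grid and checks them against hand-computed expressions for two illustrative convex, non-decreasing losses (an exponential loss and a truncated-quadratic-type loss chosen by us, not necessarily the exact parameterizations used in the paper). The same routine can then be used to evaluate the weight constraint (1/m) * sum_i ell*(m alpha_i) <= c that defines the uncertainty sets.

```python
import numpy as np

def conjugate_numeric(loss, alpha, z_grid):
    """Legendre-Fenchel transform ell*(a) = sup_z (a*z - ell(z)), approximated on a grid."""
    return np.max(alpha[:, None] * z_grid[None, :] - loss(z_grid)[None, :], axis=1)

z = np.linspace(-10.0, 10.0, 4001)
alpha = np.linspace(0.05, 3.0, 60)

# Illustrative convex, non-decreasing losses (our own choices):
exp_loss = lambda z: np.exp(z)
tq_loss  = lambda z: np.maximum(0.0, 1.0 + z) ** 2        # a truncated-quadratic-type loss

# Closed forms of the conjugates on alpha > 0 (standard calculus results):
exp_conj = alpha * np.log(alpha) - alpha
tq_conj  = alpha ** 2 / 4.0 - alpha

print("max error, exponential loss    :",
      np.max(np.abs(conjugate_numeric(exp_loss, alpha, z) - exp_conj)))
print("max error, truncated quadratic :",
      np.max(np.abs(conjugate_numeric(tq_loss, alpha, z) - tq_conj)))
```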
in this section, we derived parametrized uncertainty sets associated with convex loss functions .inversely , if the uncertainty set is represented as the form of , there exists the corresponding loss function .when we consider statistical properties of the classifier estimated based on the uncertainty set , we can study the equivalent estimator derived from the corresponding loss function .we have many theoretical tools to analyze such estimators .however , if the uncertainty set does not have the expression of , the corresponding loss function would not exist . in this case , we can not apply the standard theoretical tools to understand statistical properties of learning algorithms based on such uncertainty sets .one way to remedy the drawback is to revise the uncertainty set so as to possess the corresponding loss function .the next section is devoted to study a way of revising the uncertainty set .given a parametrized uncertainty set , generally there does not exist the loss function which corresponds to the uncertainty set . in this section , we present a way of revising the uncertainty set such that there exists a corresponding loss function .we consider two kinds of representations for parametrized uncertainty sets : one is vertex representation , and the other is level - set representation .let and be index sets defined in , and we define and .for , let be a closed , convex , proper function on , and be the conjugate function of .the argument of is represented by .vertex representation _ of the uncertainty set is defined as = \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\,:\ , l_o^*(\alphabold_o)\leq{c } \bigg\},\quad{}o\in\{p , n\}. \label{eqn : uncertainty - vertex - rep}\end{aligned}\ ] ] in example [ example : uncertainty_truncated_quadratic ] , the function is employed . on the other hand ,let us define as a closed , convex , proper function , and be the conjugate of .the _ level - set representation _ of the uncertainty set is defined by = \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\,:\ , h_o^*\big(\sum_{i\in{m_o}}\alpha_i\x_i\big)\leq{c } \bigg\},\quad{}o\in\{p , n\}. \label{eqn : uncertainty - levelset - rep } \ ] ] the function may depend on the population distribution .we suppose that does not depend on the sample points , . in example[ example : uncertainty_truncated_quadratic ] , the second expression of the uncertainty set involves the convex function .this function does not satisfy the assumption , since depends on training samples via and .instead , the function with the population mean and the population covariance matrix meets the condition .when and are replaced with the estimated parameters based on a prior knowledge or a set of samples independent of the training samples , , the function with the estimated parameters still satisfies the condition we imposed above . in popular learning algorithms using uncertainty sets such as hard - margin svm , -svm and maximum margin mpm ,the decision function is estimated by solving the minimum distance problem with ] , where and are prespecified constants . in order to investigate the statistical properties of the learning algorithm using uncertainty sets , we consider the primal expression of a variant of the minimum distance problem . in section [ sec : loss_and_uncertainty ] , we derived the problem as the dual form of . 
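The two representations introduced in the next section can be contrasted with a small membership test. In the sketch below, the vertex representation constrains the weights alpha of z = sum_i alpha_i x_i, while the level-set representation constrains the point z itself through a Mahalanobis ball around the empirical mean, as in the truncated-quadratic example; the loss conjugate, the constant c and the radius normalization are illustrative choices, and the containment checked at the end (vertex-feasible weights always yield level-set-feasible points) follows from the projection argument sketched in the text.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 2))              # training inputs of one class
m_o, m = len(X), 80                       # class size and (assumed) total sample size
c = 0.5
ell_star = lambda a: a ** 2 / 4.0 - a     # conjugate of the illustrative truncated-quadratic loss

# Vertex representation: constraint on the weights alpha of z = sum_i alpha_i x_i.
def vertex_feasible(alpha):
    return ell_star(m * alpha).sum() / m <= c        # reduces to sum_i alpha_i^2 <= 4(c+1)/m

# Level-set representation: constraint on the point z itself (here a Mahalanobis ball).
xbar = X.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(X.T, bias=True))    # empirical covariance, 1/m_o normalization
radius2 = 4.0 * (c + 1.0) * m_o / m                  # radius suggested by the text (up to normalization)

def levelset_feasible(z):
    return (z - xbar) @ Sigma_inv @ (z - xbar) <= radius2 + 1e-9

# Every point built from vertex-feasible weights should also pass the level-set test.
alphas = rng.dirichlet(np.ones(m_o), size=2000)
kept = [a for a in alphas if vertex_feasible(a)]
print("vertex-feasible weight vectors        :", len(kept))
print("all mapped points in Mahalanobis ball :",
      all(levelset_feasible(X.T @ a) for a in kept))
```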
here, we consider the following optimization problem to obtain the loss function corresponding to given uncertainty sets having the vertex representation , \cap\conv\{\x_i : i\in{m_p}\},\\ \displaystyle \phantom{\st}\ \z_n\in\ucal_n[c_n]\cap\conv\{\x_i : i\in{m_n}\}. \end{array } \label{eqn : opt - based - on - uncertainty_set}\end{aligned}\ ] ] in the above problem the constraints , , are added , since the corresponding uncertainty set has the same constraint .we derive the primal problem corresponding to via the min - max theorem .a brief calculation yields that is equivalent to if there is no duality gap , the corresponding primal formulation of is given as where is defined as for . in the primal expression , and regarded as the loss function for the decision function on training samples . in general , however , the loss function is not represented as the empirical mean over training samples .thus , we can not apply the standard theoretical tools to investigate statistical properties such as bayes risk consistency for the learning algorithm based on or . on the other hand ,if the problem is described as the empirical loss minimization , we can study statistical properties of the algorithm by applying the statistical theory developed by . to linkthe uncertainty set approach with the empirical loss minimization , we consider a revision of the uncertainty set .we propose a way of revising uncertainty sets such that the primal form is represented as minimization of the empirical mean of a loss function .remember that the additivity of the function is kept unchanged in the conjugate function , i.e. , .revision of uncertainty set defined by vertex representation : : : suppose that the uncertainty set is described by . for , we define -dimensional vectors and . for the convex function , we define by the revised uncertainty set ,\,o\in\{p , n\} ] is defined as =\bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\,:\ , \sum_{i\in{m_o}}\alpha_i=1,\,\alpha_i\geq 0,\,i\in{m_o},\ , \frac{1}{m}\sum_{i\in{m_o}}\bar{\ell}^*(\alpha_im)\leq{c},\ , \bigg\}. \end{aligned}\ ] ] we apply the parallel shift of training samples so as to be or .we explain the reason why the revised uncertainty set is defined as above . in the revision ,the uncertainty set is kept unchanged , when the function is described in the additive form .the precise description is presented in the following theorem .[ theorem : conservation_law ] let be convex functions , and be the function defined by for given and .suppose that is a closed , convex , proper function such that and for hold . 1 .suppose that the equality holds for all non - negative .then , the equality holds .2 . suppose that the equality holds for all .then , the equality holds .we prove the first statement . from the definition of and the assumption on , the equality holds for .suppose .the assumption on and leads to .hence , we have .the second statement of the theorem is straightforward .theorem [ theorem : conservation_law ] implies that the transformation of to is a projection onto the set of functions with the additive form .in addition , the second statement of theorem [ theorem : conservation_law ] denotes that the projection is uniquely determined when we impose the condition that the values on the diagonal are unchanged .next , we explain the validity of the formula .we want to find a function such that is close to in some sense .we substitute into . in the large sample limit , is approximated by .suppose that is represented as .then , we obtain . 
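The separability property invoked in the passage above, namely that the conjugate of an additive function L(z) = sum_i ell(z_i) is the sum of the one-dimensional conjugates ell*(alpha_i), can be verified numerically. The minimal sketch below compares a two-dimensional grid Legendre transform of an additive function with the sum of two one-dimensional transforms; the exponential loss is an arbitrary choice.

```python
import numpy as np

ell = lambda z: np.exp(z)                 # any convex loss works; exponential for concreteness

z = np.linspace(-8.0, 4.0, 1201)
def conj1(a):                             # 1-D Legendre transform on a grid
    return np.max(a * z - ell(z))

def conj2(a1, a2):                        # 2-D Legendre transform of L(z1, z2) = ell(z1) + ell(z2)
    Z1, Z2 = np.meshgrid(z, z, indexing="ij")
    return np.max(a1 * Z1 + a2 * Z2 - ell(Z1) - ell(Z2))

for a1, a2 in [(0.3, 1.1), (0.8, 0.8), (2.0, 0.1)]:
    print(f"a = ({a1}, {a2}):  L*(a) = {conj2(a1, a2):.6f},  "
          f"ell*(a1) + ell*(a2) = {conj1(a1) + conj1(a2):.6f}")
```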
for the revised uncertainty sets ] , the corresponding primal problem of ,\ \z_n\in\bar{\ucal}_n[c_n ] \label{eqn : revised - uncertainty - problem}\end{aligned}\ ] ] is given as the revision of the uncertainty sets leads to the empirical mean of the revised loss function .when we study statistical properties of the estimator given by the optimal solution of , we can apply the standard theoretical tools , since the objective in the primal expression is described by the empirical mean of the revised loss functions .we show some examples to illustrate how the revision of the uncertainty set works .[ example : quad - uncertainty - set - to - loss ] let be the convex function , where is a positive definite matrix .the revised function defined by is given as for .then , we have when both and are the identity matrix , the equality holds .let be .then , the revised uncertainty set is given as & = \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i\ , : \sum_{i\in{m_o}}\alpha_i=1,\,\alpha_i\geq0\,(i\in{m_o } ) , \sum_{i\in{m_o}}\alpha_i^2 \leq \frac{cm}{k } \bigg\}. \end{aligned}\ ] ] for , let and be the empirical mean and the empirical covariance matrix , if is invertible , we have = \bigg\ { \z\in\conv\{\x_i : i\in{m_o}\}\,:\ , ( \z-\bar{\x}_o)^t\widehat{\sigma}_o^{-1}(\z-\bar{\x}_o)\leq \frac{c{}mm_o}{k } \bigg\}. \end{aligned}\ ] ] in the learning algorithm based on the revised uncertainty set , the estimator is obtained by solving ,\,\z_n\in\bar{\ucal}_n[c_n ] \\ & \longleftrightarrow \min_{c_p , c_n,\z_p,\z_n } c_p+c_n+\frac{m^2\lambda}{4k}\|\z_p-\z_n\|\ \\st\ \z_p\in\bar{\ucal}_p\bigg[\frac{4c_p k}{m^2}\bigg],\ , \z_n\in\bar{\ucal}_n\bigg[\frac{4c_n k}{m^2}\bigg ] .\end{aligned}\ ] ] the corresponding primal expression is given as [ example : quad - levelset - uncertainty - set - to - loss ] we define for by where is the mean vector of the input vector conditioned on each label and is a positive definite matrix . in practice ,the mean vector is estimated by using a prior knowledge which is independent of the training samples .suppose that .then , for , the revision of leads to where and are constant numbers .thus , we have & = \bigg\ { \sum_{i\in{m_o}}\alpha_i\x_i \,:\ , \sum_{i\in{m_o}}\alpha_i=1,\,\alpha_i\geq0\ , ( i\in{m_o}),\ , \sum_{i\in{m_o}}\alpha_i^2\leq \frac{c - b_1}{mb_2 } \bigg\}\\ & = \bigg\ { \z\in\conv\{\x_i : i\in{m_o}\ } \,:\ , ( \z-\bar{\x}_o)^t\widehat{\sigma}_o^{-1}(\z-\bar{\x}_o)\leq m_o\cdot\frac{c - b_1}{mb_2 } \bigg\ } , \end{aligned}\ ] ] where and are the estimators of the mean vector and the covariance matrix based on training samples .the corresponding loss function is obtained in the same way as example [ example : quad - uncertainty - set - to - loss ] . figure [ fig : revised - ellipsoidal_uncertainty - set ]illustrates an example of the revision of the uncertainty set . in the left panel, the uncertainty set does not match the distribution of the training samples .the revised uncertainty set in the right panel seems to well approximate the dispersal of the training samples . [ cols="^,^ " , ]in this paper , we studied the relation between the loss function approach and the uncertainty set approach in binary classification problems .we showed that these two approaches are connected to each other by the conjugate property based on the legendre transformation .given a loss function , there exists a corresponding parametrized uncertainty set . 
in general , however , uncertainty set does not correspond to the empirical loss function .we presented a way of revising the uncertainty set such that there exists an empirical loss function .then , we proposed a modified maximum - margin algorithm based on the parametrized uncertainty set .we proved the statistical consistency of the learning algorithm .numerical experiments showed that the revision of the uncertainty set often improves the prediction accuracy of the classifier . in our proof of the statistical consistency ,the hinge loss used in -svm is excluded . proved the statistical consistency of -svm with a nice choice of the regularization parameter .we are currently investigating the relaxation of the assumptions of our theoretical result so as to include the hinge loss function and other popular loss functions such as the logistic loss .as for the statistical modeling , the relation between the loss function approach and the uncertainty set approach can be a useful tool . in optimization and control theory ,the modeling based on the uncertainty set is frequently applied to the real - world data ; see the modeling in robust optimization and related works .we believe that the learning algorithm with the revision of the uncertainty set can bridge a gap between statistical modeling based on some intuition and nice statistical properties of the estimated classifiers .tk was partially supported by grant - in - aid for young scientists ( 20700251 ) . atwas partially supported by grant - in - aid for young scientists ( 23710174 ) .ts was partially supported by mext kakenhi 22700289 and the aihara project , the first program from jsps , initiated by cstp .first , we prove the existence of an optimal solution . according to the standard argument on the kernel estimator, we can restrict the function part to be the form of then , the problem is reduced to the finite - dimensional problem , let be the objective function of .let us define be the linear subspace in spanned by the column vectors of the gram matrix .we can impose the constraint , since the orthogonal complement of does not affect the objective and the constraint in .we see that assumption [ assump : universal_kernel ] and the reproducing property yield the inequality . due to this inequality and the assumptions on the function , the objective function is bounded below by hence , for any real number , the inclusion relation holds .note that the vector satisfying and is restricted to a compact subset in .we shall prove that the subset is compact , if they are not empty .we see that the two sets above are closed subsets , since both and are continuous . by the variable change from to , is transformed to the convex function defined by the subgradient of diverges to infinity , when tends to infinity .in addition , is a non - decreasing and non - negative function .then , we have the same limit holds for .hence , the level set of is closed and bounded , i.e. , compact . as a result ,the level set of is also compact .therefore , the subset is also compact in .this implies that has an optimal solution .next , we prove the duality between and .since has an optimal solution , the problem with the slack variables , also has an optimal solution and the finite optimal value .in addition , the above problem clearly satisfies the slater condition ( * ? ? ? * assumption 6.2.4 ) . 
indeed , at the feasible solution , and , the constraint inequalities are all inactive for positive .hence , proposition 6.4.3 in ensures that the min - max theorem holds , i.e. , there is no duality gap .then , in the same way as , we obtain with the uncertainty set as the dual problem of .we show proofs of lemmas in section [ subsec : convergence+to+optimal_expected_loss ] .let be the subset , then we have .due to the non - negativity of the loss function , we have for given satisfying , we define the function by we derive a lower bound . since is a finite - valued convex function on , the subdifferential is given as formulas of the subdifferential are presented in theorem 23.8 and theorem 23.9 of .we prove that there exist and such that holds .since the second condition in assumption [ assumption : expectedloss_consistency ] holds for the convex function , the union includes all the positive real numbers .hence , there exist and satisfying and .then , for , the null vector is an element of .since is convex in , the minimum value of is attained at .define as a real number satisfying since is assumed , both and are less than due to the monotonicity of the subdifferential .then , the inequality holds for all and all such that .hence , for any measurable function and , we have as a result , we have .corollary 5.29 of ensures that the equality :f\in\hcal\}=\inf\{\ebb[\ell(\rho - yf(x))]:f\in{}l_0\ } \end{aligned}\ ] ] holds for any .thus , we have for any . then , the equality holds . under assumption[ assump : non - deterministic - assumption ] and assumption [ assumption : expectedloss_consistency ] , we have due to lemma [ lemma : risk_boundedness ] .then , for any , there exist and such that and hold . for all we have on the other hand, it is clear that the inequality holds .hence , eq . holds . under assumption[ assump : non - deterministic - assumption ] , the label probabilities , and , are positive .we assume that the inequalities hold . applying chernoff bound, we see that there exists a positive constant depending only on the marginal probability of the label such that holds with the probability higher than .lemma [ lemma : existence_opt_sol ] ensures that the problem has optimal solutions .the first inequality in , i.e. , , is clearly satisfied .then , we have from the reproducing property of the rkhss . the definition of the estimator and the non - negativity of yield that , we have next , we consider the optimality condition of . according to the calculus of subdifferential introduced in section 23 of , the derivative of the objective function with respect to leads to an optimality condition , the monotonicity and non - negativity of the subdifferential and the bound of lead to the above expression means that there exist numbers in the subdifferential such that the inequality holds , where denotes the -fold sum of the set . let be a real number satisfying , i.e. , all elements in are greater than .then , should be less than . in the same way , for satisfying , we have .the existence of and is guaranteed by assumption [ assumption : expectedloss_consistency ] .hence , the inequalities hold , in which is used in the second inequality .define as a real number such that inequalities in lead to hence , we can choose satisfying .suppose that holds for .then , the inequalities hold with the probability higher than for . 
by choosing an appropriate positive constant , we obtain .since holds for such that , we have the following inequality in the same way as the proof of lemma 3.4 in , hoeffding s inequality leads to the upper bound .is the direct conclusion of and .lemma [ lemma : convergence_regularized_exp_loss ] assures that , for any , there exists sufficiently large such that holds for all .thus , there exist and such that and hold for . due to the law of large numbers, the inequality holds with high probability , say , for . the boundedness property in lemma [ lemma : estimator_bound ] leads to for .in addition , by the uniform bound shown in lemma [ lemma : uniform_convergence_loss ] , the inequality holds with probability .hence , the probability such that the inequality holds is higher than for .let be .then , for any , the following inequalities hold with probability higher than for , the second inequality above is given as for a fixed such that , the loss function is classification - calibrated , since holds .hence in assumption [ assumption : loss_bayesrisk_consistency ] satisfies , for , and is continuous and strictly increasing in $ ] .in addition , for all and , the inequality -\inf_{f\in\hcal , b\in\rbb}\ebb[\ell(\rho - y(f(x)+b ) ) ] \end{aligned}\ ] ] holds .details are presented in theorem 1 and theorem 2 of .here we used the equality :f\in\hcal , b\in\rbb\ } = \inf\{\ebb[\ell(\rho - y(f(x)+b))]:f\in{}l_0,b\in\rbb\ } , \end{aligned}\ ] ] which is shown in corollary 5.29 of .hence , we have - \inf_{f\in\hcal , b\in\rbb}\ebb[\ell(\widehat{\rho}-y(f(x)+b))]\\ & = \rcal(\widehat{f}+\widehat{b},\widehat{\rho } ) - \inf_{f\in\hcal , b\in\rbb}\rcal(f+b,\widehat{\rho } ) , \end{aligned}\ ] ] since holds due to .we assumed that converges to in probability .then , for any , the inequality holds with high probability for sufficiently large .thus , converges to zero in probability .the inequality and the assumption on the function ensure that converges to in probability , when tends to infinity . as a result , for any , holds with probability higher than with respect to the probability distribution of , where satisfies for any .next , we study the relation between and . the sample size of is . for any fixed , we define the set of 0 - 1 valued functions , .the vc - dimension of equals to one . indeed , for two distinct points such that , the event such that and is impossible .hence , for any and any , the inequality holds with probability higher than with respect to the joint probability of training sample .note that depends only on , and the vc - dimension of .thus , is independent of the choice of .remember that depends only on the data set .due to the law of large numbers , the inequality holds with probability higher than with respect to the probability distribution of conditioned on .since the 0 - 1 loss is bounded , it is possible to choose independent of . 
from the uniform convergence property, the following inequality also holds with probability higher than with respect to the probability distribution of conditioned on the observation of .in addition , we have given the training samples satisfying , the inequalities hold with probability higher than with respect to the probability distribution of conditioned on the observation of .hence , as for the conditional probability , we have remember that and do not depend on .hence , as for the joint probability of and , we have the above inequality implies that converges to in probability , when and tend to infinity .for and , we can directly confirm that the lemma holds . in the following , we assume and .we consider the following optimization problem involved in , the objective function is a finite - valued convex function on , and diverges to infinity when tends to .hence , there exists an optimal solution .let be an optimal solution of .the optimality condition is given as we assumed that both and are positive and that holds .hence , both and should not be zero .indeed , if one of them is equal to zero , the other is also zero .hence , we have and .these inequalities contradict .then , we have and , i.e. , . in addition , we have since holds on , the second derivative of the objective in satisfies the positivity condition , for all such that and .therefore , is uniquely determined . for a fixed , the optimal solution can be described as the function of , i.e. , . by the implicit function theorem , is continuously differentiable with respect to . then , the derivative of is given as the convexity of for leads to hence , we have for and . as a result, we see that is non - decreasing as the function of .we use the result of . for a fixed ,the function is continuous for , and the convexity of leads to the non - negativity of .moreover , the convexity and the non - negativity of lead to for and , where and are positive for .the above inequality and the continuity of ensure that there exists satisfying for all such that .we define the inverse function by for . for a fixed ,the loss function is classification - calibrated .hence , lemma 3 in leads to the inequality for .define by from the definition of , is well - defined for all . since holds , we have .in addition , is non - decreasing as the function of .thus , we have for all and .then , we can choose it is straightforward to confirm that the conditions of assumption [ assumption : loss_bayesrisk_consistency ] are satisfied . d. j. crisp and c. j. c. burges . a geometric interpretation of -svm classifiers . in s.a. solla , t. k. leen , and k .-mller , editors , _ advances in neural information processing systems 12 _ , pages 244250 .mit press , 2000 .j. s. nath and c. bhattacharyya .maximum margin classifiers with specified false positive and false negative error rates . in c.apte , b. liu , s. parthasarathy , and d. skillicorn , editors , _ proceedings of the seventh siam international conference on data mining _ , pages 3546 .siam , 2007 . | in binary classification problems , mainly two approaches have been proposed ; one is loss function approach and the other is uncertainty set approach . the loss function approach is applied to major learning algorithms such as support vector machine ( svm ) and boosting methods . the loss function represents the penalty of the decision function on the training samples . in the learning algorithm , the empirical mean of the loss function is minimized to obtain the classifier . 
against a backdrop of the development of mathematical programming , nowadays learning algorithms based on loss functions are widely applied to real - world data analysis . in addition , statistical properties of such learning algorithms are well - understood based on a lots of theoretical works . on the other hand , the learning method using the so - called uncertainty set is used in hard - margin svm , mini - max probability machine ( mpm ) and maximum margin mpm . in the learning algorithm , firstly , the uncertainty set is defined for each binary label based on the training samples . then , the best separating hyperplane between the two uncertainty sets is employed as the decision function . this is regarded as an extension of the maximum - margin approach . the uncertainty set approach has been studied as an application of robust optimization in the field of mathematical programming . the statistical properties of learning algorithms with uncertainty sets have not been intensively studied . in this paper , we consider the relation between the above two approaches . we point out that the uncertainty set is described by using the level set of the conjugate of the loss function . based on such relation , we study statistical properties of learning algorithms using uncertainty sets . |
financial returns are known to be non - gaussian and exhibit fat - tailed distribution - .the fat tail relates to intermittency an unexpected high probability of large price changes , which is of utmost importance for risk analysis .the recent development of high - frequency data bases makes it possible to study the intermittent market dynamics on time scales of less than a day - . using foreign exchange ( fx ) intraday data , mller et al . showed that there is a net flow of information from long to short timescales , i.e. , the behavior of long - term traders influences the behavior of short - term traders .motivated by this hierarchical structure , ghashghaie et al . have discussed analogies between the market dynamics and hydrodynamic turbulence , and claimed that the information cascade in time hierarchy exists in a fx market , which corresponds to the energy cascade in space hierarchy in a three - dimensional turbulent flow .these studies have stimulated further investigations on similarities and differences in statistical properties of the fluctuations in the economic data and turbulence - .differences have also emerged .mantegra and stanley and arneodo et al . pointed out that the time evolution , or equivalently the power spectrum is different for the price difference ( nearly white spectrum ) and the velocity difference ( spectrum , i.e. , -law for the spectrum of the velocity ) .moreover , from a parallel analysis of the price change data with time delay and the velocity difference data with time delay ( equivalent to the velocity difference data with spatial separation under the taylor hypothesis ) , it was shown that the time evolution of the second moment and the shape of the probability density function ( pdf ) , i.e. , the deviation from gaussian pdf are different in these two stochastic processes . on the other hand , non - gaussian character in fully developed turbulence has been linked with the nonextensive statistical physics - . as dynamical foundation of nonextensive statistics ,beck recently proposed a new model describing hydrodynamic turbulence .the velocity difference of two points in a turbulent flow with the spatial separation is described by brownian motion ( an overdamped langevin equation ) in a power - law potential . assuming a -distribution for the inverse temperature, he obtained a tsallis distribution for .however , if we take into account the almost uncorrelated behavior of the price change - , the picture by means of the brownian motion seems to be more appropriate for the market data rather than turbulence . moreover , the description by the langevin equation is able to relate the pdf of the price change to that of the volatility , a quantity known as a measure of the risk in the market. thus we applied the model to fx market dynamics by employing the correspondence by ghashghaie et al .we substitute the fx price difference with the time delay for the velocity difference with the spatial separation .beck s model for turbulence then reads where is a constant , and is gaussian white noise corresponding to the temperature , satisfying .the ` force ' is assumed to be obtained by a power - law potential with an exponent , where and is a positive constant .that is , the system is subject to a restoring force proportional to the power of the price difference , besides the random force .especially when , the restoring force is linear to . under a constant temperature , eq .( [ dzdt ] ) leads to a stationary ( i.e. 
, thermal equilibrium ) distribution of as where is the inverse temperature .the ` local ' variance of , which is defined for a fixed value of , is obtained from the conditional probability in eq .( [ pzbeta ] ) as we define the volatility by the square root of the local variance of ( see eq .( [ v ] ) ) .when and , coincides with the temperature and the conditional pdf reduces to gaussian , while for , is proportional to .let us assume that the ` temperature ' of the fx market is not constant and fluctuates in larger time scales , and is , just as in beck s model for turbulence , -distributed with degree : where is the gamma function , is the average of the fluctuating and relates to the relative variance of : equation ( [ fbeta ] ) implies that the local variance fluctuates with the distribution .the conditional probability in eq .( [ pzbeta ] ) together with eq .( [ fbeta ] ) yields a tsallis - type distribution for the ultimate pdf of : where tsallis nonextensivity parameter is defined by which satisfies because of and . since , the distribution of exhibits power - law tails for large : .hence , the moment converges only for . in the limit of , in eq .( [ pz ] ) reduces to the canonical distribution of extensive statistical mechanics : .we have applied the present model to the same fx market data set as used in ref . ( provided by olsen and associates which consists of 1 472 241 bid - ask quotes for us dollar - german mark exchange rates during the period october 92 - september 93 ) .the volatility is often estimated by the standard deviation of the price change in an appropriate time window . employing this definition , we express the volatility in terms of the local standard deviation of the price change as herethe window size has been chosen as . since corresponds to ( see eq .( [ z2beta ] ) ) which is -distributed , can be explicitly obtained from the relative variance of using eq .( [ beta ] ) .thus , there is only one adjustable parameter among , because we have another relation , eq .( [ q ] ) . in other words ,the pdf , of the price change and the pdf , of the inverse power of the volatility are determined simultaneously once the value of has been specified .the pdf s with time delay varying from five minutes up to approximately six hours are displayed in fig.[tsallis ] together with theoretical curves obtained from eq .( [ pz ] ) . as the time scale increases , increases , while and decrease .the nonextensivity parameter tends to the extensive limit : as increases . using the same parameter values , the pdf s of compared in fig .[ x2 ] . the average ` temperature ' in the market increases with since is positive .( we obtained the scaling exponent , which is larger than 2/3 obtained for turbulence . ) in contrast , the fluctuation of the temperature increases with decreasing because the variance of the inverse temperature is proportional to .the smaller values of imply the stronger intermittency which occurs in small time scales .the intermittent character in the price change can be seen as a fat tail of in fig .[ tsallis ] . also in fig .[ x2 ] , the peak of shifts to smaller as decreases ( from d to a in fig .[ x2 ] ) , which means relatively high temperatures are realized more frequently in short time scales . 
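The superstatistical mechanism described above, a chi-square distributed inverse temperature mixed over a conditionally Gaussian price change (the linear-restoring-force case), is simple to simulate. The sketch below draws beta from a gamma distribution equivalent to a scaled chi-square with n degrees of freedom and mean beta0, draws z given beta from a centered Gaussian with variance 1/beta, and reports the excess kurtosis of the resulting fat-tailed marginal; all numerical values are illustrative, and for small n the kurtosis estimate is itself noisy because high moments barely exist.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
N = 500_000
beta0 = 1.0                                        # mean inverse temperature

for n in (6, 10, 50):
    # chi-square distributed inverse temperature with n degrees of freedom and mean beta0
    beta = rng.gamma(shape=n / 2.0, scale=2.0 * beta0 / n, size=N)
    # conditional price change: Gaussian with variance 1/beta (linear restoring force)
    z = rng.normal(size=N) / np.sqrt(beta)
    print(f"n = {n:3d}: excess kurtosis = {kurtosis(z):6.2f} "
          f"(Student-t value 6/(n-4) = {6.0 / (n - 4):.2f})")

print("constant temperature (Gaussian) :", round(kurtosis(rng.normal(size=N)), 3))
```

The marginal of z in this construction is a scaled Student-t with n degrees of freedom, so the excess kurtosis grows as n decreases, in line with the stronger intermittency reported for small time scales.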
it should be noted that the pdf in eq .( [ pz ] ) with reduces to student s -distribution , which has often been used to characterize the fat tails .when , there is no adjustable parameter because is decided from eq .( [ beta ] ) .the fx market system is then subject to a restoring force linear to the price change and the volatility is proportional to the temperature .we have found that the pdfs of and reproduce , although very roughly , the olsen and associates data points even if is fixed at .however , adjusting the parameter improved the line shape of in a range close to .the data points in fig .[ tsallis ] exhibit a cusp at for large time scales , which implies a singularity of the second derivative of the pdf at .the larger reduction of from unity leads to the stronger singularity .( note that the factor arises from . )thus the better fitting for large was obtained from a reduced value of .the trend of decrease in with increasing was observed for the turbulent flow as well .however , the deviation from is much smaller than the present case and the cusp is invisible .ghashghaie et al . have used a model for turbulence by castaing et al . , in which a log - normal distribution has been assumed for the local standard deviation of the price change .the present model reduces to the model by ghashghaie et al . if the stochastic process given in eq .( [ dzdt ] ) is assumed with ( the local variance of is then proportional to ) and the -distribution for ( the inverse of the local variance of ) is replaced by the log - normal distribution for ( the local standard deviation of ) .although no analytic expression like eq .( [ pz ] ) for is obtained , a similar qualitative explanation can be applied to their model : the volatility ( or equivalently , the square root of the temperature ) fluctuates slowly with a log - normal distribution , and the smaller time scale corresponds to the larger variance of the logarithm of the volatility .( the variance is denoted by in ref .however , the power - law behavior of the tail of the volatility distribution can be better described by the -distribution for the inverse of the variance .we have proposed the stochastic process described by eq .( [ dzdt ] ) for fx market dynamics in small time scales .in fact , eq . ( [ dzdt ] ) is the simplest stochastic process which can realize the thermal equilibrium distribution , eq .( [ pzbeta ] ) in the power - law potential .more realistic and more complicated processes that assure convergence to eq .( [ pzbeta ] ) at local temperatures might be possible . mantegna and stanleyhave proposed a different stochastic model of the price change , which is described by a truncated levy flight ( tlf ) .the model , yielding approximately a stable distribution , well reproduces the self - similar property of pdf at different time scales : 1 min to 1 000 min .however , the parameters and characterizing the stable distribution fluctuate for larger ( monthly ) time scales , where gives a measure of the volatility .in other words , the ultimate distribution ( let us denote it ) should be obtained , like eq .( [ pz ] ) , from the weighted average over these parameters .a difference between and is that has no cusp at : for , whereas , and the present fx date set indeed exhibits a cusp as seen in fig .[ tsallis ] .finally , a fundamental question has been left open : how to derive theoretically the increasing trend of fat - tailed character with decreasing time scale , i.e. , -dependence of the parameters ( ) . 
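The cusp discussed above comes from the |z|^(2 alpha) term in the Tsallis-type density: for alpha < 1 the second derivative at z = 0 diverges. A quick finite-difference check of the unnormalized density illustrates this; the values of q, beta-tilde and alpha used below are arbitrary.

```python
import numpy as np

def tsallis_shape(z, q=1.4, beta_t=1.0, alpha=1.0):
    """Unnormalized Tsallis-type density [1 + (q-1)*beta_t*|z|^(2*alpha)]^(-1/(q-1))."""
    return (1.0 + (q - 1.0) * beta_t * np.abs(z) ** (2 * alpha)) ** (-1.0 / (q - 1.0))

# central second difference at z = 0 for a shrinking step h
for alpha in (1.0, 0.8):
    print(f"alpha = {alpha}:")
    for h in (1e-1, 1e-2, 1e-3):
        d2 = (tsallis_shape(h, alpha=alpha) - 2.0 * tsallis_shape(0.0, alpha=alpha)
              + tsallis_shape(-h, alpha=alpha)) / h ** 2
        print(f"   h = {h:6.0e}: second difference = {d2: .3f}")
```

For alpha = 1 the second difference settles to a finite value, while for alpha = 0.8 it grows without bound as h shrinks, which is the numerical signature of the cusp.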
an attempt to derive the non - gaussian fat - tailed character in small time scaleswas made by friedlich et al .recently .they derived a multiplicative langevin equation from a fokker - planck equation and showed that the equation becomes more multiplicative and hence fat - tailed as decreases . clarifying the relation between their multiplicative langevin equation and eq .( [ dzdt ] ) ( the latter being rather simple although including the fluctuating temperature ) should be important as well as interesting .fa65 e. f. fama , j. business 38 ( 1965 ) 34 .ms00 r. n. mantegna , h. e. stanley , an introduction to econophysics : correlations and complexity in finance , cambridge univ .press , cambridge , 2000 .bg01 l. bauwens , p. giot , econometric modelling of stock market intraday activity , kluwer acad .publishers , boston , 2001 .gbp96 s. ghashghaie , w. breymann , j. peinke , p. talkner , y. dodge , nature 381 ( 1996 ) 767 .ms96 r. n. mantegna , h. e. stanley , nature 383 ( 1996 ) 587 .abc96 a. arneodo , j. -p .bouchaud , r. cont , j. -f .muzy , m. potters , d. sornette , cond - mat /9607120 .ms97 r. n. mantegna , h. e. stanley , physica a 239 ( 1997 ) 255 .ams98 a. arneodo , j. -f .muzy , d. sornette , eur .j. b2 ( 1998 ) 277 .fpr00 r. friedrich , j. peinke , ch .renner , phys .84 ( 2000 ) 5224 .bgt00 w. breymann , s. ghashghaie , p. talkner , int . j. theorfinance 3 ( 2000 ) 357 .gof97 c. a. e. goodhart , m. ohara , j. emp .finance 4 ( 1997 ) 73 .mdd97 u. a. mller , m. m. , dacorogna , r. d. dav , r. b. olsen , o. v. pictet , j. e. von weizscker , j. emp .finance 4 ( 1997 ) 213 .ms95 r. n. mantegna , h. e. stanley , nature 376 ( 1995 ) 46 .lgc99 y. liu , p. gopikrishnan , p. cizeau , m. meyer , c. -k .peng , h. e. stanley , phys .e 60 ( 1999 ) 1390 . hkk01 r. huisman , k. g. koedijk , c. j. m. kool , f. palm , j. business and economic statistics 19 ( 2001 ) 208 .fr95 u. frisch , turbulence : the legacy of a. n. kolmogorov , cambridge univ .press , cambridge , 1995 .cgh90 b. castaing , y. gagne , e. j. hopfinger , physica d 46 ( 1990 ) 177 .ts88 c. tsallis , j. stat .52 ( 1988 ) 479 .ww00 g. wilk , z. wodarczyk , phys .84 ( 2000 ) 2770 .aa01 t. arimitsu , n. arimitsu , prog .( 2001 ) 355 .be01 c. beck , phys .87 ( 2001 ) 180601 .be01a c. beck , physica a 295 ( 2001 ) 195 .bls01 c. beck , g. s. lewis , h. l. swinney , phys .63 e ( 2001 ) 035303 .ri89 h. risken , the fokker - planck equation : methods of solution and applications , 2nd edn , springer - verlag , berlin , 1989 .oa93 high frequency data in finance 1993 , olsen and associates , zurich . | we present a model of financial markets originally proposed for a turbulent flow , as a dynamic basis of its intermittent behavior . time evolution of the price change is assumed to be described by brownian motion in a power - law potential , where the ` temperature ' fluctuates slowly . the model generally yields a fat - tailed distribution of the price change . specifically a tsallis distribution is obtained if the inverse temperature is -distributed , which qualitatively agrees with intraday data of foreign exchange market . the so - called ` volatility ' , a quantity indicating the risk or activity in financial markets , corresponds to the temperature of markets and its fluctuation leads to intermittency . , foreign exchange market , volatility , tsallis distribution , -distribution , brownian motion |
data obtained from a physical system sometimes possess many characteristic length and time scales . in such cases , it is desirable to construct models that are effective for large - scale structures , whilst capturing small scales at the same time . modeling this type of data via diffusion type models may be well - suited in many cases .thus , multiscale diffusion models have been used to describe the behavior of physical phenomena in scientific areas such as chemistry and biology , ocean - atmosphere sciences , finance and econometrics . in many of these problems ,the noise is taken to be small because one may , for example , be interested in modeling ( a ) : rare transition events between equilibrium states of a rough energy landscape , or ( b ) : short time maturity asymptotics for fast mean reverting stochastic volatility models .see also for a thorough discussion on different mathematical and statistical modeling aspects of perturbations of dynamical systems by small noise .parameter estimation in multiscale models with small noise is a problem of great practical importance , due to their wide range of applications , but also of great difficulty , due to the different separating scales .the goal of this paper is to develop a theoretical framework for the estimation of unknown parameters in a multiscale diffusion model with vanishing noise .more specifically , let be given and consider the -dimensional process satisfying the stochastic differential equation ( sde ) +\sqrt{\epsilon}\sigma\left ( x_{t}^{\epsilon},\frac{x_{t}^{\epsilon}}{\delta}\right ) dw_{t},\hspace{0.2cm}x_{0}^{\epsilon}=x_{0 } , \label{eq : ldpanda1}\ ] ] where as , is an unknown parameter and is a standard -dimensional wiener process . the functions and are assumed to be smooth , in the sense of condition [ a : assumption1 ] , and periodic with period in every direction with respect to the second variable .the rate of convergence of and to zero determines the type of equation that one obtains in the limit .for example , if is of order 1 as goes to zero , then equation ( [ eq : ldpanda1 ] ) reduces to a deterministic ode that we obtain if we set equal to zero . on the other hand ,if is of order 1 as goes to zero , then homogenization occurs and this results to an equation with homogenized coefficients .when both parameters and go to zero together , then we need to consider three different regimes depending on how fast goes to zero relative to : we mention here that asymptotic problems for models like ( [ eq : ldpanda1 ] ) have a long history in the mathematical literature .we refer the interested reader to classical manuscripts such as for averaging and homogenization results and to the more recent articles for large deviations results and for importance sampling results on related rare event estimation problems .in ( [ eq : ldpanda1 ] ) we assume that the drift term , through the functions and , depends on a physical parameter . generally , from a statistical inference point of view , the main questions of interest are the following : 1 .how can one estimate the fast oscillating parameter and the intensity of the noise ?2 . how can one estimate the unknown parameter the first question is undoubtedly a quite difficult one and is not addressed in the current work ; see for some related results for specific equations and further references .instead , we focus on the second question . 
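A path of the multiscale small-noise model can be generated with a standard Euler-Maruyama scheme, provided the time step resolves the fast scale delta. In the sketch below the drift is taken to be (eps/delta) b(x, x/delta) + c_theta(x, x/delta) with one-periodic dependence on the fast variable, which is one common way to read the displayed equation (the equation itself is garbled in this extraction); the specific coefficients b, c, sigma and all parameter values are invented for illustration.

```python
import numpy as np

def simulate_multiscale(theta, eps, delta, x0=1.0, T=1.0, dt=1e-5, rng=None):
    """Euler-Maruyama for an illustrative 1-D two-scale small-noise SDE.

    Assumed form:  dX = [ (eps/delta) b(X, X/delta) + c_theta(X, X/delta) ] dt
                        + sqrt(eps) sigma(X, X/delta) dW,
    with coefficients 1-periodic in the fast variable y = X/delta.
    """
    rng = np.random.default_rng() if rng is None else rng
    b = lambda x, y: -np.sin(2 * np.pi * y)                          # fast, mean-zero component
    c = lambda x, y: -theta * x * (1.0 + 0.5 * np.cos(2 * np.pi * y))
    sigma = lambda x, y: 1.0

    n = int(T / dt)
    x = np.empty(n + 1); x[0] = x0
    dw = rng.normal(scale=np.sqrt(dt), size=n)
    for k in range(n):
        y = x[k] / delta
        x[k + 1] = x[k] + ((eps / delta) * b(x[k], y) + c(x[k], y)) * dt \
                   + np.sqrt(eps) * sigma(x[k], y) * dw[k]
    return x

path = simulate_multiscale(theta=2.0, eps=0.05, delta=0.01, rng=np.random.default_rng(5))
print("X_0 =", path[0], "  X_T =", path[-1])
```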
thus , assuming that the regime of interaction between and is known , we want to estimate the unknown parameter at time , based on the continuously observed process up to this time . in order to do so, we will follow the maximum likelihood method .maximum likelihood estimation in multiscale diffusions with noise of order has been studied by different authors and under different settings , see for example .we also refer the reader to the manuscripts for general results on statistical estimation for diffusion processes .the novelty of the present paper stems from the fact that we address the problem of parameter estimation when both multiscale effects and small noise are present , for all three regimes in ( [ def : threepossibleregimes ] ) , which requires a different approach for the construction of maximum likelihood estimators . indeed , in , assuming that the noise is of order , the authors fit the data from the prelimit process to the log - likelihood function of the limiting process , i.e. , of the process to which converges to , as .however , when the diffusion coefficient vanishes in the limit , the limiting process is no longer the solution of an sde , but of an ode ( see theorem [ t : lln ] ) , thus it is deterministic and does not have a well defined likelihood .therefore , instead of working with the likelihood function of the limiting process , we work with the log - likelihood of the original multiscale model and we infer consistency and asymptotic normality ( under conditions as described below ) by studying its limit . in particular , under regime 1 with and under regimes and ( see ( [ def : threepossibleregimes ] ) ) , we prove that the maximum likelihood estimator ( mle ) is consistent and asymptotically normal under broad conditions .the situation of regime with is more complicated , because the original log - likelihood function does not have a well defined limit as , due to the terms .we address this issue by introducing a modified ( pseudo ) log - likelihood which is well defined in the limit .it turns out that the resulting pseudo mle is not consistent , however its `` bias '' can be computed exactly .this is a known problem in multiscale parameter estimation problems ; see section [ s : mle ] for some more details on this . in this article , by `` bias '' we mean the remainder term when we compute the limit of the estimator in probability , that is .the reason why we use quotes is because bias is usually defined as the remainder of the -limit of the estimator . under regime 1 with , we support our findings with a simulation study for a small noise diffusion in a two - scale potential field , a model of interest in the physical chemistry literature , . for this particular model, we can construct an estimator that is consistent and normal .the rest of the paper is organized as follows . in section [ s :prelim ] , we establish the necessary notation and we present the main ingredients and assumptions needed in the sequel . in section [ s : mle ] we discuss the maximum likelihood estimation problem for all three regimes . 
for regimes 2 and 3 andregime 1 when , we prove the consistency of the mle , studying the limit of the log - likelihood function , in section [ s : limitnglikelihood ] , whereas we prove a central limit theorem for the mle in section [ s : clt ] .finally , in section [ s : examples ] we study a particularly interesting case for regime , when ; a small noise diffusion in a two - scale potential field , we prove a central limit theorem for the pseudo mle in this particular setup and we present a simulated study illustrating the theoretical findings .we work with the canonical filtered probability space equipped with a filtration that satisfies the usual conditions , namely , is right continuous and contains all -negligible sets . regarding the sde ( [ eq : ldpanda1 ] ) we impose the following condition . [ a : assumption1 ] 1 .the parameter where is open , bounded and convex . also, the coefficients are lipschitz continuous in .2 . the functions are lipschitz continuous and bounded in both variables . moreover , they are periodic with period in the second variable in each direction . in the case of regime additionally assume that they are in and in with all partial derivatives continuous and globally bounded in and .the diffusion matrix is uniformly nondegenerate .for notational convenience we define the operator , where for two matrices ,b=[b_{ij}] ] for any .since , the limiting process is deterministic and weak convergence to constants implies convergence in probability , we obtain the claim of the theorem . also , due to our assumptions , the limiting ode s in ( [ eq : limitingode ] ) are well defined and have a unique solution in their corresponding regime .assume that we observe the process in continuous time and denote by the data we obtain .the log - likelihood function for estimating the parameter in the statistical model ( [ eq : ldpanda1 ] ) can be expressed as follows where we denote and for any positive definite matrix the notation used in ( [ eq : likelihoodfunction ] ) is slightly unusual and the brackets outside of the integral are the integrand variables .this notation is chosen for presentation purposes only , since if we used the arguments to each function in the stochastic integrals , this would result in long and complicated - looking formulas . sometimes , we will omit the subscript if . essentially , we define the likelihood function as the radon - nikodym derivative where is the measure for ( [ eq : ldpanda1 ] ) and the measure for ( [ eq : ldpanda1 ] ) when the drift term is equal to zero . therefore ,for fixed , we define the maximum likelihood estimator ( mle ) of to be the presence of the small parameters and complicate the estimation of significantly .our approach is to find the limiting likelihood ( in the appropriate sense ) for each regime , that is then , we prove consistency and derive asymptotic properties of the mle , by studying properties of the prelimiting log - likelihood and of the limiting log - likelihood .in particular , as we shall see in section [ s : averaging ] , based on the analysis of the log - likelihood function ( [ eq : likelihoodfunction ] ) we prove that the mle is a consistent estimator of the true value , under regime with and regimes and . under the same framework , we also prove , in section [ s : clt ] , that the mle is asymptotically normal .on the other hand , as we shall see in section [ s : homogenization ] , things get more complicated under regime when . 
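The law-of-large-numbers statement proved above can be illustrated by Monte Carlo: as eps decreases, independent replications of the terminal value concentrate around a deterministic limit. The sketch below uses the same illustrative one-dimensional coefficients as in the previous listing, keeps the ratio eps/delta fixed so that a single regime applies, and reports the spread of X_T across replications; all parameter values are made up.

```python
import numpy as np

def terminal_values(eps, delta, theta=2.0, x0=1.0, T=1.0, dt=2e-5, reps=200, seed=6):
    """X_T for `reps` independent Brownian paths of the illustrative 1-D two-scale SDE."""
    rng = np.random.default_rng(seed)
    x = np.full(reps, x0)
    for _ in range(int(T / dt)):
        y = x / delta
        drift = (eps / delta) * (-np.sin(2 * np.pi * y)) \
                - theta * x * (1.0 + 0.5 * np.cos(2 * np.pi * y))
        x = x + drift * dt + np.sqrt(eps) * rng.normal(scale=np.sqrt(dt), size=reps)
    return x

for eps in (0.2, 0.05, 0.0125):
    v = terminal_values(eps=eps, delta=eps)    # eps/delta held fixed: one way to stay in one regime
    print(f"eps = {eps:6.4f}:  mean X_T = {v.mean():.4f},  std across paths = {v.std():.4f}")
```

The standard deviation across paths should shrink roughly like sqrt(eps), consistent with convergence in probability to a deterministic limit.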
in this case, the likelihood function ( [ eq : likelihoodfunction ] ) does not necessarily have a well defined limit due to the terms that are multiplied by ( recall that in this case as ) .we choose to resolve this issue , by taking the limit in an appropriately re - scaled and centered version of the original log - likelihood ( a pseudo log - likelihood ) . under certain conditions , this pseudo log - likelihood approach overcomes the convergence issue and a well defined limit exists .however , the pseudo maximum likelihood estimator is consistent , even though the `` bias '' is explicitly characterized .the consistency issue of the maximum likelihood estimation in the presence of `` unbounded drift terms '' , such as the term with is well known in the literature . in the context of and , which corresponds to regime , the problem has also been studied in under different scenarios and conditions and it is shown there that the maximum likelihood estimator is not consistent and one may need to result in sub - sampling of the data at appropriate rates in order to produce consistent estimators . in the case that has been studied in the issuewas treated with appropriate sub - sampling of the data .the article followed a semi - parametric approach assuming a special structure of the coefficients . in this workwe do not address the consistency issue .nevertheless , we provide an explicit formula for the asymptotic error in the transformed log - likelihood function .moreover , we apply our results to the case of small noise diffusion in a two - scale potential field , see section [ s : examples ] . in this case , even though , the original estimator is not consistent , we can construct a consistent estimator and also derive a central limit theorem for the proposed estimator .we first study the limiting likelihood for regime 1 when and for regimes 2 , 3 and then the proposed pseudo limiting likelihood for regime 1 when . in this section , we consider the limit of the likelihood function , defined by ( [ eq : likelihoodfunction ] ) for regimes 1 when and for regimes and .let us define the following functions [ def : threepossiblelikelihoods ] for , and for the three possible regimes defined in ( [ def : threepossibleregimes ] ) , define we then prove the following theorem [ t : likelihoodconvergence1 ] let the assumptions of theorem [ t : lln ] hold .let be a sample path of ( [ eq : ldpanda1 ] ) at .in the case of regime we assume that .then , under regime , the sequence converges in probability , uniformly in to from definition [ def : threepossiblelikelihoods ] and where is the solution to the corresponding limiting ode from theorem [ t : lln ] .in particular , for any =0.\ ] ]lastly , in each regime , the function is maximized at . here , , i.e. 
, the parameter value is .however , throughout this section we use the compact notation instead of for presentation purposes only , in order to simplify the formulas .since is a sample path of ( [ eq : ldpanda1 ] ) at , we get that where and then standard averaging principle for locally periodic diffusions , see chapter 3 of , and the fact that the corresponding invariant measure is continuous as a function of and theorem [ t : lln ] imply that for any moreover , the burkholder - davis - gundy inequality applied to the stochastic integral and condition [ a : assumption1 ] imply for some constant , uniformly in .thus , the proof of the claimed convergence follows by chebyschev s inequality and the uniform convergence in .the fact that the limit is maximized at is easily seen to hold by completing the square in the expressions for at definition [ def : threepossiblelikelihoods ] .for example , in the case of regime 1 , it is easy to see that and thus the maximum is easily seen to be attained at .similarly for regimes and . before we continue , we need to impose the following identifiability condition for the true value of the parameter .[ cond : identifiability ] for all , let . under condition [ cond :identifiability ] and the assumptions of theorem [ t : likelihoodconvergence1 ] , the mle sequence converges in probability to the true parameter .in particular , for any we have =0.\ ] ] for all , we have that \leq \mathbb{p}_{\theta_{0 } } \left [ \sup_{|u|>\eta } \left ( z_{u+\theta_{0}}^{\epsilon}(\mathcal{x}_{t } ) - z_{\theta_{0}}^{\epsilon}(\mathcal{x}_{t } ) \right ) \geq 0 \right]\\ \leq & \;\;\mathbb{p}_{\theta_{0 } } \biggl [ \sup_{|u|>\eta } \left ( \left ( z_{u+\theta_{0}}^{\epsilon}(\mathcal{x}_{t } ) - z_{\theta_{0}}^{\epsilon}(\mathcal{x}_{t } ) \right ) - \left ( \bar{z}_{u+\theta_{0}}^{i}\left(\bar{x}^{i}_{\cdot}\right ) - \bar{z}_{\theta_{0}}^{i}\left(\bar{x}^{i}_{\cdot}\right ) \right ) \right ) \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \geq -\sup_{|u|>\eta } \left(\bar{z}^{i}_{\theta_{0}+u } \left(\bar{x}^{i}_{\cdot}\right ) - \bar{z}^{i}_{\theta_{0}}\left(\bar{x}^{i}_{\cdot}\right)\right ) \biggr].\end{aligned}\ ] ] condition [ cond : identifiability ] gives that \\ \leq & \;\;\mathbb{p}_{\theta_{0 } } \left [ \sup_{|u|>\eta } \left ( \left ( z_{u+\theta_{0}}^{\epsilon}(\mathcal{x}_{t } ) - \bar{z}_{u+\theta_{0}}^{i}\left(\bar{x}^{i}_{\cdot}\right ) \right ) - \left ( z_{\theta_{0}}^{\epsilon}(\mathcal{x}_{t } ) - \bar{z}_{\theta_{0}}^{i}\left(\bar{x}^{i}_{\cdot}\right ) \right ) \right ) \geq \eta > 0 \right].\end{aligned}\ ] ] therefore , by conditioning on we have \leq \;\ ; \mathbb{p}_{\theta_{0 } } \left[\sup_{|u|>\eta } \left ( z_{u+\theta_{0}}^{\epsilon } - \bar{z}_{u+\theta_{0}}^{i}\right ) \geq\frac{1}{2}\eta > 0 \right ] + \mathbb{p}_{\theta_{0 } } \left [ \left| z_{\theta_{0}}^{\epsilon } - \bar{z}_{\theta_{0}}^{i}\right| \geq \frac{1}{2}\eta > 0 \right].\ ] ] the result follows by the uniform convergence of theorem [ t : likelihoodconvergence1 ] . 
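the limiting objects appearing in the consistency argument above are built by averaging the coefficients over the fast variable against the invariant density of the fast process. a small quadrature sketch of that averaging step is given below; the Gibbs form of the density is borrowed from the langevin example of section [ s : examples ] and is an assumption here, as are the illustrative choices of Q and of the coefficient being averaged.

```python
import numpy as np

def averaged_coefficient(c, x, Q, D, L=2 * np.pi, n=2048):
    """Average y -> c(x, y) over one period of the fast variable against the
    invariant (Gibbs) density mu(dy) proportional to exp(-Q(y)/D), using the
    trapezoidal rule on a periodic grid.  This is the basic operation behind
    the limiting log-likelihoods of definition [def:threepossiblelikelihoods]."""
    y = np.linspace(0.0, L, n, endpoint=False)
    w = np.exp(-Q(y) / D)
    w /= np.mean(w)                    # normalise the density on [0, L)
    return np.mean(c(x, y) * w)

if __name__ == "__main__":
    # sanity check: averaging a y-independent coefficient returns it unchanged
    cbar = averaged_coefficient(lambda x, y: -x * np.ones_like(y), 0.7, np.cos, D=0.5)
    print(cbar)   # approximately -0.7
```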
in the case of regime with ,the situation is more involved because the limit of the log - likelihood by ( [ eq : likelihoodfunction ] ) is not well defined .this is due to the and terms that appear in the expression of .this leads us to re - parameterize the log - likelihood , so that it will have a well defined limit .however , we need to re - parameterize the log - likelihood in such a way so that the limiting expression will coincide with the expression of section [ s : averaging ] for and at the same time maintain tractability and simplicity .let us denote by the log - likelihood function ( [ eq : likelihoodfunction ] ) with .we define the modified log - likelihood function to characterize the limit , we first need to define several quantities .but first we impose an additional assumption .[ a : assumption4 ] let the coefficients and be such that for all and for all . for example , condition [ a : assumption4 ] is trivially satisfied under condition [ a : assumption2 ] if the coefficients and are independent of ( see also remark [ r : remarkregime1 ] below ) .then , we can consider the auxiliary partial differential equation under condition [ a : assumption4 ] , this poisson equation has a unique bounded , periodic in and smooth solution ( see theorem 3.3.4 of ) . in order to emphasize the dependence of on , we shall often write . next , we define and for the limiting distribution we prove the following theorem [ t : likelihoodconvergence2 ] let conditions [ a : assumption1 ] , [ a : assumption2 ] and [ a : assumption4 ] hold and consider regime .let be a sample path of ( [ eq : ldpanda1 ] ) at .then , the sequence , as defined by ( [ eq : regime1b ] ) , converges in probability , uniformly in to , where in particular , for any =0.\ ] ] before proceeding with the proof of the theorem , we make two remarks . when we get that the `` bias '' ( since in this case ) , and we get back the result of theorem [ t : likelihoodconvergence1 ] .the term is maximized at as in theorem [ t : likelihoodconvergence1 ] . however, this is not true in general for .this implies that maximum likelihood in general fails for regime .[ r : remarkregime1 ] when condition [ a : assumption4 ] is not satisfied , the situation is more complicated . using the modified log - likelihood ( [ eq : regime1b ] ) , condition [ a : assumption4 ] is necessary in order for ( [ eq : poissonequation ] ) to have a solution .this follows by fredholm alternative as in theorem 3.3.4 of .the use of the poisson equation ( [ eq : poissonequation ] ) is an essential tool in the proof of theorem [ t : likelihoodconvergence2 ] .there does not seem to be an obvious way to reparameterize the likelihood in such a way that it will have a well defined limit and at the same time maintain tractability .however , as we shall see in section [ s : examples ] , theorem [ t : likelihoodconvergence2 ] covers one of the cases of interest which is the first order langevin equation with a two scale potential . to be more precise, it covers the case of a small noise diffusion in two - scale potentials of the form ( [ eq : ldpanda1 ] ) with , and . 
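as a numerical aside, the modified likelihood of regime 1 rests on the solution of the poisson equation ( [ eq : poissonequation ] ) on the torus, which is only available in closed form in special cases. below is a finite-difference sketch for a one-dimensional fast variable; the generator form L0 = D d^2/dy^2 - Q'(y) d/dy is an assumption chosen to match the langevin example discussed later (the general operators of definition [ def : threepossibleoperators ] are not reproduced in this extract), and the right-hand side h is centered as required by condition [ a : assumption4 ].

```python
import numpy as np

def solve_cell_problem(h, Qp, D, L=2 * np.pi, n=400):
    """Solve  D*phi'' - Q'(y)*phi' = -h(y)  on [0, L) with periodic boundary
    conditions.  The one-dimensional null space of the operator is removed by
    imposing mean(phi) = 0 and solving in the least-squares sense."""
    y = np.linspace(0.0, L, n, endpoint=False)
    dy = L / n
    I = np.arange(n)
    A = np.zeros((n, n))
    # centred second derivative (periodic)
    A[I, I] += -2.0 * D / dy**2
    A[I, (I + 1) % n] += D / dy**2
    A[I, (I - 1) % n] += D / dy**2
    # centred first derivative multiplied by -Q'(y) (periodic)
    A[I, (I + 1) % n] += -Qp(y) / (2 * dy)
    A[I, (I - 1) % n] += Qp(y) / (2 * dy)
    rhs = -h(y)
    A_aug = np.vstack([A, np.ones((1, n)) / n])   # append centering constraint
    rhs_aug = np.append(rhs, 0.0)
    phi, *_ = np.linalg.lstsq(A_aug, rhs_aug, rcond=None)
    return y, phi

if __name__ == "__main__":
    # toy data: Q(y) = cos(y); h(y) = sin(y) minus its Gibbs average (centering)
    D = 0.5
    Qp = lambda y: -np.sin(y)
    yy = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    w = np.exp(-np.cos(yy) / D)
    mean_h = np.sum(np.sin(yy) * w) / np.sum(w)
    y, phi = solve_cell_problem(lambda t: np.sin(t) - mean_h, Qp, D)
    print(phi[:3])
```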
after some term rearrangement , we get \left(x_{s},\frac{x_{s}}{\delta}\right)ds\nonumber\\ & & \hspace{0.2cm}+\frac{\epsilon}{\delta}\int_{0}^{t}\left < b_{\theta_{0}},c_{\theta}\right>_{\alpha}\left(x_{s},\frac{x_{s}}{\delta}\right)ds\nonumber\\ & & \hspace{0.2cm}+\int_{0}^{t}\left[\left < c_{\theta},c_{\theta_{0}}\right>_{\alpha}-\frac{1}{2}\left\vert c_{\theta}\right\vert_{\alpha}^{2}\right]\left(x_{s},\frac{x_{s}}{\delta}\right)ds+\nonumber\\ & & \hspace{0.2cm}+\frac{\delta}{\epsilon}\int_{0}^{t}\left[\left < b_{\theta},c_{\theta_{0}}\right>_{\alpha}+\left < b_{\theta_{0}},c_{\theta}\right>_{\alpha}-\left < b_{\theta},c_{\theta}\right>_{\alpha}\right]\left(x_{s},\frac{x_{s}}{\delta}\right)ds\nonumber\\ & & \hspace{0.2cm}+\left(\frac{\delta}{\epsilon}\right)^{2}\int_{0}^{t}\left[\left < c_{\theta},c_{\theta_{0}}\right>_{\alpha}-\frac{1}{2}\left\vert c_{\theta}\right\vert_{\alpha}^{2}\right]\left(x_{s},\frac{x_{s}}{\delta}\right)ds+ \nonumber\\ & & + \hspace{0.2cm}\sqrt{\epsilon}\left[\frac{\delta}{\epsilon}\int_{0}^{t}\left < b_{\theta},\sigma dw_{s}\right>_{\alpha}\left(x_{s},\frac{x_{s}}{\delta}\right)+\left(\left(\frac{\delta}{\epsilon}\right)^{2}+1\right)\int_{0}^{t}\left < c_{\theta},\sigma dw_{s}\right>_{\alpha}\left(x_{s},\frac{x_{s}}{\delta}\right)\right]\nonumber\\ & = & k_{1}^{\epsilon}+ \frac{\epsilon}{\delta}k_{2}^{\epsilon } + k_{3}^{\epsilon}+\frac{\delta}{\epsilon}k_{4}^{\epsilon}+\left(\frac{\delta}{\epsilon}\right)^{2}k_{5}^{\epsilon}+\sqrt{\epsilon}m^{\epsilon}_{t}. \label{eq : likelihood1}\end{aligned}\ ] ] we study the limiting behavior of the terms in the right hand side of ( [ eq : likelihood1 ] ) .it is relatively easy to see that the converges to zero in the p - th mean for every .moreover , the quadratic variation of the stochastic integral in ( [ eq : likelihood1 ] ) has a well defined limit in p - th mean , which together with the fact that it is multiplied by , gives us that this term on the right hand side of ( [ eq : likelihood1 ] ) converges to zero in p - th mean . therefore it remains to study the terms , and . by standard averaging principle for locally periodic diffusions, it can be seen that converges in probability , uniformly in to ; see for example .lastly , we need to study the term . 
for this purposewe apply it formula to that satisfies ( [ eq : poissonequation ] ) with to get \nonumber\\ & & + \left[\frac{\epsilon}{\delta}\left < b_{\theta_{0}},\nabla_{x}\phi \right>+\left< c_{\theta_{0}},\nabla_{x}\phi\right>+ \frac{\epsilon}{2}\sigma\sigma^{t}:\nabla_{x}\nabla_{x}\phi+\frac{\epsilon}{\delta}\sigma\sigma^{t}:\nabla_{x}\nabla_{y}\phi\right]dt\nonumber\\ & & + \frac{\sqrt{\epsilon}}{\delta}\left<\nabla_{y}\phi,\sigma dw_{t}\right>+\sqrt{\epsilon}\left<\nabla_{x}\phi,\sigma dw_{t}\right>.\label{eq : itoformula1}\end{aligned}\ ] ] hence , recalling that satisfies ( [ eq : poissonequation ] ) , which has a unique , periodic in , bounded and smooth solution due to condition [ a : assumption4 ] , we obtain \left(x_{s}^{\epsilon},\frac{x_{s}^{\epsilon}}{\delta}\right)ds\nonumber\\ & & + \sqrt{\epsilon}\int_{0}^{t}\left<\nabla_{y}\phi,\sigma dw_{s}\right>\left(x_{s}^{\epsilon},\frac{x_{s}^{\epsilon}}{\delta}\right)+\sqrt{\epsilon}\delta\int_{0}^{t}\left<\nabla_{x}\phi,\sigma dw_{s}\right>\left(x_{s}^{\epsilon},\frac{x_{s}^{\epsilon}}{\delta}\right)\nonumber\\ & & + \int_{0}^{t}\left <c_{\theta_{0}},\nabla_{y}\phi\right>\left(x_{s}^{\epsilon},\frac{x_{s}^{\epsilon}}{\delta}\right ) ds.\label{eq : itoformula2}\end{aligned}\ ] ] from this statement the result follows immediately since the last term converges in probability , uniformly in , to .the rest of the terms on the right hand side of the last display converge to zero in probability , uniformly in , due to the boundedness of and its derivatives and condition [ a : assumption1 ] .this concludes the proof of the theorem .in this section we state and prove a central limit theorem ( clt ) for the maximum likelihood estimator of in the case of subsection [ s : averaging ] .the main structural assumption is that under regime 1 we have that . for notational convenience and without loss of generality , we then consider that for all three regimes. for regime 2 when one essentially just replaces in the final formula , the function , by the function .so , without loss of generality , let us assume that for all three regimes .we define the normed log - likelihood ratio with probability we can write that for notational convenience , we also define the quantities the fisher information matrix is defined to be to this end we recall that the invariant measure has , under our assumptions , a smooth , uniformly bounded away from zero density , which is also periodic in the ( theorem 3.3.4 and section 3.6.2 in ) for regimes 1 and 2 and condition 2.3 for regime 3. we will denote by the density of , namely . in this sectionthe following condition is imposed .[ a : assumptionclt ] 1 .the function is twice continuously differentiable in with bounded derivatives .the fisher information matrix is positive definite uniformly in , i.e. there exists such that 3 .the vector process \right\} ] on and on in the point .the vector valued function is lipschitz continuous in with a lipschitz constant that is uniformly bounded in .we then have the following theorem .[ t : clt ] let the conditions of theorem [ t : lln ] and condition [ a : assumptionclt ] hold .consider regime and let be the maximum likelihood estimator of .then , uniformly on compacts we have that in distribution under , the following central limit result holds ^{1/2}\left(\hat{\theta}^{\epsilon}-\theta\right)\rightarrow n(0,i).\ ] ] moreover , the mle has converging moments for all , i.e. 
, where is a standard random vector .the proof of this theorem follows by theorem 1.6 in chapter of ) .the lemmas [ l : clt_condition1 ] , [ l : clt_condition2 ] and [ l : clt_condition3 ] prove that the conditions of that theorem hold .for notational convenience we omit writing the subscript , which denotes the particular regime under consideration .[ l : clt_condition1 ] under the conditions of theorem [ t : clt ] , the family is uniformly asymptotically normal with normalizing matrix .the proof of this lemma is presented in the appendix .[ l : clt_condition2 ] under the conditions of theorem [ t : clt ] , there exists and a constant such that for every and compact the proof of this lemma is presented in the appendix .[ l : clt_condition3 ] under the conditions of theorem [ t : clt ] , and for any and compact there exists a function with the property such that the proof of this lemma is presented in the appendix .a particular model of interest is the first order langevin equation where is some potential function and the diffusion constant .we are particularly interested in the case where the potential function is composed of a large - scale smooth part and a fast oscillating part of smaller magnitude : thus the equation of interest can be written as dt+\sqrt{\epsilon}\sqrt{2d}dw_{t},\hspace{0.2cm}x_{0}^{\epsilon } = x_{0 } , \label{eq : langevinequation2}\ ] ] an example of such a potential is given in figure 1 .[ f : figure1 ] for the potential function drawn in figure [ f : figure1 ] , the unkown parameter corresponds to the curvature of around the equilibrium point .we are interested in the statistical estimation problem for the parameter in the case of regime , i.e. , when .in subsection [ ss : langevinequationmle ] we study the estimation problem for based on the methodology described in subsection [ s : homogenization ] . in subsection [ ss : langevinequationclt ] we study the corresponding central limit theorem . in subsection[ ss : langevinequationsimulation ] we present a simulation study . to connect to our notationlet , , and we consider regime 1 . in this casethere is an explicit formula for the invariant density , which is the gibbs distribution moreover , it is easy to see that the centering conditions [ a : assumption2 ] and [ a : assumption4 ] hold .notice that in this case the invariant measure does not depend neither on , nor on .we also define we have the following proposition . [ p : langevin ] under the conditions and notation of theorem [ t : likelihoodconvergence2 ] we have that the error term is given by in the case , and , we notice that the solution to the poisson equation ( [ eq : poissonequation ] ) is related to the solution of the cell problem , ( [ eq : cellproblem ] ) , via the relation hence , we have that this concludes the proof of the proposition .when we have a separable fluctuating part , i.e. , everything can be calculated explicitly .we summarize the results in the following corollary .this corollary also shows that in this case we can derive a consistent estimator in closed form .[ c : corol ] assume and consider regime .under the conditions and notation of theorem [ t : likelihoodconvergence2 ] , we have that the error term is given by where is the ( common ) period of the functions in the corresponding direction , \ ] ] and for moreover , we have that .furthermore , recall the mle .if for both and , then converges in probability to , i.e. 
, is a consistent estimator of .the separability assumption of gives us plugging that into ( [ eq : errorlangevingeneral ] ) we immediately get the simplified representation of the error term .the second claim follows from hlder inequality .indeed , it is easy to see that .therefore , we obtain . next , we maximize the limiting log - likelihood function . by straightforward substitution to ( [ eq : maintermregime1general ] )we see that we collect things together and write then , it is easy to see that this quantity is maximized for then , using theorem [ t : likelihoodconvergence2 ] we obtain the statement of the theorem . in this section ,we prove a central limit for the maximum likelihood estimator of the first order langevin equation ( [ eq : langevinequation2 ] ) .based on the modified log likelihood function ( i.e. , on ( [ eq : likelihood1 ] ) ) , the maximum likelihood estimator can be written some algebra manipulation in ( [ eq : modmle ] ) gives us we have the following theorem assume condition [ a : assumption1 ] .consider the first order langevin equation ( [ eq : langevinequation2 ] ) and assume regime .let be the maximum likelihood estimator of based on the modified log likelihood function .then , we have that in distribution under , the following central limit result holds moreover , assuming , we also have that in probability which is ( [ eq : langevinequationestimator ] ) .the first statement follows directly from the representation of the maximum likelihood estimator in ( [ eq : mle_langevin ] ) and the central limit theorem for stochastic integrals , see for example lemma 1.8 in chapter i of .the second statement is as follows .consider the unique , bounded and periodic in smooth solution of the auxiliary problem by applying it formula to , ( compare with ( [ eq : itoformula1 ] ) and ( [ eq : itoformula2 ] ) ) we get where the term converges to zero in probability as .therefore , by substituting we obtain that then , as in proposition [ p : langevin ] and corollary [ c : corol ] we can solve the auxiliary pde ( [ eq : poissonequationlangevin ] ) in closed form and obtain the statement of the theorem .we apply our results in the case when and . as we discussed in the previous sections , in this case we need to work with the modified likelihood , since . as we proved in proposition 6.1 due to the separability of , we can obtain a consistent estimator when properly normalized .we start by simulating the model .we use an euler discretization scheme for the multiscale diffusion as follows - \theta x_{t_{k}}^{\epsilon } \right\ } ( t_{k+1 } - t_{k } ) + \sqrt{\epsilon } \sqrt{2d } \left ( w_{t_{k+1 } } - w_{t_{k } } \right),\ ] ] where , is the number of simulated values . for the simulated data we choose and . from , we have that the euler scheme is bounded above by , where we denote by the discretization step . 
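to make the simulation study concrete, here is a self-contained sketch that generates a path of the two-scale langevin dynamics with the euler scheme just described and then evaluates the discretized estimator of the following paragraphs. the specific choices V(x) = x^2/2 (so that grad V(x) = x, consistent with the -theta*x term visible in the scheme), Q(y) = cos(y), the (epsilon/delta) prefactor on the fast drift and all numerical values are assumptions made for illustration only, and the closed-form expression for the estimator is reconstructed from the partially reproduced formulas, so its signs and prefactors should be checked against the original article. the step-size considerations that follow refer to exactly this discretization.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- illustrative model ingredients (assumptions, see lead-in) ---------------
theta0 = 2.0                       # true parameter
eps, delta = 0.1, 0.01             # small parameters (placeholder values)
D = 1.0                            # diffusion constant, sigma^2 = 2*D
Vp = lambda x: x                   # grad V for V(x) = x**2 / 2
Qp = lambda y: -np.sin(y)          # grad Q for Q(y) = cos(y)

# --- Euler scheme for the two-scale Langevin equation ------------------------
T, dt = 10.0, 1e-4
n = int(T / dt)
x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    drift = -((eps / delta) * Qp(x[k] / delta) + theta0 * Vp(x[k]))
    x[k + 1] = x[k] + drift * dt + np.sqrt(eps * 2.0 * D * dt) * rng.normal()

# --- discretized pseudo-MLE (reconstructed from the text, hedged) ------------
r = delta / eps
dx = np.diff(x)
xl = x[:-1]
num = -(np.sum(Vp(xl) * dx) * (1.0 + r**2) + r * np.sum(Qp(xl / delta) * Vp(xl)) * dt)
den = np.sum(Vp(xl) ** 2) * dt * (r**2 + 1.0)
theta_hat = num / den
print("raw pseudo-MLE:", theta_hat)
# A consistent estimator is then obtained by normalizing theta_hat with the
# factor built from the period averages of exp(+Q/D) and exp(-Q/D) appearing
# in corollary [c:corol]; that normalization step is omitted in this sketch.
```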
therefore ,if we want an error of order 0.001 , we need to choose the discretization step to be equal to .for the simulation procedure , we choose and .the maximum likelihood procedure consists of constructing the pseudo log - likelihood function ( 4.9 ) .more specifically , \\ & & - \theta\ ; \frac{1}{\sigma^{2 } } \left [ \int_{0}^{t } \nabla v(x_{s})dx_{s } \left ( 1 + \left(\frac{\delta}{\epsilon}\right)^{2 } \right ) + \frac{\delta}{\epsilon } \int_{0}^{t } \nabla q\left(\frac{x_{s}}{\delta}\right ) \nabla v(x_{s } ) ds \right ] + const,\end{aligned}\ ] ] where and is a quantity that is independent of the parameter .the maximizer of this quantity computes as }{- \int_{0}^{t } \left(\nabla v(x_{s})^{2 } ds\right ) \left ( \left(\frac{\delta}{\epsilon}\right)^{2 } + 1 \right ) } .\ ] ] although our model is continuous as well as our mle , in practice we obtain data in discrete time .therefore , we need to discretize our estimator in order to implement it .we directly discretize the stochastic integrals and we obtain }{- \sum_{i=1}^{n-1 } \left(\nabla v(x_{s_i})^{2 } ( s_{i+1}-s_i)\right ) \left ( \left(\frac{\delta}{\epsilon}\right)^{2 } + 1 \right ) } .\ ] ] the consistent estimator will be the normalized .the normalizing term equals , with and as defined in corollary ( [ c : corol ] ) .it is important to mention here that we do not simplify the stochastic integral using it s lemma .the reason is that in order to compute the estimator for we need to use just the observations we have available .if we use it , then the integral with respect to brownian motion that appears contains a process ( the brownian motion ) that is not observed . using simulated data ,we construct the mle for different values of the true parameter .the results are summarized in table [ t : table ] , along with the corresponding 68% and 95% confidence intervals .these are both empirical intervals meaning that we repeat the procedure ( simulation estimation ) several times ( m=100 ) .then , we obtain the monte carlo estimator for as the average of all estimators , as well as the monte carlo standard deviation that we use in the construction of the intervals .[ t : table ] .estimated values of and the corresponding empirical 68% and 95% confidence intervals for various true parameters . [ cols="^,^,^,^",options="header " , ] for , we plot ( figure [ f : figure2 ] ) the histogram of the empirical distribution that we obtain from the monte carlo procedure . for comparison , on the same graph we also plot the corresponding density curve of the theoretical asymptotic ( normal ) distribution with the appropriate variance as the one we computed in theorem 6.3 .[ f : figure2 ] for the simulated dataset and the corresponding density ( theoretical ) curve from theorem 6.3.,width=10,height=8 ]in this paper we studied the parameter estimation problem for diffusion processes with multiple scales and vanishing noise . under certain conditions , we derived consistent estimators and proved the related central limit theorems .the theoretical results are supported by a simulation study of the first order langevin equation in a rough potential .such results are useful when one is interested in parameter estimation of dynamical systems with more than one scales ( e.g. 
, in rough potentials ) perturbed by small noise .let us denote , where .we assume that belongs in a compact subset of , denoted by , and let as .we start by rewriting the normed likelihood ratio as follows -\frac{1}{2}\left(u_{\epsilon},u_{\epsilon}\right)\nonumber\\ & = & j^{\epsilon}_{1}(\theta_{\epsilon})+j^{\epsilon}_{2}(\theta_{\epsilon})+j^{\epsilon}_{3}(\theta_{\epsilon})+j^{\epsilon}_{4}.\nonumber\end{aligned}\ ] ] the last line of the previous computation is easily seen to hold by the following chain of identities which are applied for .the goal is to prove that , where is distributed as normal and as in probability uniformly in .this , will establish that the family is uniformly asymptotically normal with normalizing matrix , which then proves the lemma .moreover , due to averaging and the law of large numbers result theorem [ t : lln ] , the definition of the fisher information matrix implies that converges in distribution with respect to , uniformly in , to where is distributed as , as . then we can write \left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)\right\vert^{2}ds\nonumber\\ \leq\ ; & |i^{-1/2}(\theta_{\epsilon , u})u_{\epsilon}|^{2 } \sup_{\theta\in\tilde{\theta}}\sup_{|v|\leq c\sqrt{\epsilon } } \mathbb{e}\left|\int_{0}^{t}\left\vert\nabla_{\theta}c_{\theta+v}-\nabla_{\theta}c_{\theta}\right\vert^{2}_{\alpha}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\right|\nonumber\\ \leq\ ; & c \sup_{\theta\in\tilde{\theta}}\sup_{|v|\leq c\sqrt{\epsilon } } \mathbb{e}\left|\int_{0}^{t}\left\vert\nabla_{\theta}c_{\theta+v}-\nabla_{\theta}c_{\theta}\right\vert^{2}_{\alpha}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\right|\rightarrow 0 , \textrm { as } \epsilon\downarrow 0.\label{eq : j3_term1}\end{aligned}\ ] ] the last convergence is true due to the uniform continuity of in and tightness of . 
using it isometry , the last display implies that , it remains to consider the term .notice that standard averaging principle , the convergence of to as by theorem [ t : lln ] , and the continuous dependence of the involved functions on , imply that , \right|\rightarrow 0 , \textrm { as } \epsilon\downarrow 0 .\label{eq : j3_term2}\ ] ] by ( [ eq : j3_term1])-([eq : j3_term2 ] ) and the assumptions on the dependence on we obtain that the proof follows along the lines of lemma 2.3 in .we review it here for completeness and mention the required modifications in order to account for the extra component of averaging .let and define the interpolating point \ ] ] by an absolutely continuous change of measure we have where , we have defined .then , we write where thus , we obtain \ell\nonumber\\ & = \epsilon^{-m } ( 2m)^{-2 m } \int_{0}^{1}\mathbb{e}_{\theta_{2}}\left|\int_{0}^{t}\left<\nabla_{\theta}c_{\theta(\ell)}(\theta_{2}-\theta_{1}),\sigma dw_{s}\right>_{\alpha}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right ) \right|^{2m}d\ell\nonumber\\ & \leq \epsilon^{-m } c_{m , t } \int_{0}^{1}\mathbb{e}_{\theta_{2}}\left[\int_{0}^{t}\left<\nabla_{\theta}c_{\theta(\ell)}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right),(\theta_{2}-\theta_{1})\right>_{\alpha}^{2m}ds\right]d\ell\nonumber\\ & \leq \epsilon^{-m } c_{m , t } |\theta_{2}-\theta_{1}|^{2m}\sup_{\theta_{2},\theta\in\tilde{\theta}}\mathbb{e}_{\theta_{2}}\left[\int_{0}^{t}\left\vert\nabla_{\theta}c_{\theta}\right\vert_{\alpha}^{2m}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\right]\nonumber\\ & \leq \epsilon^{-m } \phi^{2m}(\epsilon,\theta ) c_{m , t } |u_{2}-u_{1}|^{2m}\sup_{\theta_{2},\theta\in\tilde{\theta}}\mathbb{e}_{\theta_{2}}\left[\int_{0}^{t}\left\vert\nabla_{\theta}c_{\theta}\right\vert_{\alpha}^{2m}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\right]\nonumber\\ & \leq i^{-m}(\theta ) c_{m , t } |u_{2}-u_{1}|^{2m}\sup_{\theta_{2},\theta\in\tilde{\theta}}\mathbb{e}_{\theta_{2}}\left[\int_{0}^{t}\left\vert\nabla_{\theta}c_{\theta}\right\vert_{\alpha}^{2m}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\right]\nonumber\end{aligned}\ ] ] and the result follows by the assumed uniform boundedness of . in the absence of multiple scales , this is lemma 2.4 in .here we provide the proof of the result with the additional component of multiple scales , which makes the analysis more involved .for the sake of concreteness we only present the proof for the case of regime .the required changes for the other regimes are minimal and are mentioned below at the appropriate place .recall that and set \ ] ] we can then write \nonumber\\ & \leq \left(\mathbb{e}_{\theta}e^{-p_1\cdot \frac{p - q}{2}\int_{0}^{t}\left\vert \delta c_{\theta}\right\vert_{\alpha}^{2}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds}\right)^{1/p_{1}}\times \nonumber\\ & \quad\times \left(\mathbb{e}_{\theta}\left[e^{pp_{2}\int_{0}^{t}\left<\delta c_{\theta},\sigma dw_{s}\right>_{\alpha}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)-\frac{qp_{2}}{2}\int_{0}^{t}\left\vert \delta c_{\theta}\right\vert_{\alpha}^{2}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds}\right]\right)^{1/p_{2}}\end{aligned}\ ] ] choosing now , we have that \leq 1\ ] ] setting , this implies so , the next step is to appropriately bound from above the term . 
at this point, we recall the definition of from ( [ eq : definition_q ] ) and we write define the operator and for , let satisfy the auxiliary pde comparing with the case without the multiple scales , the additional difficulty here is the presence of the fast oscillating component , .the consideration of the solution to this auxiliary pde , allows us to reduce the bound for the quantity at hand to a bound for a quantity that depends only on the slow component , .notice that is the operator for regime defined in definition [ def : threepossibleoperators ] with . for regimes 2 and 3 ,one would need to consider the solution to the pde governed by the corresponding operators from definition [ def : threepossibleoperators ] . since , fredholm alternative , theorem 3.3.4 of guarantees that there exists a unique , smooth , periodic in and bounded solution to the aforementioned auxiliary pde for .the boundedness of and the imposed conditions on also guarantee that is bounded uniformly in .let us apply it formula to with .it formula gives an expression similar to ( [ eq : itoformula1 ] ) and after some term rearrangement , we get for that = \int_{0}^{t}\mathcal{l}_{x^{\epsilon}_{s}}\phi\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\nonumber\\ & \quad= ( \delta^{2}/\epsilon)\left(\phi\left(x^{\epsilon}_{t},\frac{x^{\epsilon}_{t}}{\delta}\right)-\phi\left(x^{\epsilon}_{0},\frac{x^{\epsilon}_{0}}{\delta}\right)\right)\nonumber\\ & \quad -\int_{0}^{t}\left[\frac{\delta}{\epsilon}\left <c_{\theta},\nabla_{y}\phi\right>+ \frac{\delta^{2}}{\epsilon}\left < c_{\theta},\nabla_{x}\phi\right > + \frac{\delta^{2}}{2}\sigma\sigma^{t}:\nabla_{x}\nabla_{x}\phi+\delta\sigma\sigma^{t}:\nabla_{x}\nabla_{y}\phi\right]\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)ds\nonumber\\ & \quad - \frac{\delta}{\sqrt{\epsilon}}\int_{0}^{t}\left<\nabla_{y}\phi,\sigma dw_{s}\right>\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)- \frac{\delta^{2}}{\sqrt{\epsilon}}\int_{0}^{t}\left<\nabla_{x}\phi,\sigma dw_{s}\right>\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)\label{eq : pde_clt2}\end{aligned}\ ] ] due to the boundedness of the involved functions , the last display gives us the existence of a constant that may depend on ( but not on ) , such that \right|\leq c \left(1+\sup_{t\in[0,t]}|w_{t}|\right ) \label{eq : crucialtermtobound1}\end{aligned}\ ] ] these computations , allow us to continue the right hand side of ( [ eq : crucialtermtobound ] ) as follows \right]}\times\right.\nonumber\\ & \quad\qquad\left.\times e^{-\gamma\int_{0}^{t}\left(v , q_{\epsilon , v}^{1/2}\left(x^{\epsilon}_{s},\theta\right)\right)^{2}ds}\right\}\nonumber\\ & \leq \left(\mathbb{e}_{\theta}e^{-\gamma p_{3}\int_{0}^{t}\left(v , q_{\epsilon , v}^{1/2}\left(x^{\epsilon}_{s},\theta\right)\right)^{2}ds}\right)^{1/p_{3}}\times\nonumber\\ & \quad \times \left(\mathbb{e}_{\theta}e^{-\gamma q_{3}\left[\int_{0}^{t}\left[\left\vert \delta c_{\theta}\left(x^{\epsilon}_{s},\frac{x^{\epsilon}_{s}}{\delta}\right)\right\vert^{2}_{\alpha}-\left(v , q_{\epsilon , v}^{1/2}\left(x^{\epsilon}_{s},\theta\right)\right)^{2}\right]ds\right]}\right)^{1/q_{3}}\nonumber\\ & \leq \left(\mathbb{e}_{\theta}e^{-\gamma p_{3}\int_{0}^{t}\left(v , q_{\epsilon , v}^{1/2}\left(x^{\epsilon}_{s},\theta\right)\right)^{2}ds}\right)^{1/p_{3 } } \left(\mathbb{e}e^{\gamma c q_{3}\left(1+\sup_{t\in[0,t]}|w_{t}|\right)}\right)^{1/q_{3 } } \label{eq : crucialtermtobound2}\end{aligned}\ ] ] where , the first inequality in the last 
computation used hlder inequality with and the second inequality used ( [ eq : crucialtermtobound1 ] ) .so , we now need to focus on the term .define the vector valued function and notice that using the trivial inequality , applied with and we can write dy\end{aligned}\ ] ] hence , we obtain the bound \label{eq : boundforterm_0}\end{aligned}\ ] ] so , as we will have the assumed uniform boundedness of , the fact that is a density and the lower bound from ( [ eq : boundforterm2.3_0 ] ) mean that there exist constants that may depend on such that moreover , by cauchy - schwartz inequality , we also have that } \left\vert x^{\epsilon}_{t}-\bar{x}_{t}\right\vert^{2}\label{eq : boundforterm2.3_2a}\end{aligned}\ ] ] to derive the inequality before the last one , we used the lipschitz continuity in of the function , with a lipschitz constant that may depend on .to continue , we need to bound from above the quantity } \left\vert x^{\epsilon}_{t}-\bar{x}_{t}\right\vert^{2} ] can be treated . by considering the solution to an auxiliary pde problem analogous to ( [ eq : pde_clt ] ) with right hand side replaced by , we get ( similarly to ( [ eq : pde_clt2 ] ) ) that \right\vert\leq c_{6}\left(1+\frac{\delta}{\sqrt{\epsilon } } \sup_{s\in[0,t]}\left\vert w_{s}\right\vert\right)\ ] ] for some constant that may depend on . thus putting things together , ( [ eq : boundforterm2.3_2 ] ) takes the form }\left\vert w_{s}\right\vert^{2 } \right\ } \label{eq : boundforterm2.3_3}\end{aligned}\ ] ] and by grownwall inequality, we can conclude that there exists a constant , that may depend on , such that }\left\vert x^{\epsilon}_{t}-\bar{x}_{t}\right\vert & \leq c_{8}\sqrt{\epsilon+\frac{\delta^{2}}{\epsilon}}\sup_{t\in[0,t]}\left\vert w_{t}\right\vert \label{eq : boundforterm2.3_4a}\end{aligned}\ ] ] coming back to ( [ eq : boundforterm2.3_2a ] ) , we have obtained }\left\vert w_{t}\right\vert\label{eq : boundforterm2.3_4}\end{aligned}\ ] ] set .putting ( [ eq : boundforterm2.3_0 ] ) and ( [ eq : boundforterm2.3_4 ] ) together and recalling that , the bound ( [ eq : boundforterm_0 ] ) becomes }\left\vert w_{t}\right\vert}\right]\nonumber\\ & \leq e^{-\gamma p_{3}c_{2}\left\vert u \right\vert^{2 } } \left [ 1 + 4\gamma p_{3 } c_{9 } \left\vert u\right\vert \sqrt{8\pi t } e^{8\gamma^{2}\left(p_{3}c_{9}t\right)^{2 } \left\vert u\right\vert^{2}}\right ] , \label{eq : boundforterm_1}\end{aligned}\ ] ] where the last inequality used lemma 1.14 by kutoyants , .now , we have all the necessary ingredients in order to continue the bound of ( [ eq : crucialtermtobound ] ) . in particular , using ( [ eq : crucialtermtobound2 ] ) , ( [ eq : crucialtermtobound ] ) gives }|w_{t}|\right)}\right)^{(q - p^{2})/(pq_{3})}\label{eq : crucialtermtobound4}\end{aligned}\ ] ] choosing such that and using the inequality , we then obtain from ( [ eq : boundforterm_1 ] ) ^{(q - p^{2})/(pp_{3})}\nonumber\\ & \leq e^{-\frac{c_{2}}{2}\left\vert u \right\vert^{2 } + \frac{q - p^{2}}{p}4\gamma c_{9 } \left\vert u\right\vert \sqrt{8\pi t } } \label{eq : crucialtermtobound5}\end{aligned}\ ] ] so , ( [ eq : crucialtermtobound4 ] ) and ( [ eq : crucialtermtobound5 ] ) give }|w_{t}|\right)}\right)^{(q - p^{2})/(pq_{3})}\label{eq : crucialtermtobound6}\end{aligned}\ ] ] the right hand side of the last inequality defines our function , which certainly enjoys the property this concludes the proof of the lemma .y. ait - sahalia , p.a .mykland and l. 
zhang , ( 2005 ) , a tale of two time scales : determining integrated volatility with noise high - frequency data , _ journal of american statistical association _ , 100 , pp .13941411 .a. j. majda , c. franzke , and b. khouider , ( 2008 ) , an applied mathematics perspective on stochastic modelling for climate ._ philosophical transactions of the royal society a : mathematical , physical and engineering sciences _ , 366 ( 1875 ) , pp . 24272453 . | we study the problem of parameter estimation for stochastic differential equations with small noise and fast oscillating parameters . depending on how fast the intensity of the noise goes to zero relative to the homogenization parameter , we consider three different regimes . for each regime , we construct the maximum likelihood estimator and we study its consistency and asymptotic normality properties . a simulation study for the first order langevin equation with a two scale potential is also provided . maximum likelihood estimation for small noise multiscale diffusions konstantinos spiliopoulos ^1^ , alexandra chronopoulou ^2^ ^1^department of mathematics & statistics , boston university + 111 cummington street , boston ma 02215 , e - mail : kspiliop.bu.edu ^2^department of statistics and applied probability , university of california , santa barbara + santa barbara , ca 93106 - 3110 , e - mail : chronopoulou.ucsb.edu * keywords : * parameter estimation , central limit theorem , multiscale diffusions , dynamical systems , rough energy landscapes . * msc : * 62m05 , 62m86 , 60f05 , 60g99 * acknowledgement : * the authors would like to thank the anonymous reviewer for pointing out a gap in the proof of theorem 5.1 in the original article , as well as all comments that lead to a significant improvement of the article . k.s . was partially supported , during revisions of this article , by the national science foundation ( dms 1312124 ) . |
i would like to thank benoît and paolo for their very competent supervision of this master thesis . river landscapes exhibit many different forms in all climatic regions of the world . nevertheless , one can also observe common features which are often the product of specific water and sediment interaction ( see for example and references therein ) . the science studying the coupled water and sediment dynamics to explain the formation and alteration of river courses is called river morphodynamics , and it includes research on mainly longitudinal structures , like long sediment waves , as well as on more complex 2-dimensional structures , among them alternate and multiple bars ( see figure [ fig : bars ] ) . in the past , these research areas have been investigated using experimental setups ( see for bar experiments for example ) , numerical simulations ( see for example ) and theoretical approaches based on linear stability analysis ( , and ) . linear stability analysis is a concept that allows one to study the asymptotic fate ( ) of a linear or linearized system which is slightly perturbated away from a spatially homogeneous solution . the method was originally developed by and has been applied frequently to hydrodynamic topics since . for example , used linear stability analysis to show that the capacity of a river to develop alternate or multiple bars is closely linked to its aspect ratio ( river width divided by depth ) . this benchmark result is depicted in figure [ fig : cst ] and we will reproduce it later in the present work . + it is well - known though that , in addition to water and sediment transport , riparian vegetation can play a crucial role in river pattern development ( see ) . in particular , it is recognized that riparian vegetation affects river morphology through modification of the flow field , bank strength and erosion / sedimentation processes in the riverbed / floodplain ( ) . however , due to the very complex nature of the dynamic interactions between vegetation and sediment transport and flow , riparian vegetation evolution was often not taken into account explicitly . instead , it was added as a correcting factor for bed roughness and bank stability ( ) .
while the treatment of vegetation as a correction factor may be justified when looking at short timescales where riparian vegetation density does not change much , this is not the case for river pattern formation , which occurs over much longer timescales and where vegetation takes an active role in the process . for instance , dynamic interaction between riparian vegetation and flow and sediment is thought to be crucial in the formation of anabranching river patterns on vegetated bars and in ephemeral rivers in dry regions ( figure [ fig : anabranch ] a and b ) . additionally , we can find similar patterns on the inside of a meandering bend in large streams ( scroll bars , figure [ fig : anabranch ] c ) . + recently , researchers have added riparian vegetation dynamics to numerical morphodynamic models and included some of the feedback mechanisms that are thought to occur in nature . namely , took into account vegetation - induced impedance to sediment transport and increase in bank stability and modeled the interaction of bank - stabilizing vegetation and a meandering riverbed . furthermore , proposed an analytical morphodynamic model coupled with an equation for riverbed vegetation dynamics . but , until today , vegetation dynamics has never been included in a stability analysis of morphodynamic equations . in fact , several difficulties arise when trying to formulate a physical vegetation model suitable for stability analysis . for example , in modeling , the sediment - stabilizing effect of plant root systems is often taken into account as a threshold in the sediment transport function below which no erosion occurs . however , such a threshold possesses mathematical properties that are not suitable for a stability analysis . + [ figure caption : is the river s aspect ratio and is the dimensionless longitudinal wavenumber which characterizes the spatial periodicity of the bars ] + in the present work , we propose a minimal model for the evolution of riverbed vegetation density which takes into account only very basic mechanisms and is thus suitable for a stability analysis .
using this vegetation model and a standard morphodynamic framework ,we would like to explore the possibility of such a coupled morphodynamic - vegetation system ( ecomorphodynamic equations ) to explain the formation of anabranching patterns .we would like to know which are the determining variables and to what extent the ecomorphodynamic analysis differs from the state of the art river morphodynamics .hence , we perform an analytical linear stability analysis on the linearized set of ecomorphodynamic equations which describe a model river whose riverbed is colonized by plants .this river is assumed to be of constant width with inerodible banks , the riverbed consists of cohesionless , erodible material ( sand / gravel ) of uniform size and the river s sediment transport capacity is thought to always exceed the threshold above which sediment transport occurs .additionally we assume sediment transport to be mainly bedload .+ we begin by formulating an equation which describes the evolution of riverbed vegetation density ( section [ sec : veg ] ) and discuss the different terms and its validity .important mechanisms to be considered are vegetation growth , distribution by means of seeding and resprouting , and death through flow impact induced uprooting .this equation is then coupled with a standard 1-dimensional and 2-dimensional river morphodynamic framework ( sections [ sec:1d_gov ] and [ sec:2d_gov ] respectively ) which consists of depth - averaged fluid and sediment continuity as well as a formulation for momentum balance in the fluid .these systems are subsequently linearized and perturbated around a spatially homogeneous solution and the conditions for which the wavelike perturbations amplify are investigated using linear stability analysis .this is done for a 1d - framework to study instability towards long sediment waves in section [ sec:1d ] and for a 2d - framework to study the formation of alternate and multiple bars .the main focus in this work is on highlighting the fundamental role that vegetation dynamics can have in this process together with known mechanisms of sediment dynamics .stability analysis of morphodynamic equations generally does not include the active role of vegetation explicitly due to the complex nature of the interaction mechanisms .hereafter , we develop an analytic model for riverbed vegetation dynamics and discuss its validity for different conditions . for simplicity ,we model vegetation as rigid , non - submerged cylinders with constant radius and submerged height equal to water depth .we then call the vegetation density defined as number of plants per unit area of riverbed and we model its growth by the logistic term with carrying capacity and specific vegetation growth rate .furthermore , we assume that vegetation growth is stimulated by nearby existing vegetation by means of seeding and resprouting ( i.e. positive local feedback ) and we model it using the diffusion term with the streamwise vegetation diffusion constant and the streamwise coordinate .we finally want to quantify vegetation death caused by flow drag for which we only consider the direct uprooting effect of flow drag on non - submerged and rigid vegetation ( type i mechanism after ) . 
in this case , a fluid parcel which impacts on the vegetation is decelerated from mean stream velocity to zero .furthermore , the rate of fluid that impacts on the vegetation is also proportional to stream velocity while the vegetation cross - section per cubic meter of river is proportional to water depth and vegetation density .we therefore propose the vegetation uprooting term ( see also ) where is a proportionality constant , the water depth and the streamwise velocity .putting together equations ( [ eq : veg1 ] ) to ( [ eq : veg3 ] ) we get the rate of change of vegetation density as however , in real rivers flow is not constant throughout the year .typically , large parts of a river s cross - section are only flooded during a limited amount of time per year which allows vegetation to colonize these surfaces during non - flooded periods. therefore , equation ( [ eq : veg_const ] ) , except for certain special cases ( see figure [ fig : veg_paolo ] , where vegetation seems to grow while being completely submerged most of the time ) , is not really applicable for vegetation growth in natural streams since it considers all processes to happen simultaneously . in realityhowever , vegetation grows and seeds during the vegetation period ( which is part of the non - flooded period ) and is uprooted during the flooding period . to simplify our analysis , we assume constant and continuous flow and thus we have to integrate growth and seeding into the flooding period . in the following , we call the drought period without vegetation growth , the vegetation period and the duration of the flooding ( see figure [ fig : timescales ] for illustration where and have been separated for simplicity ) .the duration of a complete cycle ( for example a year or half a year depending on the specific conditions ) is then given by .if we assume that vegetation density does not vary much during a complete cycle i.e. 
where is the value of at the end of cycle i , then we can approximate the difference by the continuous time derivative .we write the change of after one cycle as \frac{\tilde{t}_v}{\tilde{t}_d+\tilde{t}_v+\tilde{t}_f}-\tilde{\alpha}_d\tilde{y}\tilde{u}^2\tilde{\phi}_i\frac{\tilde{t}_f}{\tilde{t}_d+\tilde{t}_v+\tilde{t}_f}.\ ] ] by approximating the finite differences by derivatives we get \frac{\tilde{t}_v}{\tilde{t}_d+\tilde{t}_v+\tilde{t}_f}-\tilde{\alpha}_d\tilde{y}\tilde{u}^2\tilde{\phi } \frac{\tilde{t}_f}{\tilde{t}_d+\tilde{t}_v+\tilde{t}_f}\ ] ] and since we assumed , and to be constant , we can integrate them into the proportionality constants to end up with where , and .we can see that merging together the different mechanisms results in a relative increase of the growth and diffusion constant with respect to the uprooting constant if .so even if in general the vegetation uprooting coefficient is much higher than the growth coefficient , this can be compensated by the small timescale ratio to get a regime where mutual feedback is possible .this is the case for example in the marshall river ( see hydrograph in figure [ fig : hydrograph ] ) and also for bar flooding in the thur river ( see for example ) .thus , the differential equation ( [ eq : veg_fin ] ) may also be valid in the case of non - constant flow if the modeling assumptions are met .+ we quickly want to discuss two of the most important modeling assumptions adopted above , namely : * vegetation density change during a cycle is small compared to its actual value * the only uprooting effect is due to direct flow drag on non - submerged rigid vegetation the first assumption can be assumed to be valid if one considers the case of well developed vegetation . the vegetation coverage is dense enough to not allow much more biomass to be produced and at the same time a large part of the vegetation is robust enough to outlive the flooding period .the second point refers to the fact that we only consider direct flow drag ( thus neglecting erosion which exposes the root system ) .additionally , we need rigid vegetation like small trees or bushes with mean vegetation height greater than water depth in order for our assumption to be valid . for non - rigid vegetation , the exponent of in the uprooting term ( equation [ eq : veg3 ] ) should be somewhere between 1 and 2 while in the case of completely submerged vegetation the surface impacted by flow drag would be reduced by a factor of and thus , would be replaced by in equation ( [ eq : veg3 ] ) .in this section we perform a linear stability analysis of the 1-dimensional ecomorphodynamic equations . 
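a direct way to get a feel for equation ( [ eq : veg_fin ] ) is to integrate it numerically. the sketch below advances the vegetation density with an explicit scheme on a periodic reach; all parameter values, and the frozen ( constant ) depth and velocity fields, are illustrative assumptions rather than calibrated quantities, and the coefficients are understood to already include the seasonal time-fraction rescaling discussed above.

```python
import numpy as np

def step_vegetation(phi, y, u, dx, dt, sigma_g, nu_d, alpha_d, phi_max=1.0):
    """One explicit time step of
        d(phi)/dt = sigma_g*phi*(1 - phi/phi_max) + nu_d*d2(phi)/dx2 - alpha_d*y*u**2*phi
    on a periodic streamwise grid; phi, y, u are arrays over that grid."""
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
    growth = sigma_g * phi * (1.0 - phi / phi_max)
    uproot = alpha_d * y * u**2 * phi
    return np.clip(phi + dt * (growth + nu_d * lap - uproot), 0.0, None)

if __name__ == "__main__":
    nx, L = 200, 100.0                        # grid points, reach length [m] (toy values)
    dx = L / nx
    x = np.arange(nx) * dx
    y = np.full(nx, 0.5)                      # frozen water depth [m]
    u = np.full(nx, 0.8)                      # frozen streamwise velocity [m/s]
    phi = 0.5 + 0.05 * np.cos(2 * np.pi * x / L)   # slightly perturbed uniform density
    nu_d = 1.0
    dt = 0.4 * dx**2 / nu_d                   # respect the explicit diffusion limit
    for _ in range(5000):
        phi = step_vegetation(phi, y, u, dx, dt,
                              sigma_g=0.02, nu_d=nu_d, alpha_d=0.03)
    print(phi.min(), phi.max())
```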
the 1d - framework is valid in case flow , bed and vegetation can be assumed to be homogeneous in the direction transverse to the flow .after the derivation of the dimensionless governing equations , linear stability is assessed .we first reproduce some well - known results ( and ) and then we go on to evaluate the effect that riverbed vegetation dynamics has on these results .figure [ fig : saintvenant ] depicts the model scheme adopted with the streamwise coordinate , the lateral ( normal ) coordinate ( not used in the 1d - analysis ) and the vertical coordinate .the riverbed of constant width is assumed to consist of sandy , non - cohesive material which causes friction and may be transported by the fully turbulent flow .furthermore , we consider the case of a straight channel ( see for example for curved channels ) with non - erodible banks .additionally , vegetation as described in section [ sec : veg ] is able to colonize the whole riverbed .then , assuming hydrostatic pressure distribution and the river width to be considerably larger than the flow depth , flow velocity may be depth - averaged and thus we get the well - known 1-dimensional de saint - venant momentum conservation law +\frac{\tilde{\tau}}{\tilde{y}}=0.\ ] ] where the first and second term are the local and convective acceleration respectively , the third term represents hydrostatic pressure distribution , term 4 is the streamwise slope of the river and term 5 is the bed friction term . recall that is the streamwise velocity and is the water depth , while is the bed elevation .we have to keep in mind that we are talking about long waves throughout our analysis ( pattern wavelength is larger than channel width ) in order for the depth - average as well as the 1d - formulation to make sense . as a closure relationship for bed friction , we choose the simple chezy equation and write where is the overall chezy coefficient .the overall chezy coefficient depends on both , the bed roughness and the roughness induced by vegetation . according to ,it can be expressed for non - submerged and rigid vegetation as where is the bed roughness which can be calculated by fixing manning coefficient , is the stokes drag coefficient and is the vegetation diameter .+ subsequently , flow continuity is formulated as thus neglecting flow diversion by vegetation and assuming that sediment density in the water is low , therefore omitting the sediment term .note that the flow diversion effect could easily be added but is left out here to keep the analysis simple . in order to account for sediment continuity, we then write the well - known 1d - exner equation , valid for non - cohesive sediment with uniform grain size as where is bed porosity and is sediment flux per unit width .we assume well - developed sediment transport ( always above the critical threshold ) , mainly in the form of bed load transport and therefore , as was done by , we adopt where is a parameter .this is an approximation of the original meyer - peter / m formula which states with the dimensionless shear stress and the critical dimensionless shear stress . omitting the threshold ( assuming sediment transport to be always above the threshold ) and knowing that is proportional to we get back our simplified power law . +finally , we model riverbed vegetation dynamics using as explained in section [ sec : veg ] . equations ( [ hans11 ] ) and equation ( [ hans13 ] )are conventionally called the de saint - venant s ( sv ) equations . 
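the two closure relations used above, the vegetation-modified flow resistance and the threshold-free bed-load law, translate into a few lines of code. since the extract does not reproduce the actual formula for the combined chezy coefficient, the sketch below uses a common baptist-type expression for emergent rigid cylinders as a stand-in assumption, and a simple power law q_s = beta * u**m for the simplified meyer-peter and müller closure, with the exponent m left as a parameter ( m = 3 corresponds to dropping the threshold and using tau* proportional to u**2 ).

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def chezy_with_vegetation(C_b, c_D, m_veg, d_veg, depth):
    """Combined Chezy coefficient for a bed colonised by non-submerged rigid
    cylinders (Baptist-type form, used here as a stand-in assumption).
    C_b: bed-only Chezy coefficient [m^0.5/s]; c_D: stem drag coefficient [-];
    m_veg: stems per unit bed area [1/m^2]; d_veg: stem diameter [m];
    depth: water depth [m]."""
    return 1.0 / np.sqrt(1.0 / C_b**2 + c_D * m_veg * d_veg * depth / (2.0 * G))

def bedload_flux(u, beta=1e-4, m=3.0):
    """Threshold-free power-law bed-load transport per unit width, q_s = beta*u**m.
    beta lumps the remaining constants of the simplified closure."""
    return beta * u**m

if __name__ == "__main__":
    C = chezy_with_vegetation(C_b=40.0, c_D=1.0, m_veg=5.0, d_veg=0.02, depth=0.5)
    print("Chezy with vegetation:", C, "  bare bed:", 40.0)
    print("bed-load flux at u = 1 m/s:", bedload_flux(1.0))
```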
if sediment dynamics is added , we speak of de saint - venant - exner equations ( sve ) or morphodynamic equations . since we added vegetation dynamics to sve , we name it the de saint - venant - exner - vegetation equations ( svev ) or the ecomorphodynamic equations . to perform a linear stability analysis , it is convenient to work with dimensionless quantities. therefore , to write equations ( [ hans11 ] ) and ( [ hans13 ] ) to ( [ hans15 ] ) in dimensionless form , we introduce the change of variables ( motivated by the approach of ) [ eq:1d_cha ] where is the normal water depth and is the velocity at normal water depth . using change of variables ( [ eq:1d_cha ] ) , we obtain ( arranged in a way to have the time derivative on the left - hand side ) [ eq:1d_dim ] -c_b\frac{u^2}{y}-c_v\phi u^2\\ \frac{\partial{y}}{\partial{t}}&=-y\frac{\partial{u}}{\partial{s}}-u\frac{\partial{y}}{\partial{s}}\\ \frac{\partial{\eta}}{\partial{t}}&=-\gamma u^2\frac{\partial{u}}{\partial{s}}\\ \frac{\partial{\phi}}{\partial{t}}&=\nu_g \phi(1-\phi)+\nu_d\frac{\partial{^2\phi}}{\partial{s^2}}-\nu_d\phi yu^2,\end{aligned}\ ] ] where , , , , , and . a linear stability analysis consists of studying the behavior of a linearized system when slightly perturbated away from a spatially homogeneous solution ( see ) . in the case of river morphology ,a common choice for a homogeneous solution consists of a river with flat bed and constant slope under constant , uniform flow conditions ( see section [ sec : veg ] for the generalization to non - constant flow ) .then , the reaction of the linearized system to small perturbations on every state variable is investigated whose physical meaning may be a variation in sediment supply or channel width for example ( . regardless of its shape , such a local perturbation can readily be interpreted as a wave packet and thus a velocity perturbation wave packet can be written as a fourier series with continuous wavenumber k where is the perturbation amplitude and the velocity perturbation . in a linear system , each sinusoidal component of the perturbation wave packet can then be treated separately to evaluate if there is growth towards periodic spatial patterns of the riverbed . in the following ,we first derive the homogeneous solution of ( [ eq:1d_dim ] ) and then linearize and perturbate the equations around the homogeneous solution .we begin with looking for spatially homogeneous solutions using normal flow conditions .so , , and ( where is the slope at normal flow conditions ) . using the dimensionless governing equations, we can find and as [ eq:1d_hom ] \\ \phi_0=&\frac{\nu_g-\nu_d}{\nu_g}.\end{aligned}\ ] ] note that the equations also allow a trivial solution with which corresponds to a riverbed without vegetation .this solution becomes the only physically relevant solution in case .since the aim of this work is to evaluate the influence of vegetation on river patterns , the solution with is not interesting .the non - zero dimensionless homogeneous solution can finally be summarized as with and as defined in ( [ eq:1d_hom ] ) .the linearization is done by introducing into the dimensionless equations ( [ eq:1d_dim ] ) the perturbated homogeneous solution with the perturbation parameter and the perturbation ansatz . 
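before moving on to the perturbation ansatz, a quick numerical aside on the homogeneous base state derived above: the non-trivial vegetation density of ( [ eq:1d_hom ] ) follows directly from the dimensionless vegetation equation evaluated at normal flow ( u = y = 1 ), and the helper below also flags when only the unvegetated solution remains. the parameter values in the example are arbitrary.

```python
def homogeneous_vegetation(nu_g, nu_d):
    """Spatially homogeneous vegetation density at normal flow (u0 = y0 = 1):
    nu_g*phi*(1 - phi) - nu_d*phi = 0  ->  phi0 = (nu_g - nu_d)/nu_g,
    which is only physical (positive) when nu_g > nu_d; otherwise the bare-bed
    equilibrium phi0 = 0 is the relevant homogeneous solution."""
    phi0 = (nu_g - nu_d) / nu_g
    return phi0 if phi0 > 0.0 else 0.0

print(homogeneous_vegetation(nu_g=0.05, nu_d=0.02))   # vegetated base state: 0.6
print(homogeneous_vegetation(nu_g=0.02, nu_d=0.05))   # bare bed: 0.0
```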
as we want to look for regular spatial patterns , we choose the perturbation ansatz as with the real dimensionless perturbation wavenumber and the perturbation vector .while we are used to deal with sinusoidal patterns of velocity , water depth and bed elevation this is less common for vegetation density .figure [ fig : phi0 ] depicts sinusoidal vegetation density patterns around a mean vegetation density of .we can see that this formulation only is valid if is larger than the vegetation perturbation amplitude .in fact , if this is not the case we get locally negative values for vegetation density which does not make sense physically .so we have to bear in mind that vegetation needs to be well - developed in order for our analysis to be valid .+ the cosine of equation [ eq:1d_ansatz ] can then be written as it can easily be seen that we get a complex conjugated system of equations when inserting the perturbation ansatz into ( [ eq:1d_dim ] ) .thus , one can write the perturbated homogeneous solution as where c.c . denotes the complex conjugate .note that the perturbation term of ( [ eq:1d_perhom ] ) for a given wavenumber is nothing else than one component of the wave packet introduced in ( [ eq : wavepacket ] ) .then by only keeping the terms we get [ eq:1d_perteq ] the system of equations ( [ eq:1d_perteq ] ) can then be written as where a is the following 4 x 4 matrix : equations ( [ eq:1d_system ] ) and operator ( [ eq:1d_matrix ] ) define a system of ordinary , homogeneous differential equations with constant coefficients which describes the initial ( linear ) temporal evolution of the initially perturbated system . to find general solutions of this system , we have to introduce the concept of a normal operator : an operator is normal if , where is the complex conjugate transpose of .if was a normal operator , the matrix eigenfunctions would form an orthogonal basis and we could write the general solution as where i is the rank of the matrix ( 4 in this case ) , are coefficients and are the complex eigenvalues of . in the limit of large t , this solution is dominated by the exponential with the largest temporal growth rate ( maximum of the real parts of ) and thus the solution decays to zero if the maximum growth rate is below zero and it diverges for a positive maximum growth rate .however , in the context of river morphology a is not a normal operator and therefore its eigenfunctions do not form an orthogonal basis .that is , transient growth occurs ( ) and ( [ eq:1d_solution ] ) is not generally valid anymore .however , asymptotically the exponential with the largest real part of the eigenvalues is still going to dominate and thus describes the behavior of the system . as in this workwe are only interested in the long - term behavior of perturbations , we thus can still state that the initially small perturbations will be amplified in the long - term linear regime if the real part of any is positive . 
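the decision rules that follow from the eigenvalue discussion above, stability, instability, and instability towards a finite-wavelength pattern, are easy to automate once the linearized operator is available. the entries of the 4 x 4 matrix a of ( [ eq:1d_matrix ] ) are not reproduced in this extract, so the sketch below takes the matrix-valued function `A_of_k` as an input and only demonstrates the scanning and classification logic on a deliberately artificial 2 x 2 stand-in; the argument of the text then continues with the case distinctions below.

```python
import numpy as np

def max_growth_rate(A_of_k, k):
    """Largest real part of the eigenvalues of A(k) for each wavenumber in k."""
    return np.array([np.linalg.eigvals(A_of_k(ki)).real.max() for ki in k])

def classify(A_of_k, k_max=20.0, n=400):
    """Scan wavenumbers and report whether the base state is stable, unstable
    with the fastest growth at a finite wavenumber (pattern-forming), or
    unstable with growth increasing towards large k."""
    k = np.linspace(1e-3, k_max, n)
    g = max_growth_rate(A_of_k, k)
    if g.max() <= 0.0:
        return "stable", None
    k_star = k[int(np.argmax(g))]
    finite = np.argmax(g) < n - 1 and g[-1] < g.max()
    return ("unstable, fastest mode at finite k" if finite
            else "unstable, growth maximal as k -> infinity"), k_star

if __name__ == "__main__":
    # artificial 2x2 stand-in, NOT the matrix of eq. (1d_matrix):
    demo = lambda k: np.array([[0.2 - 0.1 * (k - 3.0) ** 2, 1.0],
                               [0.0, -1.0 - k**2]])
    print(classify(demo))   # expects a finite-k instability near k = 3
```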
and if the largest growth rate occurs for a finite wavenumber , this mode would be amplified stronger than all other modes contained in the wave packet and thus would dominate after some time due to the exponential character of the growth rate .thus , we can retain the following important points : * the system is stable ( perturbation is not amplified ) with respect to a perturbation mode with wavenumber if * the system is unstable ( perturbation is amplified ) with respect to a perturbation mode with wavenumber if * the system is unstable towards regular spatial patterns if the highest growth rate occurs at finite wavenumber additionally , the phase velocity of a perturbation can be computed using the imaginary part of the eigenvalues as which gives information about the propagation of the perturbation : if then the perturbation propagates downstream and conversely if the perturbation propagates upstream . in this section ,the results of the stability analysis of matrix a , which was derived in section [ sec:1d_lin ] , are presented and interpreted .the eigenvalues are calculated numerically and plotted by mathematica for different parameter values while the pattern images are computed using matlab .additionally , in the simplest case of no vegetation and no sediment transport , the instability condition can be calculated analytically .the aim of the analysis is to find parameter regions where the fastest growing initial perturbation has a finite wavenumber and thus the system can evolve to a regular pattern upon perturbation . in the following two subsections ,we first repeat the calculations done by lanzoni et al ., 2006 for the cases of hydrodynamic equations ( sv ) and hydrodynamic equations coupled with sediment dynamics ( sve ) .then in [ sec:1d_sta_svv ] and [ sec:1d_sta_svev ] , we analyze the hydrodynamic equations coupled with vegetation dynamics ( svv ) and finally the hydrodynamic equations coupled with sediment and vegetation dynamics ( svev ) .+ .5 and phase velocity of sv equations as a function of wavenumber for ( orange ) , ( blue ) , ( green ) , ( black ) ; parameter values are and ,title="fig:",width=283 ] + .5 and phase velocity of sv equations as a function of wavenumber for ( orange ) , ( blue ) , ( green ) , ( black ) ; parameter values are and ,title="fig:",width=283 ] + the stability analysis of the de saint - venant equations only consists of analyzing a 2 x 2 matrix ( taking the upper left part of matrix a with ) which gives the following characteristic equation for the eigenvalues : solving this equation for , we get the condition , independently from and .figure [ fig_sv1 ] shows the temporal growth rate for different values of .for the growth rate is zero . if however , the growth rate increases asymptotically with increasing while for it is always negative.this means that perturbations are amplified only if and that the most unstable mode is the one where the wavenumber tends to infinity and so the wavelength is equal to zero . 
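all the growth - rate and phase - velocity curves discussed in this section follow the same numerical recipe : build the dispersion matrix for a given wavenumber , take its eigenvalues , and keep the one with the largest real part . the sketch below shows only that recipe ; the entries of the 2 x 2 example matrix are placeholders and not the actual coefficients of the linearized equations , and the sign convention for the phase velocity assumes perturbations proportional to exp( i k s + lambda t ) .

```python
import numpy as np

def leading_mode(A_of_k, ks):
    """largest temporal growth rate max Re(lambda) and the phase velocity
    -Im(lambda)/k of that leading mode, for every wavenumber in ks."""
    growth = np.empty(len(ks))
    phase = np.empty(len(ks))
    for i, k in enumerate(ks):
        lam = np.linalg.eigvals(A_of_k(k))
        lead = lam[np.argmax(lam.real)]
        growth[i] = lead.real
        phase[i] = -lead.imag / k
    return growth, phase

# placeholder dispersion matrix: these entries are NOT the coefficients of the
# text's matrix a; they only stand in for any A(k) built from the linearization
def A_placeholder(k, froude=1.5, cb=0.01):
    return np.array([[-2.0 * cb - 1j * k, -1j * k / froude**2],
                     [-1j * k,            -1j * k            ]], dtype=complex)

ks = np.linspace(0.01, 10.0, 500)
omega, c = leading_mode(A_placeholder, ks)
k_star = ks[np.argmax(omega)]  # a finite k_star with omega > 0 signals periodic patterns
print(k_star, omega.max(), c[np.argmax(omega)])
```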
according to this instability is linked to the formation of roll waves . but , in the linear regime no instability towards regular patterns with finite wavelength is possible in a river with fixed bed and without riverbed vegetation . + additionally , figure [ fig_sv2 ] shows that perturbations propagate downstream only if the flow is subcritical and in both directions if flow is supercritical . if sediment dynamics is added to the de saint - venant equations ( which means that the bed material may be transported by the flow ) , the eigenvalues of a 3 x 3 matrix have to be computed ( upper left part of a ) . it turns out that if the morphodynamic timescale is small compared to the hydrodynamic timescale , which is normally the case ( , see for some values ) , the first two ( hydrodynamic ) modes are essentially the same as in the previous paragraph . there is however a third mode ( called morphodynamic mode ) that appears because of sediment dynamics . the temporal growth rate and the phase velocity are depicted in figure [ fig_sve ] for the morphodynamic mode only ( scaled by ) . + the growth rate of the morphodynamic mode is below zero ( equal to zero at k=0 ) for all values of and , which means that the morphodynamic mode is ( as were the hydrodynamic modes ) not able to produce instability towards finite patterns . finally , as we can see from figure [ fig_sve4 ] the migration of the perturbations is downstream if the flow is subcritical and upstream if the flow is supercritical . all results of paragraphs [ sec:1d_sta_sv ] and [ sec:1d_sta_sve ] are in agreement with the findings of , which confirms the correctness of the stability analysis performed . .5 and phase velocity of the morphodynamic mode of sve equations as a function of wavenumber for ( orange ) , ( blue ) , ( green ) , ( black ) ; parameter values are , and ,title="fig:",width=283 ] .5 and phase velocity of the morphodynamic mode of sve equations as a function of wavenumber for ( orange ) , ( blue ) , ( green ) , ( black ) ; parameter values are , and ,title="fig:",width=283 ] the effect of taking sediment dynamics into account was shown in the previous section . now , we analyze the de saint - venant equations coupled with vegetation dynamics ( but with a fixed bed geometry ) . again , no analytical solution is available and thus we analyze numerically for different parameters . + .values for constant parameters of the analysis [ table:2d_sve ] and aspect ratio : no instability ( red ) , alternate bars ( light blue ) , multiple bars ( darker blues for increasing bar order ) ; for parameter values see table [ table:2d_sve],title="fig:",width=491 ] + it was also observed in nature and found using linear stability theory ( ) that the higher the aspect ratio of a river the higher the bar order ( number of bars in the transverse direction ) is . figure [ fig:2d_sve3 ] depicts parameter domains where alternate bar and multiple bar regimes respectively dominate ( based on the highest growth rate ) . the aspect ratio seems to be the decisive parameter in a reasonable range of froude numbers between 1 and 2 . however , the froude number for flooding in the marshall river is between 0.3 and 0.4 according to , and thus is also important to determine the bar regime of a river . and , once the froude number exceeds a certain maximum value , instability towards bar formation no longer exists . overall , we can say that results of earlier stability analyses of bar instability could be reproduced in this work . bar instability
triggered by 2-dimensional sediment dynamics is sensitive to a river s aspect ratio ( width to depth ratio ) and also to froude number for low values of . finally , it is worth noting that the term accounting for gravitational effects of a weak lateral slope ( second term of ( [ eq : weak_slope ] ) ) is crucial in order to reproduce this well - known result . was the first to propose this relation , which was later confirmed experimentally by , with both suggesting the parameter to be between 0.5 and 0.6 . for different froude numbers , the black dots mark the maximum of each curve ; parameters used : , , , , , , and table [ table_param ] + , title="fig:",width=491 ] + as was done in the analysis of the 1-dimensional equations , we want to analyze separately the effect of vegetation dynamics in order to better understand its potential contribution to pattern formation . thus , a fixed bed is assumed and the eigenvalues of equation ( [ eq:2d_matrix ] ) , removing row 4 and column 4 , are analyzed in the following . + first , we want to look at instability towards alternate bars ( m=1 ) and we can see in figure [ fig:2d_svv_f ] that , similarly to the 1d analysis , we can find the maximum growth rate to be at finite longitudinal wavenumber for a certain range of froude numbers , which means that instability towards finite patterns exists . yet , as explained before , in the 2d - case we are also interested in patterns with due to their similarity with channels in the marshall river . such longitudinally homogeneous patterns seem to occur at slightly higher froude numbers than patterns with finite , but not too high in order to still allow interaction between vegetation growth and mortality . note that for the moment longitudinal as well as lateral vegetation diffusion ( seeding and resprouting ) are set to zero . + and aspect ratio : the color code indicates relative growth rate and a value of ( red ) means that no patterns exist ; the black line shows the maximum growth rate for given and thus indicates the value of the dominating longitudinal wavenumber ; parameter values are , , , , , , and values indicated in table [ table_param],title="fig:",width=491 ] + and aspect ratio : the color code indicates relative growth rate and a value of ( red ) means that no patterns exist ; the black line shows the maximum growth rate for given and thus indicates the value of the dominating longitudinal wavenumber ; parameter values are , , , , , , and values indicated in table [ table_param],title="fig:",width=491 ] + so at first glance , vegetation dynamics behaves quite similarly in the 2d model and in the 1d one .
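the bar - regime maps discussed here ( no instability , alternate bars , multiple bars ) amount to a simple bookkeeping step : for every lateral mode m , maximize the growth rate over the longitudinal wavenumber and keep the mode that grows fastest overall . a minimal sketch of that classification is given below ; the growth - rate function in the example is a toy surface , not the model 's dispersion relation .

```python
import numpy as np

def dominant_bar_order(growth_rate, ks, ms):
    """classify one parameter point: return (best_m, best_k) of the fastest
    growing mode, or None if no (m, k) yields a positive growth rate;
    growth_rate(k, m) must return max Re(lambda) of the dispersion matrix."""
    best = None
    for m in ms:
        rates = np.array([growth_rate(k, m) for k in ks])
        i = int(np.argmax(rates))
        if rates[i] > 0 and (best is None or rates[i] > best[0]):
            best = (rates[i], m, ks[i])
    return None if best is None else (best[1], best[2])

# purely illustrative growth-rate surface (not the model's): it peaks at a
# finite k and its preferred bar order grows with an "aspect ratio" beta
def toy_growth(k, m, beta=15.0):
    return np.exp(-(k - 0.1 * beta) ** 2) - 0.5 * (m - beta / 10.0) ** 2 - 0.2

ks = np.linspace(0.05, 10.0, 400)
print(dominant_bar_order(toy_growth, ks, ms=range(1, 6)))
```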
to compare the instability created by vegetation dynamics to the one induced by sediment dynamics , we can have a look at figure [ fig:2d_svv1 ] . surprisingly , the pattern domain features some characteristics that are quite similar to those of the domain depicted in figure [ fig:2d_sve1 ] : a minimum value is required for the aspect ratio and the dominating longitudinal wavenumber increases with increasing aspect ratio . however , as visible in both figures [ fig:2d_svv_f ] and [ fig:2d_svv1 ] , the maximum growth rate tends to occur at considerably higher for vegetated rivers , which could be thought to be physically unrealistic ( too short a pattern wavelength could undermine the hypothesis of shallow water equations ) . one has to bear in mind though that in the 2d model length scales are normalized with respect to half - river - width and not normal water depth . thus , if we take , and for example we get , which is much smaller than river width but still in a reasonable order of magnitude . by tuning the parameters , we can quite easily get wavelengths that make more sense . for instance , if we set the vegetation diffusion rate we get dimensionless longitudinal wavenumbers on the order of 3 to 4 for . this leads to a physical longitudinal wavelength between 30 and 40 meters , which is close to the actual river width and thus much more realistic ( compare figure [ fig:2d_svv1_d ] to figure [ fig:2d_svv1 ] to see the influence of vegetation diffusion on the dominating wavenumber ) . we conclude that the inclusion of vegetation diffusion contributes to a more physically realistic result and we thus keep this value constant in the following analyses , keeping in mind though that it is not an absolutely indispensable part of the pattern producing mechanism . + and aspect ratio : no instability ( red ) , alternate bars ( light blue ) , multiple bars ( darker blues ) do not occur ; parameter values are , , , , and values indicated in table [ table_param ] + , title="fig:",width=491 ] + and aspect ratio : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , and values indicated in table [ table_param ] + , title="fig:",width=491 ] + and aspect ratio : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , and values indicated in table [ table_param ] + , title="fig:",width=491 ] + once we have seen the similarities between the alternate bar inducing mechanisms of sediment and vegetation dynamics , we would like to know if that is still true for the formation of multiple bars . thus , we repeat the multiple bar analysis of figure [ fig:2d_sve3 ] including vegetation dynamics instead of sediment dynamics .
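before looking at the multiple bar results , it is worth making explicit the wavelength conversion used in the previous paragraph : with lengths scaled by the half river width , a dimensionless longitudinal wavenumber k corresponds to a physical wavelength of 2 pi B*/k . the half - width in the snippet below is a hypothetical value , for illustration only .

```python
import math

def longitudinal_wavelength(k_dimless, half_width_m):
    """physical wavelength for a dimensionless longitudinal wavenumber k,
    assuming lengths are normalized by the half river width B*."""
    return 2.0 * math.pi * half_width_m / k_dimless

# the half-width below is a hypothetical value, for illustration only
for k in (1.0, 3.0, 4.0, 10.0):
    print(k, round(longitudinal_wavelength(k, 10.0), 1), "m")
```

with this conversion in mind , we return to the multiple bar analysis .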
the result can be seen in figure [ fig:2d_svv3 ] : as expected , only a small range of froude numbers allows pattern formation which is due to vegetation growth balance ( as explained in section [ sec:1d_sta_svv ] ) .surprisingly though , as opposed to the results of sediment dynamics , alternate bars grow always faster than multiple bars in the linear regime .this means that a fixed riverbed that is under the influence of vegetation dynamics only does not tolerate instability towards multiple bars .however , a number of vegetated patterns that occur in nature exhibit multiple bars ( up to 10 for the marshall river , even more in the case of rills on fluvial bars , see figure [ fig : anabranch ] ) .this apparent contradiction of the model and reality could still be due to the fact that sediment transport , whose influence will be analyzed in the next section , was not considered until here .+ figure [ fig:2d_svv3 ] only shows which kind of bars are occurring for a certain parameter configuration , but it does not give any information about the dominant longitudinal wavenumber .we could also wonder if in reality there is no instability towards multiple bars at all since we only can see the alternate bar domain in the former figure . to answer these questions , we can have a look at figures [ fig:2d_svv3_1 ] and [ fig:2d_svv3_4 ] which show the instability domain and most unstable longitudinal wavenumber for the case of alternate bars ( m=1 ) and multiple bars ( m=4 ) respectively .we can see that instability towards multiple bars does indeed exist but its growth rate being always smaller than the growth rate of alternate bars we can not perceive it in figure [ fig:2d_svv3 ] .an interesting feature that is visible on both figures is that about half the domain seems to have the dominating longitudinal wavenumber equal to 0 which means that riverbed vegetation ( and also flow and depth ) are longitudinally homogeneous in this domain .+ we then want to have a closer look at the vegetation growth balance which at first seems to be quite similar than in the 1-dimensional model .indeed , the versus plot in figure [ fig:2d_svv_phi_old ] strongly resembles what we have seen before for vegetation dynamics in a 1d river . to the right ( in orange ) , there is a domain where no physically possible solutions can exist while in the middle there is a domain ( blue ) where vegetation growth and death through uprooting are balanced to allow formation of vegetation patterns at finite longitudinal wavenumber .the only major difference lies in the fact that the domain with dominating longitudinal wavenumber equal to zero ( green ) may be interpreted as a pattern forming domain due to the lateral wavenumber being finite ( as explained before ) .this domain with instability towards longitudinally homogeneous patterns ( ) is slightly wider than the corresponding one we saw in the 1d - analysis ( i.e. 
small red band in figure [ fig_pattern_old ] ) which is due to the diffusion coefficients and being put to a non - zero value in the 2d - analysis .the fact that in the 2d - analysis the pattern domain directly borders on the domain with non - physical solution means that this domain boundary can be given analytically using the condition , which yields : and vegetation carrying capacity : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , and values indicated in table [ table_param],title="fig:",width=491 ] + we can also understand now why in the 1d analysis increasing the vegetation diffusion coefficient lead to a decreasing pattern domain ( see figure [ fig_pattern_d ] ) .in fact , increasing the diffusion coefficient increases the part of the domain with the dominant longitudinal wavenumber equal to zero .but , in the 1d - context , a longitudinal wavenumber equal to zero means that no patterns exist since no lateral variability is possible . in contrast to figure [ fig_pattern_d ] , if we plotted versus the diffusion coefficient for the 2d - analysis , we would just get two straight boundaries at constant froude number .+ equation ( [ eq : froude_boundary ] ) also allows us to calculate the boundary at the higher froude number of figures [ fig:2d_svv3_1 ] and [ fig:2d_svv3_4 ] ( boundary is independent of bar order ) : furthermore , figure [ fig:2d_svv_phi_old ] shows that longitudinally homogeneous patterns ( vegetated , longitudinal channels ) only occur at very low relative vegetation density .we write relative density because is normalized using which means that in case is large the real vegetation density does not necessarily have to be small .figure [ fig:2d_svv_phi_c ] shows the dominating longitudinal wavenumber in the space and we can see that the same conclusions regarding vegetation balance are true than for the 1d - analysis : the largest longitudinal wavenumber occurs if growth and mortality through uprooting are well balanced , larger occur for larger values of froude number and vegetation carrying capacity . 
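the analytic boundary mentioned above follows from requiring the homogeneous vegetation density to stay positive ; once the dependence of phi0 on the froude number is specified , the limiting froude number is a one - dimensional root - finding problem . the sketch below uses plain bisection together with a hypothetical phi0(f) , since the explicit expression of eq. ( [ eq : froude_boundary ] ) is not reproduced here .

```python
def froude_boundary(phi0_of_f, lo=0.01, hi=5.0, tol=1e-8):
    """bisection for the froude number at which the homogeneous vegetation
    density phi0 crosses zero, i.e. the analytic boundary of the pattern
    domain; phi0_of_f must be supplied by the user."""
    if phi0_of_f(lo) * phi0_of_f(hi) > 0:
        return None                       # no sign change in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi0_of_f(lo) * phi0_of_f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical stand-in: uprooting growing quadratically with the froude number
print(froude_boundary(lambda f: 1.0 - (f / 0.8) ** 2))
```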
concluding the analysis of the vegetation growth balance, we can say that , as in the 1d model , the pattern domain seems to be simply connected ( one domain without holes ) and it continues to open up as goes to infinity .instability towards alternate bars could be detected , but not towards multiple bars .+ and aspect ratio : the color code indicates relative growth rate and a value of ( red ) means that no patterns exist ; the black line shows the maximum growth rate for given and thus indicates the value of the dominating longitudinal wavenumber ; parameter values are , , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve],title="fig:",width=491 ] + and aspect ratio : the color code indicates relative growth rate and a value of ( red ) means that no patterns exist ; the black line shows the maximum growth rate for given and thus indicates the value of the dominating longitudinal wavenumber ; parameter values are , , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve],title="fig:",width=491 ] + and aspect ratio : the color code indicates relative growth rate and a value of ( red ) means that no patterns exist ; the black line shows the maximum growth rate for given and thus indicates the value of the dominating longitudinal wavenumber ; parameter values are , , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve],title="fig:",width=491 ] + after having studied separately the effects on 2-dimensional river pattern formation of sediment dynamics and vegetation dynamics , we finally move on to the instability analysis of the complete 2d - model which includes sediment and vegetation dynamics and thus represents a river with movable bed and vegetation coverage .we saw before that sediment dynamics as well as vegetation dynamics are able to induce pattern formation in certain domains of the parameter space .we now want to know which of these effects remain or whether they even combine to form something not seen in the analysis of either sediment or vegetation dynamics alone .+ we start with looking at figure [ fig:2d_svev1 ] ( ) which indeed shows positive growth rates for a range of aspect ratio .clearly , this pattern domain seems to be a superposition of figures [ fig:2d_sve1 ] ( note that this figure is with instead of ) and [ fig:2d_svv1 ] with sediment influenced positive growth rates to the left and vegetation influenced ones to the right .both parts of the pattern domain have a lower boundary ( minimum aspect ratio ) , but the domain allegedly created by sediment dynamics has positive growth rates at lower longitudinal wavenumbers ( higher wavelenghts ) than vegetation dynamics .figure [ fig:2d_svev1_4 ] shows the same phenomena for multiple bars ( m=4 ) . in both figures, we can see that for these parameters ( see captions ) sediment dynamics does determine the dominating longitudinal wavenumber ( black line ) for lower values of aspect ratio while vegetation dynamics is dominant at higher aspect ratios ( around 10 for alternate bars and around 40 for multiple bars in figure [ fig:2d_svev1_4 ] ) . 
and aspect ratio : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve ] + , title="fig:",width=491 ] + and aspect ratio : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve ] + , title="fig:",width=491 ] + then , if we take a much lower value for ( along with a lower vegetation carrying capacity of ) we get a completely different picture which is shown in figure [ fig:2d_svev1_f01 ] : all of a sudden , the pattern domain completely changes and we have only one domain that occurs with a lower and upper limit for the aspect ratio as well as a limited domain for longitudinal wavenumber that .figure [ fig:2d_svev3 ] depicts the same situation from another angle , namely in the - space .we can identify one single parameter domain leading to instability including a lower limit for a certain value for .however , if we reduce vegetation carrying capacity ( thus decreasing the influence of vegetation ) , we can see that two different instability domains appear in figure [ fig:2d_svev3_phi ] which was not the case in the 1d analysis .not surprisingly though , the larger domain to the right resembles strongly the vegetation induced domain already seen in section [ sec:2d_sta_svv ] which also means that the domain to the left should probably be due to sediment dynamics . comparing figures [ fig:2d_svev3 ] and [ fig:2d_svev3_phi ], we can also see that the domain at larger froude numbers decreases and is slightly more to the left in the latter figure .this is another hint that the right domain comes from vegetation dynamics : when decreasing vegetation carrying capacity , the froude number has to decrease as well otherwise the river s uprooting capacity would overwhelm vegetation growth .+ and aspect ratio : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve ] + , title="fig:",width=491 ] + next , we would like to know what happens to instability towards multiple bars . 
in the previous sections , we showed that instability towards multiple bars exists for flow dynamics coupled with sediment dynamics but not for flow dynamics coupled with vegetation dynamics . figure [ fig:2d_svev3_phi_4 ] shows the domains and dominating longitudinal wavenumber for multiple bars ( m=4 ) . we can see that both domains are slightly shifted upwards ( towards positive ) , which was already seen before in the analysis with sediment dynamics . moreover , higher order multiple bars develop higher longitudinal wavenumbers and thus shorter longitudinal wavelengths than alternate bars . + one important question remains though : which bar order will dominate in parameter domains where alternate bars and several orders of multiple bars can potentially exist ? figure [ fig:2d_svev3_mult ] answers this question partially by showing that in the case of a movable bed with vegetation the domain to the right is always unstable towards alternate bars . sediment dynamics induced patterns are not visible due to vegetation processes completely dominating river bed dynamics . figure [ fig:2d_svev3_mult_phi10 ] shows what happens in not very highly vegetated riverbeds ( ) . as in the case of alternate bars ( figure [ fig:2d_svev3_phi ] ) , there is sediment induced instability to the left , but the domain to the right seems to contain both sediment induced instability ( towards higher order of multiple bars with increasing ) and vegetation induced instability to the very right . this means that instability towards multiple bars in vegetated rivers is indeed possible , but only at rather low froude numbers ( either at about f=0.2 - 0.3 or at f=0.6 - 0.7 in this case ) . yet , as can be seen in figures [ fig:2d_svev3 ] to [ fig:2d_svev3_phi_4 ] , instability towards patterns with very low longitudinal wavenumbers occurs only in the domain at higher froude number . at very low froude numbers , only rather high wavenumbers are reached in the asymptotic limit and thus longitudinal channels ( which are characterized by close to zero ) do not occur .
since the froude number for flooding events in the marshall river is rather low ( 0.3 - 0.4 , ), our results do not predict channels for this river which is contrary to reality .however , for not very highly vegetated riverbeds ( in figure [ fig:2d_svev3_mult_phi10 ] ) multiple bars may occur at froude numbers up to about 0.7 .in such a parameter configuration , we can thus have instability towards multiple bars for the marshall river , but at rather high longitudinal wavenumbers , thus resembling more a braiding pattern than a channeled riverbed .+ and aspect ratio : no instability ( red ) , alternate bars ( light blue ) , multiple bars ( darker blues for increasing bar order ) ; parameter values , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve ] + , title="fig:",width=491 ] + and aspect ratio : no instability ( red ) , alternate bars ( light blue ) , multiple bars ( darker blues for increasing bar order ) ; parameter values , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve ] + , title="fig:",width=491 ] + interestingly , figure [ fig:2d_svev3_phi ] indicates that with decreasing vegetation carrying capacity the two parts of the domain move closer together which implies that they eventually could merge once falls below a certain threshold .we want to have a closer look at this by plotting against froude number in figure [ fig:2d_svev_phi_c ] .although it is difficult to separate sediment induced instability from vegetation induced , this can be done when the current figure is compared to figure [ fig:2d_svv_phi_c ] .the sediment domain consists of the thickening of the domain to the very left as well as the thin greenish domain that joins the vegetation induced domain at the left .then , it can be seen that the domains actually merge when is low enough .this means that although the pattern domain as a whole is not simply connected anymore ( there are holes in the domain ) it is still is connected .+ to conclude , we again plot vegetation carrying capacity against in figure [ fig:2d_svev_phi_old ] and include contour lines for dimensionless homogeneous vegetation density ( in black ) as well as physical homogeneous vegetation density ( plants per , in yellow ) . as explained earlier , we need well - developed vegetation ( meaning well above zero ) in order to not have negative vegetation density because of the sinusoidal oscillations .this means that the vegetation density wave amplitude always has to be smaller than the actual vegetation density .we do nt have a way to know the wave amplitude , but still at least we have to wonder whether the model is valid for the part of the pattern domain close to the line .this would mean that the part of the domain where the dominant could be actually non - physical due to the assumptions not met .anyway , such patterns with dominant longitudinal wavenumber equal to zero would be alternate bars ( remember that alternate bars always grow faster than multiple bars in this region of pattern domain ) .this would result in an asymmetric channel where either the left or the right side would be filled with sediment while water is flowing on the other side .thus , what makes actually physically sense in figure [ fig:2d_svev_phi_c ] is instability towards alternate bars with finite at higher froude numbers and instability towards multiple bars with rather high at lower froude numbers .+ we now quickly want to give an estimate for a longitudinal wavenumber of multiple bars . 
looking at figure [ fig:2d_svev3_phi_4 ] for example , we can see that a typical longitudinal wavenumber is for m=4 in the region where these multiple bars dominate . this yields for which is not completely unreasonable as an order of magnitude , but too low in comparison with a river width of . thus , the parameter values of the model would need to be checked and investigated in order to get a more reasonable result . + and vegetation carrying capacity : the color code indicates the value of the most unstable longitudinal wavenumber , negative numbers ( red ) mean no instability ; parameter values are , , , , , , and values indicated in tables [ table_param ] and [ table:2d_sve],title="fig:",width=491 ] + finally , we want to find out whether there is evidence that the pattern domain at low froude numbers might be linked to actual vegetation density in the river . we can see in figure [ fig:2d_svev_phi_old ] that the domain to the left can best be characterized by a very high ( black lines , see figure [ fig:2d_svv_phi_old ] for values ) . this is somewhat surprising , since this domain is thought to be governed by sediment dynamics . but then again , a very high relative ( dimensionless ) vegetation density does not necessarily mean a lot of variation . so , one could describe this domain as representative of rivers with stable vegetation density close to carrying capacity . the vegetation would not be influenced much by flow due to low uprooting capacity ( low froude number ) and sediment dynamics would thus be governing the river s instability mechanisms in this region of the parameter space , as we suspected in the beginning . we reproduced known results of river instability towards alternate and multiple bars using a 2-dimensional de saint - venant - exner framework . as expected , the higher a river s width - to - depth ratio ( aspect ratio ) , the more bars a river tends to develop laterally , leading to the formation of multiple bars . the froude number does have a crucial role at low f , and no patterns exist above roughly .
when analyzing the 2d - de saint - venant equation combined with vegetation dynamics , we find , similarly to the 1-dimensional model , that there exists a domain where vegetation growth and mortality by means of uprooting compete and thus instability towards finite patterns prevails .however , a minimum value for the aspect ratio is required to induce such instability and only instability towards alternate bars occurs ( the exponential growth rate of alternate bars in the linear regime always exceeds the growth rate of multiple bars of any order ) .the froude number , which is directly proportional to stream velocity , is very important to balance vegetation dynamics since a higher velocity increases the river s uprooting capacity ( see equation ( [ eq : veg3 ] ) ) .+ when analyzing the full model including sediment as well as vegetation dynamics , a pattern domain of essentially two parts is detected .one part occurs at low froude numbers and high dimensionless vegetation density ( independently of actual vegetation carrying capacity ) and mainly possesses features of sediment transport induced instability : independence of froude number , multiple bar order increases with increasing aspect ratio and higher dimensionless wavenumber for higher aspect ratios .the other part stems from vegetation growth balance and inherits equally its attributes : domain highly dependent on froude number , instability towards alternate bars with rather low longitudinal wavenumbers at low vegetation density , but no instability towards multiple bars .the two parts of the pattern domain are separated for higher values of vegetation carrying capacity and linked when falls below a certain threshold ( which depends on the other vegetation parameters ) .the goal of this work was to shed light on the influence of riparian vegetation on formation of morphological river patterns .we thus performed a linear stability analysis on the 1d and 2d ecomorphodynamic equations which include analytical models for flow , sediment as well as vegetation dynamics .the vegetation model was kept very simple in order to be suitable for a stability analysis and it included terms for vegetation growth , diffusion through seeding / resprouting and mortality by means of uprooting caused by flow shear . at first , this equation was developed for rivers with constant flow , but was shown to also apply ( under certain conditions ) to variable flow and even ephemeral rivers .+ our analysis of the 1d model showed that vegetated rivers indeed exhibit instability towards longitudinal sediment waves due to the competitive interaction between vegetation growth and mortality .then , instability towards river patterns with lateral structure ( bars ) was assessed using the 2d ecomorphodynamic equations and it was discovered that two different kinds of instabilities occur .instability at lower froude numbers is mainly driven by sediment dynamics and leads to formation of alternate and multiple bars with the bar order increasing with increasing river width - to - depth ratio ( aspect ratio ) . 
at higher froude numbers ,only instability towards alternate bars was detected , independently of the aspect ratio .+ we also looked at the value of the most unstable longitudinal pattern wavenumber .generally , it was found that the values identified were reasonable ( corresponding wavelength has the same order of magnitude than river width ) although sometimes at the higher limit of what is allowed by the model assumptions .however , we were not able to identify instability towards longitudinally infinite multiple channels as it occurs in some reaches of the marshall river ( figure [ fig : anabranch ] b ) .actually , the longitudinal wavenumbers found for multiple bar instability at low froude numbers were too high to form such channels .other parameter configurations or more detailed modeling would probably be required to match reality in this case .+ in general , experimental verification of the equation adopted and research on the parameter values would be needed to adjust our purely theoretical model to reality .in addition , flume experiments could also help to quantify other aspects of vegetation and sediment transport interaction that we did not take into account .for instance , we only model the effect of vegetation on sediment transport indirectly by increasing the river s bed roughness . yet , other processes like scouring that increases sediment ablation around plants ( see for a scouring model around bridge piers ) and riverbed stabilization by plant s root systems are probably important as well but too difficult to model analytically at this stage .further improvement of the vegetation model could include finding expressions for vegetation uprooting by gradual exposure of plants root system ( type ii mechanism of ) and flow diversion produced mainly by rigid vegetation .+ finally , since we were dealing with a non - normal operator in our stability analysis , transient growth of the system can occur which was not considered in the present work .therefore , as was done by for the morphodynamic equations , a non - modal analysis of our linear ecomorphodynamic operator is conceivable in the future to evaluate the importance of such transient growths .the reason is that sometimes this transient behavior can actually be more relevant in reality than the asymptotic fate depending on the timescale of interest .moreover , in addition to performing a stability analysis which only takes into account growth or decay at the linear level , we could extend our research by adding a non - linear numerical simulation of the initial perturbations .in fact , as the perturbations amplify non - linearities of the system may become important and eventually dominate , thus determining the asymptotic fate of the system . in any case , all these possibilities of improvement of current modeling of interaction of riparian vegetation and river morphology show that we are still barely scratching the surface of this complex subject .baptist m. , babovic v. , rodriguez uthuruburu j. , keijer m. , uittenbogaard r. e. , mynett a. , verwey a. 2007 . on inducing equations for vegetation resistance ._ journal hydr .45(4 ) : 435 - 450 .blondeaux p. , seminara g. 1985 .a unified bar - bend theory of river meanders ._ j. fluid mech .449 - 470 .callander r.a .instability and river channels ._ j. fluid .36 , part 3 , pp .465 - 480 .camporeale c. , perucca e. , ridolfi l. , gurnell a. m. 
2013 .modeling the interactions between river morphodynamics and riparian vegetation ._ reviews of geophysics _ 51:1 - 36 .camporeale c. , ridolfi l. 2009 .nonnormality and transient behavior of the de saint - venant - exner equations ._ water resources research _ vol.45 , w08418 .chiodi f. , andreotti b. , claudin p. 2012 .the bar instability revisited ._ j. fluid ._ , under consideration colombini m. , seminara g. , tubino m. 1987 .finite - amplitude alternate bars , _ j. fluid mech .213 - 232 .edmaier k. , burlando p. , perona p. 2011 .mechanisms of vegetation uprooting by flow in alluvial non - cohesive sediment .earth syst ._ 15:1615 - 1627 .engelund f. 1981 .the motion of sediment particles on an inclined bed . _ techdenmark isva prog .15 - 20 . engelund f. , skovgaard o. 1973 . on the origin of meandering and braiding in alluvial streams ._ j. fluid mech .57 , part 2 , pp .289 - 302 . federici b. and paola chris .dynamics of channel bifurcations in noncohesive sediments ._ water resources research _ vol .39 , no . 6 , 1162 .federici b. and seminara g. 2003 . on the convective nature of bar instability ._ j. fluid mech ._ vol.487 , pp .125 - 145 .gurnell a. , petts g. e. 2006 .trees as riparian engineers : the tagliamento river , italy . _ earth surface processes and landforms _ 31:1558 - 1574 .jansen j. , nanson g. c. 2010 .functional relationship between vegetation , channel morphology and flow efficiency in an alluvial ( anabranching ) river ._ j. geoph .res . _ 115,f07441 .lanzoni s. , siviglia a. , frascati a. , seminara g. 2006 .long waves in erodible channels and morphodynamic influence ._ water resources research _ vol.42 , w06d17 . melville b. w. , sutherland a. j. 1988 .design method for local scour at bridge piers. _ j. hydraul ._ 114:1210 - 1226 .murray a. b. , paola c. 2003 .modelling the effect of vegetation on channel pattern in bedload rivers . _ earth surface processes and landforms _ 28 , 131 - 143 .parker g. 1976 .on the cause and characteristic scales of meandering and braiding in rivers ._ j. fluid mech .76 , part 3 , pp .457 - 480 .pasquale n. , perona p. , schneider p. , shrestha j. , wombacher a. , burlando p. 2011 .modern comprehensive approach to monitor the morphodynamic evolution of a restored river corridor .earth syst ._ 15 , 1197 - 1212 .perona p. , crouzy b. , mclelland s. , molnar p. , camporeale c. 2014 .ecomorphodynamics of rivers with converging boundaries . _ earth surface processes and landforms _ , in press .perucca e. , camporeale c. , ridolfi l. 2007 .significance of the riparian vegetation dynamics on meandering river morphodynamics ._ water resources research _ vol .43 , w03430 .seminara g. 2010 .fluvial sedimentary patterns .fluid mech ._ 42,43 - 66 .talmon a.m. , struiksma n. , van mierlo m.c.l.m .laboratory measurements of the direction of sediment transport on transverse alluvial - bed slopes ._ journal of hydraulic research _ vol .4 . tooth s. , nanson g. c. 2000 . forms and processes of two highly contrasting rivers in arid central australia , and the implications for channel pattern discrimination and prediction ._ geological society of america bulletin _ 116(7 - 8 ) , 802 - 816 .turing a. m. 1952 the chemical basis of morphogenesis . _philosophical transactions of the royal society of london _ series b , biological sciences 237(641 ) , 37 - 72 .wu w. , shields jr .f. d. , bennett s. j. , wang s. s. y. 2005 . 
a depth - averaged two - dimensional model for flow , bed transport and sediment topography in curved channels with riparian vegetation . _ water resources research _ vol 41 , w03015 . | although riparian vegetation is present in or along many water courses of the world , its active role resulting from the interaction with flow and sediment processes has only recently become an active field of research . especially , the role of vegetation in the process of river pattern formation has been explored and demonstrated mostly experimentally and numerically until now . in the present work , we shed light on this subject by performing a linear stability analysis on a simple model for riverbed vegetation dynamics coupled with the set of classical river morphodynamic equations . the vegetation model only accounts for logistic growth , local positive feedback through seeding and resprouting , and mortality by means of uprooting through flow shear stress . due to the simplicity of the model , we can transform the set of equations into an eigenvalue problem and assess the stability of the linearized equations when slightly perturbated away from a spatially homogeneous solution . if we couple vegetation dynamics with a 1d morphodynamic framework , we observe that instability towards long sediment waves is possible due to competitive interaction between vegetation growth and mortality . moreover , the domain in the parameter space where perturbations are amplified was found to be simply connected . subsequently , we proceed to the analysis of vegetation dynamics coupled with a 2d morphodynamic framework , which can be used to evaluate instability towards alternate and multiple bars . it is found that two kinds of instabilities , which are discriminated mainly by the froude number , occur in a connected domain in the parameter space . at lower froude number , instability is mainly governed by sediment dynamics and leads to the formation of alternate and multiple bars while at higher froude number instability is driven by vegetation dynamics , which only allows for alternate bars . |
recent analyses of extracellular recordings performed in two motor areas of behaving monkeys have tried to clarify how information about movements is transmitted and received from higher to lower stages of processing , and to identify distinct roles of the two areas in the planning and execution of movements . although this study failed to produce clearcut results , it remains interesting to try and understand , from a more theoretical point of view , how information about multi - dimensional correlates of neural activity may be transmitted from the input to the output of a simple network . in fact , a theoretical study is still lacking , which explores how the coding of stimuli with continuous as well as discrete dimensions is transferred across a network . in a previous report the mutual information between the activity ( ` firing rates ' ) of a finite population of units ( ` neurons ' ) and a set of correlates , which have both a discrete and a continuous angular dimension , has been evaluated analytically in the limit of large noise . this parametrization of the correlates can be applied to movements performed in a given direction and classified according to different `` types '' ; yet it is equally applicable to other correlates , like visual stimuli characterized by an orientation and a discrete feature ( colour , shape , etc . ) , or in general to any correlate which can be identified by an angle and a `` type '' . in this study , we extend the analysis performed for one population , to consider two interconnected areas , and we evaluate the mutual information between the firing rates of a finite population of output neurons and a set of continuous+discrete stimuli , given that the rate distribution in input is known . in input , a threshold nonlinearity has been shown to lower the information about the stimuli in a simple manner , which can be expressed as a renormalization of the noise . how does the information in the output depend on the same nonlinearity ? how does it depend on the noise in the output units ? is the power to discriminate among discrete stimuli more robust to transmission down one set of random synapses , than the information about a continuously varying parameter ? we address these issues by calculating the mutual information , using the replica trick and under the assumption of replica symmetry ( see for example ) . saddle point equations are solved numerically . we analyze how the information transmission depends on the parameters of the model , i.e. the level of output and input noise , on the ratio between the two population sizes , as well as on the tuning curve with respect to the continuous correlate , and on the number of discrete correlates . the input - output transfer function is a crucial element in the model . the binary and the sigmoidal functions used in many earlier theoretical and simulation studies fail to describe accurately current - to - frequency transduction in real neurons . such transduction is well captured instead , away from saturation , by a threshold - linear function . such a function combines the threshold of real neurons , the linear behaviour typical of pyramidal neurons above threshold , and the accessibility to a full analytical treatment , as demonstrated here , too . for the sake of analytical feasibility , however , we take the input units to be purely linear . therefore it should be kept in mind , in considering the final results , that the threshold nonlinearity is only applied to the output units . in analogy to the model studied in , we consider a set of input units which fire to an external continuous+discrete stimulus , parametrized by an angle and a discrete variable , with a gaussian distribution : ; \label{dist}\ ] ] is the firing rate in one trial of the input neuron , while the mean of the distribution , is written : where is a quenched random variable distributed between and , is the preferred direction for neuron . according to eq.([tuning_tot ] ) neurons fire at an average firing rate which modulates with with amplitude , or takes a fixed value , independently of , with amplitude . we assume that quenched variables are uncorrelated and identically distributed across units and across the discrete correlates : ^{nk } \label{ro_eps}\ ] ] ^n=\frac{1}{(2\pi)^n}.\ ] ] in it has been shown that a cosinusoidal shaped function as in eq.([tuning_tot ] ) is able to capture the main features of directional tuning of real neurons in motor cortex . moreover it has been shown that the presence of negative firing rates in the distribution ( [ dist ] ) , which is not biologically plausible , does not alter information values , with respect to a more realistic choice for the firing distribution , in that it leads to the same curves except for a renormalization of the noise . output neurons are activated by input neurons via uncorrelated gaussian connection weights . each output neuron performs a linear summation of the inputs ; the outcome is distorted by a gaussian distributed noise and then thresholded , as in the following : ^+;\,\,\,\,i=1 .. m , j=1 ..n \label{output}\ ] ] in eq.([output ] ) is a threshold term , is a ( 0,1 ) binary variable , with mean , which expresses the sparsity or dilution of the connectivity matrix , and ^+=x\theta(x).\ ] ]
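a forward simulation of this model is straightforward and may help fix the notation before the analytical treatment . the sketch below mirrors the structure of eqs . ( [ dist ] ) , ( [ tuning_tot ] ) and ( [ output ] ) : cosine - tuned gaussian input rates , diluted gaussian weights , additive output noise and a threshold - linear transfer ; the precise form of the tuning curve and all numerical values are guesses , since several symbols of the original equations are not legible here .

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 20, 10, 3                  # input units, output units, discrete correlates
sigma_in, sigma_out = 1.0, 0.5       # input and output noise levels (illustrative)
dilution, xi0 = 0.7, 0.0             # connection probability c and output threshold

# quenched variables: preferred angles and per-correlate modulation amplitudes
theta_pref = rng.uniform(0.0, 2.0 * np.pi, N)
eps = rng.uniform(0.0, 1.0, (K, N))

def input_rates(theta, s, eta0=1.0, alpha=0.5):
    """one trial of the input layer; the cosine-plus-offset tuning is a guess
    at the form of eq. (tuning_tot), several of whose symbols are lost."""
    mean = eta0 * (alpha + eps[s] * np.cos(theta - theta_pref))
    return mean + sigma_in * rng.standard_normal(N)

# diluted gaussian weights and threshold-linear output units, following eq. (output)
J = rng.standard_normal((M, N)) / np.sqrt(N)
C = (rng.random((M, N)) < dilution).astype(float)

def output_rates(eta):
    drive = (C * J) @ eta - xi0 + sigma_out * rng.standard_normal(M)
    return np.maximum(drive, 0.0)    # [x]^+ = x * theta(x)

eta = input_rates(theta=1.0, s=0)
xi = output_rates(eta)
print(eta[:5], xi[:5])
```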
therefore it should be kept in mind , in considering the final results that the threshold nonlinearity is only applied to the output units .in analogy to the model studied in we consider a set of input units which fire to an external continuous+discrete stimulus , parametrized by an angle and a discrete variable , with a gaussian distribution : ; \label{dist}\ ] ] is the firing rate in one trial of the input neuron , while the mean of the distribution , is written : where is a quenched random variable distributed between and , is the preferred direction for neuron . according to eq.([tuning_tot ] )neurons fire at an average firing rate which modulates with with amplitude , or takes a fixed value , independently of , with amplitude .we assume that quenched variables are uncorrelated and identically distributed across units and across the discrete correlates : ^{nk } \label{ro_eps}\ ] ] ^n=\frac{1}{(2\pi)^n}.\ ] ] in it has been shown that a cosinusoidal shaped function as in eq.([tuning_tot ] ) is able to capture the main features of directional tuning of real neurons in motor cortex .moreover it has been shown that the presence of negative firing rates in the distribution ( [ dist ] ) , which is not biologically plausible , does not alter information values , with respect to a more realistic choice for the firing distribution , in that it leads to the same curves except for a renormalization of the noise .output neurons are activated by input neurons via uncorrelated gaussian connection weights .each output neuron performs a linear summation of the inputs ; the outcome is distorted by a gaussian distributed noise and then thresholded , as in the following : ^+;\,\,\,\,i=1 .. m , j=1 ..n \label{output}\ ] ] in eq.([output ] ) is a threshold term , is a ( 0,1 ) binary variable , with mean , which expresses the sparsity or dilution of the connectivity matrix , and ^+=x\theta(x).\ ] ]we aim at estimating the mutual information between the output patterns of activity and the continuous+discrete stimuli : where the distribution is determined by the threshold linear relationship ( [ output ] ) , is given in eq.([dist ] ) and is a short notation for the average across the quenched variables ,,, and on the noise .we assume that the stimuli are equally likely : .eq.([info ] ) can be written as : with : \right\rangle_{\varepsilon,\vartheta^0,c , j,\delta}. \label{outent}\end{aligned}\ ] ] the analytical evaluation of the _ equivocation _ can be performed inserting eq.([xi_eta ] ) in the expression ( [ equiv ] ) , and using the replica trick to get rid of the logarithm : to take into account the threshold - linear relation ( [ output ] ) we consider the following equalities : inserting eq.([anna ] ) in eq.([replica ] ) one obtains : -1\right ) .\label{replica2}\end{aligned}\ ] ] the average across the quenched disorder ,, in eq.([replica2 ] ) can be performed in a very similar way as shown in : using the integral representation for each function , gaussian integration across , is standard ; the average on can be performed assuming large the number of input neurons .the final outcome for the _ equivocation _ reads : ^m -1\right ) , \label{replica3 } \end{aligned}\ ] ] where we have put .integration on is straightforward .integration on can be performed introducing auxiliary variables via functions expressed in their integral representation . 
considering the expression ( [ dist ] ) for the input distribution and with some rearrangement of the termsthe final result can be expressed as : ^m-1\right ) , \nonumber\end{aligned}\ ] ] where : the evaluation of the entropy of the responses , eq.([outent ] ) , can be carried out in a very similar way , introducing replicas in the continuous+discrete stimulus space .the final result reads : ^{n+1 } \left\langle e^{-\sum_{\alpha,\beta}(\delta_{\alpha\beta}-\sigma^{-1}_{\alpha\beta } ) \tilde{\eta}(\vartheta_\alpha , s_\alpha)\tilde{\eta}(\vartheta_\beta , s_\beta)/2\sigma^2 } \right\rangle_{\varepsilon,\vartheta^0}^n\right.\label{outent2}\\ & & \left.e^{-\frac{m}{2}tr \ln g } \left[\int_{-\infty}^0 \prod_\alpha \frac{d\xi^\alpha}{\sqrt{2\pi } } e^{-\sum_{\alpha,\beta}(\xi^\alpha-\xi_0 ) ( g^{-1}_{\alpha\beta}/2)(\xi^\beta-\xi_0)}+\int^{\infty}_0 \frac{d\xi}{(2\pi)^{\frac{n+1}{2 } } } e^{-\sum_{\alpha,\beta } ( g^{-1}_{\alpha\beta}/2)(\xi^-\xi_0)^2}\right]^m-1\right ) .\nonumber\end{aligned}\ ] ]the integrals in eq.([equiv2]),([outent2 ] ) can not be solved without resorting to an approximation . in analogy to what is used in , we use a saddle - point approximation ( which in general would be valid in the limit ) and we assume replica symmetry in the parameters , .this allows to explicitely invert and diagonalize the matrices , : the assumption of replica symmetry seems to have more subtle implications in the present situation .these will be discussed below . in replica symmetrythe mutual information can be expressed as follows : }\right.\nonumber\\ & & \left .- e^{n\left[(n+1)z_0^b\tilde{z}_0^b - n(n+1)z_1^b\tilde{z}_1^b- \frac{r}{2}\left(tr\ln g(z_0^b , z_1^b)+f(z_0^b , z_1^b)\right)-\frac{1}{2}tr\ln\sigma(\tilde{z}_0^b,\tilde{z}_1^b)- h^b(\tilde{z}_0^b,\tilde{z}_1^b)\right]}\right\ } , \label{info_lim}\end{aligned}\ ] ] with ; \label{f}\ ] ] ; \label{ha}\ ] ] ^{n+1 } \left\langle e^{-\sum_{\alpha,\beta}\left(\delta_{\alpha\beta}-\sigma^{-1}_{\alpha\beta}\right ) \tilde{\eta}(\vartheta_\alpha , s_\alpha)\tilde{\eta}(\vartheta_\beta , s_\beta)/2\sigma^2 } \right\rangle_{\varepsilon,\vartheta^0}^n\right ] .\label{hb}\ ] ] we have set and ,,, are the solutions of the saddle point equations : ;\nonumber\\ z_1^{a , b}&=&-\frac{1}{n}\frac{\partial}{\partial\tilde{z}_1}\left[\frac{1}{2 } tr\ln\sigma(\tilde{z}_0,\tilde{z}_1)+h^{a , b}(\tilde{z}_0,\tilde{z}_1)\right];\nonumber\\ \tilde{z}_0^{a , b}&=&\frac{\partial}{\partial z_0}\frac{r}{2}\left [ tr\ln g(z_0,z_1)+f(z_0,z_1)\right];\nonumber\\ \tilde{z}_1^{a , b}&=&-\frac{1}{n}\frac{\partial}{\partial z_1}\frac{r}{2}\left [ tr\ln g(z_0,z_1)+f(z_0,z_1)\right].\end{aligned}\ ] ] all the equations must be evaluated in the limit .it is easy to check that all terms in the exponent in eq.([info_lim ] ) are order .in fact , since when only one replica remains , one has : {|_{n=0}}.\end{aligned}\ ] ] therefore , from the saddle point equations , are order and is also order : since , it is easy to check by explicit evaluation that , when , all the diagonal terms among the matrix elements are order and all the out - of - diagonal terms are order . then all terms in the exponent of eqs.([ha]),([hb ] ) are order , and we can expand the exponentials , which allows us to perform the quenched averages across . 
considering the expression of , eq.([tuning_tot ] ) , one obtains : \right ) ; \label{ha_hb}\end{aligned}\ ] ] ^ 2\rangle_{\varepsilon,\vartheta^0}\nonumber\\ & = & ( \eta^0)^2\left[(a_2+\alpha^2 - 2\alpha a_1)\langle\varepsilon^2\rangle_{\varepsilon}+\alpha^2 + 2\alpha(a_1-\alpha)\langle\varepsilon\rangle_{\varepsilon}\right ] ; \label{lambda1}\end{aligned}\ ] ] ^ 2 \langle\tilde{\eta}(\vartheta_1,s_1)\tilde{\eta}(\vartheta_2,s_2 ) \rangle_{\varepsilon,\vartheta^0}\nonumber\\ & = & ( \eta^0)^2\left[(a_1-\alpha)^2\left(\frac{k-1}{k } \langle\varepsilon\rangle_{\varepsilon}^2+\frac{1}{k}\langle\varepsilon^2 \rangle_{\varepsilon}\right)+\alpha^2 + 2\alpha(a_1-\alpha ) \langle\varepsilon\rangle_{\varepsilon}\right ] ; \label{lambda2}\end{aligned}\ ] ] a similar expansion in for and for allows to derive explicitely the saddle point equations : \lambda_\eta^1+\frac{1}{\left(1 + 2\sigma^2\tilde{z}_1^b\right)^2}\lambda_\eta^2;\nonumber\\ \tilde{z}_1^{a , b}&=&-c\sigma^2_j\frac{r}{2}\left\{\sigma\left(\frac{\xi^0}{\sqrt{p+q}}\right)\frac{\xi^0}{\left(p+q\right)^{\frac{3}{2}}}-\frac{1}{p}{{\rm erf}}\left(\frac{\xi^0}{\sqrt{p+q}}\right)\right.\nonumber\\ & & \left.+\int_{-\infty}^{\infty } dt \left[1+\ln\left({{\rm erf}}\left(-\frac{\xi^0-t\sqrt{q}}{\sqrt{p}}\right)\right)\right]\sigma\left(\frac{\xi^0-t\sqrt{q}}{\sqrt{p}}\right)\frac{1}{p^\frac{3}{2}}\left[\xi^0-t\frac{q+p}{\sqrt{q}}\right]\right \};\nonumber\\ \label{saddlepoint}\end{aligned}\ ] ] where : from the expression of in eq.([saddlepoint ] ) , it is easy to verify that the dependence on in eq.([ha_hb ] ) , which might affect the information in eq.([info_lim ] ) , cancels out with the products , which should contribute to the information in the limit ( see eq.([info_lim ] ) ) . therefore , since is known and depends only on , the mutual information can be expressed as a function of , , which in turn are to be determined self - consistently by the saddle point equations .the average information per input cell can be written , finally : + \gamma_2^b(\tilde{z}_1^b)-\gamma_2^a(\tilde{z}_1^a)\right\ } , \label{final_info}\ ] ] with ;\end{aligned}\ ] ] .\ ] ] the expression for the mutual information only contains terms linear in either or .since the last of the saddle - point equations , ( [ saddlepoint ] ) , contains , if one fixes and increases the information grows non - linearly , because the position of the saddle point varies .it turns out that , as shown below , the growth is only very weakly sublinear , at least when .analogously , fixing and varying we would find a non - linearity due to the -dependence of the saddle point .if is fixed and and grow together , the information rises purely linearly . what our analytical treatment misses out , however , is the nonlinearity required to appear as the mutual information approaches its ceiling , the entropy of the stimulus set .the approach to this saturating value was described at the input stage , where also the initial linear rise ( in ) was obtained in the large noise limit .therefore , our saddle point method is in same sense similar to taking a large ( input ) noise limit , , to its leading ( order ) term .it is possible that the saddle point method could be extended , to account also for successive terms in a large noise expansion .this would probably require integrating out the fluctuations around the saddle point , but by carefully analysing the relation of different replicas to different values of the quenched variables .we leave this possible extension to future work . 
the present calculation , therefore ,although employing a saddle point method which is usually applicable for large and , should be considered effectively as yielding the initial linear rise in the mutual information , the one observed with small .eq.([saddlepoint ] ) for has been solved numerically using a matlab code .convergence to self - consistency has been found already after iterations with an error lower than .fig.[fig1 ] shows the mutual information as a function of the output population size , for an input population size equal to cells .this is contrasted with the information in the input units , about exactly the same set of correlates , calculated as in , by keeping only the leading ( linear ) term in .in fact , in the mutual information carried by a finite population of neurons firing according to eq.([dist ] ) had been evaluated analytically , in the limit of large noise , by means of an expansion in . to linear order in analytical expression for the information carried by input neurons reads : where , are defined , again , as in eqs.([lambda1 ] ) , ( [ lambda2 ] ) . in analogy to what had been done in have set . as evident from the graph ,also the output information is essentially linear up to a value of , and quasi - linear even for .it should be remined , again , that our saddle point method only takes into account the term linear in in the information _ input _ units carry about the stimulus .it is not possible , therefore , for eq.([final_info ] ) to reproduce the saturation in the mutual information as it approaches the entropy of the stimulus set ( which is finite , if one considers only discrete stimuli ) .the nearly linear behaviour in thus reflects the linear behaviour in induced , in the intermediate quantity ( the information available at the input stage ) , by our saddle point approximate evaluation . as it is clear from the comparison in fig.[fig1 ] ,when the two populations of units are affected by the same noise the input information is considerably higher than the output one .this is expected , since output and input noise sum up while influencing the firing of output neurons , but also because the input distribution is taken to be a pure gaussian , while the output rates are affected by a threshold .if the input - output tranformation were linear and the output noise much smaller than the input one , one would expect that output and input units would carry the same amount of information .briefly , in a linear network with zero output noise one has : considering eqs.([xi_eta]),([dist ] ) , an _ effective _ expression for the distribution can be obtained by direct integration of the functions via their integral representation , on : this distribution is then used to evaluate both the equivocation , eq.([equiv ] ) , and the entropy of the responses , eq.([outent ] ) .we do not report the calculation , that is straightforward and analogous to the one reported in .the final result , which is valid for a finite population size , and up to the linear approximation in , is analogous to eq.([input ] ) : thus , we expect that taking the limits and simultaneously in eq.([final_info ] ) , we should get to the same result : the output information should equal the input one when grows large . 
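as an aside on the numerics , the iteration to self - consistency mentioned above is , generically , a damped fixed - point scheme . the snippet below shows such a scheme in schematic form ; the update function in the example is a toy system , not the actual coupled saddle - point equations , and the damping and tolerance values are arbitrary .

```python
import numpy as np

def solve_self_consistent(update, z_init, damping=0.5, tol=1e-10, max_iter=1000):
    """generic damped fixed-point iteration z <- (1-d)*z + d*update(z), of the
    kind used to bring coupled saddle-point equations to self-consistency;
    update(z) returns the right-hand sides evaluated at the current z."""
    z = np.asarray(z_init, dtype=float)
    for it in range(max_iter):
        z_new = (1.0 - damping) * z + damping * np.asarray(update(z), dtype=float)
        if np.max(np.abs(z_new - z)) < tol:
            return z_new, it
        z = z_new
    raise RuntimeError("no convergence after max_iter iterations")

# toy stand-in for the coupled equations (not this paper's saddle-point system)
toy = lambda z: np.array([0.3 + 0.1 * z[1] ** 2, 1.0 / (1.0 + z[0])])
print(solve_self_consistent(toy, z_init=[0.0, 0.0]))
```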
from eq.([final_info ] ) it is easy to show that : ;\ ] ] when one obtains exactly the linear limit , eq.([asymptot ] ) .we have verified this analytical limit by studying numerically the approach to the asymptotic value of the mutual information .fig.[fig3 ] shows the dependence of output information on the output noise , for 4 different choices of the ( reciprocal of the ) threshold , .a large value , , implies linear output units . as expected , the output information , which always grows for decreasing values of the output noise , for approaches asymptotically the input information .for increasing values of the output noise , the information vanishes with a typical sigmoid curve , with its point of inflection when the output matches the input noise .we have then examined how the information in output ( compared to the input ) depends on the number of discrete correlates and on the width of the tuning function ( [ tuning2 ] ) , parametrized by , with respect to the continuous correlate .fig.[fig4 ] shows a comparison between input and output information for a sample of 10 cells , as a function of .both curves quickly reach an asymptotic value , obtained by setting in eq.([lambda2 ] ) for .the relative information loss in output is roughly constant with .a comparison is shown with the case where correlates are purely discrete , which is obtained by setting in eq.([tuning2 ] ) .the curves exhibit a similar behaviour , even if the rise with is steeper , and the asymptotic values are higher .this may be surprising , but it is in fact a consequence of the specific model we have considered , eq.([tuning_tot ] ) , where a unit has the same tuning curve to each of the discrete correlates , only varying its amplitude with respect to a value constant in the angle . as ,most of the mutual information is about the discrete correlates , and the tuning to the continuous dimension , present for , effectively adds noise to the discrimination among discrete cases , noise which is not present for . with respect to the continuous dimension, the selectivity of the input units can be increased by varying the power of the cosine from 0 ( no selectivity ) through 1 ( very distributed encoding , as for the discrete correlates ) to higher values ( progressively narrower tuning functions ) .fig.[fig5 ] reports the resulting behaviour of the information in input and in output , for the case ( only a continuous correlate ) and ( continuous+discrete correlates ) .increasing selectivity implies a `` sparser '' representation of the angle , the continuous variable , and hence less information , on average .however if the correlate is purely continuous there is an initial increase , before reaching the optimal sparseness . it should be kept in mind , again , that the asymptotic equality of the and cases is a consequence of the specific model , eq.([tuning_tot ] ) , which assigns the same preferred angle to each discrete correlate .the resolution with which the continuous dimension can be discriminated does not , within this model , improve with larger , while the added contribution , of being able to discriminate among discrete correlates , decreases in relative importance as the tuning becomes sharper .figures [ fig4 ] and [ fig5 ] show that , as long as the output noise is non zero and the threshold is finite , information is lost going from input to output , but the information loss does not appear to depend on the structure and on the dimensionality of the correlate . 
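the statement that a larger cosine power yields a "sparser" representation of the angle can be illustrated with a few lines of code. the tuning curve used below, a half-wave-rectified cosine raised to the power n, is only a stand-in for the actual input tuning function of eq.([tuning2]), and the sparseness measure is the usual ratio of the squared mean to the mean square; smaller values indicate a sparser representation.

import numpy as np

def sparseness(rates):
    """Treves-Rolls sparseness  a = <r>^2 / <r^2>  over the angular variable."""
    r = np.asarray(rates, dtype=float)
    return r.mean() ** 2 / (r ** 2).mean()

def cosine_power_tuning(theta, theta0=0.0, n=1):
    """Half-wave-rectified cosine raised to the power n (n = 0 gives a flat curve);
    a stand-in for the tuning function of eq.(tuning2), not its exact form."""
    c = np.clip(np.cos(theta - theta0), 0.0, None)
    return np.ones_like(theta) if n == 0 else c ** n

theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
for n in [0, 1, 2, 4, 8]:
    a = sparseness(cosine_power_tuning(theta, n=n))
    print(f"cosine power n = {n}:  angular sparseness = {a:.3f}")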
note that , while the purely continuous case has been easily obtained by setting in the expression of , eq.([lambda2 ] ) , for the purely discrete case it is enough to set .we have attempted to clarify how information about multi - dimensional stimuli , with both a continuous and a discrete dimension , is transmitted from a population of units with a known coding scheme , down to the next stage of processing .previous studies had focused on the mutual information between input and output units in a two - layer threshold - linear network either with learning or with simple random connection weights .more recent investigations have tried to quantify the efficiency of a population of units in coding a set of discrete or continuous correlates .the analysis in has been then generalized to the more realistic case of multi - dimensional continuous+discrete correlates .this work correlates with both research streams , in an effort to define a unique conceptual framework for population coding .the main difference with the second group of studies is obviously the presence of the network linking input to output units .the main difference with the first two papers , instead , is the analysis of a distinct mutual information quantity : not between input and output units , but between correlates ( `` stimuli '' ) and output units . in had been argued , for a number of purely discrete correlates , that the information _ about _ the stimuli reduces to the information about the `` reference '' neural activity when .the reference activity is simply the mean response to a given stimulus when the information is measured from the variable , noisy responses around that means ; or it can be taken to be the stored pattern of activity , when the retrieval of such patterns is considered , as in .true , the information about the stimuli saturates at the entropy of the stimulus set , but for this entropy diverges , only the linear term in is relevant , and the two quantities , information about the stimuli and information about the reference activity , coincide .our present saddle point calculation is only able to capture , effectively , the mutual information which is linear in the number of input units , as mentioned above .it fails to describe the approach to the saturating value , the entropy of the set of correlates , be this finite or infinite .therefore , ours is close to a calculation of the information about a reference activity - in our case , the activity of the input units .the remaining difference is that we can take into account , albeit solely in the linear term , the dependence on ( through the equation for , eq.([lambda2 ] ) ) , without having to take the further limit . due to the presence of a threshold and of a non zero output noisethe information in output is lower than that in input , and we have shown analytically that in the limit of a noiseless , linear input - output transfer function the ouptput information tends asymptotically to the input one .we have not , however , introduced a threshold in the input units , which would be necessary for a fair comparison . in an independent line of research , recent work also quantified the contribution to the mutual information , in a different model , of cubic and higher order non - linearities in the transfer function , by means of a diagrammatic expansion in a noise parameter . 
in has been shown that the effect of a threshold in the input units on the input information results merely in a renormalization of the noise .the resulting effect on the output information remains to be explored , possibly with similar methods .considering mixed continuous and discrete dimensions in our stimulus set , we had been wondering whether the information loss in output depended on the presence or absence of discrete or continuous dimensions in the stimulus structure .we have shown that for a fixed , finite level of noise this loss dose not depend significantly on the structure of the stimulus , but solely on the relative magnitude of input and output noise , and on the position of the output threshold .a recent work has shown that the interplay between short and long range connectivities in the hopfield model leads to a deformation of the phase diagram with the appearence of novel phases .it would be interesting to introduce short and long range connections in our model , and to examine how the coding efficiency of output neurons depends on the interaction between short and long range connections. this will be the object of future investigations . | in a previous report we have evaluated analytically the mutual information between the firing rates of independent units and a set of multi - dimensional continuous+discrete stimuli , for a finite population size and in the limit of large noise . here , we extend the analysis to the case of two interconnected populations , where input units activate output ones via gaussian weights and a threshold linear transfer function . we evaluate the information carried by a population of output units , again about continuous+discrete correlates . the mutual information is evaluated solving saddle point equations under the assumption of replica symmetry , a method which , by taking into account only the term linear in of the input information , is equivalent to assuming the noise to be large . within this limitation , we analyze the dependence of the information on the ratio , on the selectivity of the input units and on the level of the output noise . we show analytically , and confirm numerically , that in the limit of a linear transfer function and of a small ratio between output and input noise , the output information approaches asymptotically the information carried in input . finally , we show that the information loss in output does not depend much on the structure of the stimulus , whether purely continuous , purely discrete or mixed , but only on the position of the threshold nonlinearity , and on the ratio between input and output noise . |
the problem of `` scaling up for high dimensional data and high speed data streams '' is among the `` ten challenging problems in data mining research'' .this paper is devoted to estimating entropy of data streams using a recent algorithm called _ compressed counting ( cc ) _this work has four components : ( 1 ) the theoretical analysis of entropies , ( 2 ) a much improved estimator for cc , ( 3 ) the bias and variance in estimating entropy , and ( 4 ) an empirical study using web crawl data .while traditional data mining algorithms often assume static data , in reality , data are often constantly updated .mining data streams in ( e.g. , ) 100 tb scale databases has become an important area of research , e.g. , , as network data can easily reach that scale .search engines are a typical source of data streams .we consider the _ turnstile _ stream model .the input stream , ] may record the total number of items that user has ordered up to time and denotes the number of items that this user orders ( ) or cancels ( ) at . if each user is identified by the ip address , then potentially .it is often reasonable to assume \geq 0 ] results in the _ strict - turnstile_ model , which suffices for describing almost all natural phenomena .for example , in an online store , it is not possible to cancel orders that do not exist . *compressed counting ( cc ) * assumes a _ relaxed strict - turnstile _model by only enforcing \geq0 ] can be arbitrary .the frequency moment is a fundamental statistic : ^\alpha .\end{aligned}\ ] ] when , is the sum of the stream .it is obvious that one can compute exactly and trivially using a simple counter , because = \sum_{s=0}^t i_s ] , to .it is possible that ( ipv6 ) if one is interested in measuring the traffic streams of unique source or destination .the distributed denial of service ( * ddos * ) attack is a representative example of network anomalies .a ddos attack attempts to make computers unavailable to intended users , either by forcing users to reset the computers or by exhausting the resources of service - hosting sites . for example , hackers may maliciously saturate the victim machines by sending many external communication requests. ddos attacks typically target sites such as banks , credit card payment gateways , or military sites . a ddos attack changes the statistical distribution of network traffic .therefore , a common practice to detect an attack is to monitor the network traffic using certain summary statics .since shannon entropy is a well - suited for characterizing a distribution , a popular detection method is to measure the time - history of entropy and alarm anomalies when the entropy becomes abnormal .entropy measurements do not have to be `` perfect '' for detecting attacks .it is however crucial that the algorithm should be computationally efficient at low memory cost , because the traffic data generated by large high - speed networks are enormous and transient ( e.g. 
, 1 gbits / second ) .algorithms should be real - time and one - pass , as the traffic data will not be stored .many algorithms have been proposed for `` sampling '' the traffic data and estimating entropy over data streams , in high - speed networks , anomaly events including network failures and ddos attacks may not always be detected by simply monitoring the traditional traffic matrix because the change of the total traffic volume is sometimes small .one strategy is to measure the entropies of all origin - destination ( od ) flows .an od flow is the traffic entering an ingress point ( origin ) and exiting at an egress point ( destination ) . showed that measuring entropies of od flows involves measuring the intersection of two data streams , whose moments can be decomposed into the moments of individual data streams ( to which cc is applicable ) and the moments of the absolute difference between two data streams .the recent work was devoted to estimating the shannon entropy of msn search logs , to help answer some basic problems in web search , such as , _ how big is the web ?_ the search logs can be viewed as data streams , and analyzed several `` snapshots '' of a sample of msn search logs .the sample used in contained 10 million , url , ip triples ; each triple corresponded to a click from a particular ip address on a particular url for a particular query . drew their important conclusions on this ( hopefully ) representative sample .alternatively , one could apply data stream algorithms such as cc on the whole history of msn ( or other search engines ) .a workshop in nips03 was denoted to entropy estimation , owing to the wide - spread use of shannon entropy in neural computations .( http://www.menem.com/~ilya/pages/nips03 ) for example , one application of entropy is to study the underlying structure of spike trains . because the elements , ] is the same at the end of the stream , regardless whether it is collected at once ( i.e. , static ) or incrementally ( i.e. , dynamic ) .ten english words are selected from a chunk of web crawl data with pages .the words are selected fairly randomly , except that we make sure they cover a whole range of sparsity , from function words ( e.g. , a , the ) , to common words ( e.g. , friday ) to rare words ( e.g. , twist ) .the data are summarized in table [ tab_data ] .l l l l l l l + word & nonzero & & & & & + + twist & 274 & 5.4873 & 5.4962 & 5.4781 & 6.3256 & 4.7919 + rice & 490 & 5.4474 & 5.4997 & 5.3937 & 6.3302 & 4.7276 + friday & 2237 & 7.0487 & 7.1039 & 6.9901 & 8.5292 & 5.8993 + fun & 3076 & 7.6519 & 7.6821 & 7.6196 & 9.3660 & 6.3361 + business & 8284 & 8.3995 & 8.4412 & 8.3566 & 10.502 & 6.8305 + name & 9423 & 8.5162 & 9.5677 & 8.4618 & 10.696 & 6.8996 + have & 17522 & 8.9782 & 9.0228 & 8.9335 & 11.402 & 7.2050 + this & 27695 & 9.3893 & 9.4370 & 9.3416 & 12.059 & 7.4634 + a & 39063 & 9.5463 & 9.5981 & 9.4950 & 12.318 & 7.5592 + the & 42754 & 9.4231 & 9.4828 & 9.3641 & 12.133 & 7.4775 + [ tab_data ] + figure [ fig_entropy ] selects two words to compare their shannon entropies , rny entropies , and tsallis entropies .clearly , although both approach shannon entropy , rny entropy is much more accurate than tsallis entropy .this section presents two lemmas , proved in the appendix .lemma [ lem_bias ] says rnyi entropy has smaller bias than tsallis entropy for estimating shannon entropy .[ lem_bias ] lemma [ lem_bias ] does not say precisely how much better . 
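the gap quantified by lemma [lem_bias] is easy to reproduce numerically. the sketch below uses the standard definitions of the three entropies with natural logarithms and a zipf-distributed count vector as a stand-in for a word-frequency vector from the crawl data; for every alpha near 1 the rnyi estimate should land closer to the shannon value than the tsallis estimate.

import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi(p, alpha):
    # H_alpha = log(sum p_i^alpha) / (1 - alpha),  alpha != 1
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis(p, alpha):
    # T_alpha = (1 - sum p_i^alpha) / (alpha - 1),  alpha != 1
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

rng = np.random.default_rng(0)
counts = rng.zipf(1.3, size=5000)          # stand-in for a word-count vector
p = counts / counts.sum()
H = shannon(p)
for alpha in [0.90, 0.95, 0.99, 1.01, 1.05]:
    print(f"alpha={alpha:5.2f}  shannon - renyi = {H - renyi(p, alpha):+.4f}   "
          f"shannon - tsallis = {H - tsallis(p, alpha):+.4f}")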
note that when , the magnitudes of and are largely determined by the first derivatives ( slopes ) of and , respectively , evaluated at .lemma [ lem_limit ] directly compares their first and second derivatives , as .[ lem_limit ] as , lemma [ lem_limit ] shows that in the limit , , verifying that should have smaller bias than . also , .two special cases are interesting . in this case , for all .it is easy to show that regardless of .thus , when the data distribution is close to be uniform , rnyi entropy will provide nearly perfect estimates of shannon entropy . in web and nlp applications ,the zipf distribution is common : ., .the curves largely overlap and hence we do not label the curves ., width=153 ] figure [ fig_ratio ] plots the ratio , . at ( which is common ) ,the ratio is about , meaning that the bias of rnyi entropy could be a magnitude smaller than that of tsallis entropy , in common data sets .compressed counting ( cc ) assumes the _ relaxed strict - turnstile _ data stream model .its underlying technique is based on _ maximally - skewed stable random projections_. a random variable follows a maximally - skewed -stable distribution if the fourier transform of its density is where , , and .we denote . the skewness parameter for general stable distributions ranges in $ ] ; but cc uses , i.e. , * maximally - skewed*. previously , the method of _ symmetric stable random projections_ used .consider two independent variables , .for any non - negative constants and , the `` -stability '' follows from properties of fourier transforms : note that if , then the above stability holds for any constants and .we should mention that one can easily generate samples from a stable distribution .conceptually , one can generate a matrix and multiply it with the data stream , i.e. , .the resultant vector is only of length .the entries of , , are i.i.d .samples of a stable distribution . by property of fourier transforms ,the entries of , to , are i.i.d .samples of a stable distribution = \sum_{i=1}^d r_{ij } a_t[i]%\\ \sim s\left(\alpha,\beta=1,f_{(\alpha ) } = \sum_{i=1}^d a_t[i]^\alpha\right),\end{aligned}\ ] ] whose scale parameter is exactly the moment . thus , cc boils down to a statistical estimation problem . for real implementations , one should conduct incrementally .this is possible because the _ turnstile _ model ( [ eqn_turnstile ] ) is a linear updating model .that is , for every incoming , we update for to . entries of are generated on - demand as necessary . commented that , when is large , generating entries of on - demand and multiplications , to , can be prohibitive .an easy `` fix '' is to use as small as possible , which is possible with cc when . at the same , all procedures of cc and _ symmetric stable random projections _are the same except the entries in follow different distributions .however , since cc is much more accurate especially when , it requires a much smaller at the same level of accuracy .cc boils down to estimating from i.i.d .samples . provided two estimators .the asymptotic ( i.e. , as ) variance is as , the asymptotic variance approaches zero . which is asymptotically unbiased and has variance is defined only for and is considerably more accurate than the geometric mean estimator .the two estimators for cc in dramatically reduce the estimation variances compared to _symmetric stable random projections_. 
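to make the mechanics concrete, the sketch below implements the two ingredients just described: incremental turnstile updates with projection rows regenerated on demand from a fixed seed, and a scale estimate obtained from a sample quantile. the quantile-ratio estimate shown here is only a crude illustration of the statistical estimation step (the median is a placeholder, not the optimal alpha-dependent quantile tabulated later), and scipy's levy_stable sampler is used purely for convenience; the parameterization is pinned (recent versions of scipy) so that a zero location gives a strictly stable law, which the scaling step relies on.

import numpy as np
from scipy.stats import levy_stable

# pin the convention: in the S1 parameterization with loc = 0 the law is strictly
# stable, so sum_i a_i r_i has the same shape with scale (sum_i a_i^alpha)^(1/alpha)
levy_stable.parameterization = "S1"

def row(i, alpha, k, master_seed=12345):
    """Regenerate the i-th row of the (conceptual) d x k projection matrix on demand;
    the same seed always reproduces the same row, so R never has to be stored."""
    rng = np.random.default_rng((master_seed, int(i)))
    return levy_stable.rvs(alpha, 1.0, size=k, random_state=rng)

def process_stream(stream, alpha, k):
    """Turnstile processing: x <- x + delta * R[i, :] for each update (i, delta)."""
    x = np.zeros(k)
    for i, delta in stream:
        x += delta * row(i, alpha, k)
    return x

def quantile_ratio_estimate(x, alpha, q=0.5, ref_size=100_000, seed=0):
    """Illustrative estimate of F_alpha = sum_i a_i^alpha via a quantile ratio:
    quantile_q(x) / quantile_q(z) estimates F^(1/alpha) when z ~ S(alpha, 1, 1)."""
    z = levy_stable.rvs(alpha, 1.0, size=ref_size,
                        random_state=np.random.default_rng(seed))
    return (np.quantile(x, q) / np.quantile(z, q)) ** alpha

if __name__ == "__main__":
    alpha, k, d = 0.95, 100, 50
    rng = np.random.default_rng(7)
    stream = [(rng.integers(d), rng.integers(1, 5)) for _ in range(2_000)]
    a = np.zeros(d)
    for i, delta in stream:
        a[i] += delta
    x = process_stream(stream, alpha, k)
    print("true F_alpha :", np.sum(a ** alpha))
    print("estimate     :", quantile_ratio_estimate(x, alpha))   # close up to sampling error

the geometric-mean and harmonic-mean estimators mentioned above would replace the crude quantile ratio in the last step.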
they are , however , are not quite adequate for estimating shannon entropy using small ( ) samples .we discover that an estimator based on the _ sample quantiles _ considerably improves when .given i.i.d samples , we define the -quantile is to be the smallest of . for example , when , then quantile is the smallest among s . to understand why the quantile works ,consider the normal , which is a special case of stable distribution with .we can view , where .therefore , we can use the ratio of the quantile of over the -th quantile of to estimate . note that corresponds to , not .assume , to .one can sort and use the smallest as the estimate , i.e. , denote , where .denote the probability density function of by , the probability cumulative function by , and the inverse by .the asymptotic variance of is presented in lemma [ lem_q_var ] , which follows directly from known statistics results , e.g. , ( * ? ? ?* theorem 9.2 ) .[ lem_q_var ] we can then choose to minimize the asymptotic variance .we denote the optimal quantile estimator by .the optimal quantiles , denoted by , has to be determined by numerically and tabulated ( as in table [ tab_oq ] ) , because the density functions do not have an explicit closed - form .we used the * fbasics * package in * r*. we , however , found the implementation of those functions had numerical problems when and .table [ tab_oq ] provides the numerical values for , ( [ eqn_w ] ) , and the variance of ( without the term ) . .in order to use the optimal quantile estimator , we tabulate the constants and . [ cols="<,<,<,<",options="header " , ] [ tab_oq ] figure [ fig_comp_var_factor ] ( left panel ) compares the variances of the three estimators for cc . to better illustrate the improvements , figure [ fig_comp_var_factor ] ( right panel ) plots the ratios of the variances .when , the _ optimal _ quantile reduces the variances by a factor of 70 ( compared to the _ geometric mean _estimator ) , or 20 ( compared to the _ harmonic mean _estimator ) .this section analyzes the biases and variances in estimating shannon entropy .also , we provide the criterion for choosing the sample size .we use , , and to denote generic estimators .since is ( asymptotically ) unbiased , and are also asymptotically unbiased .the asymptotic variances of and can be computed by taylor expansions : we use and to denote the estimators for shannon entropy using the estimated and , respectively .the variances remain unchanged , i.e. , however , and are no longer ( asymptotically ) unbiased , because the biases arise from the estimation biases and diminish quickly as increases .however , the `` intrinsic biases , '' and , can not be reduced by increasing ; they can only be reduced by letting close to 1 .the total error is usually measured by the mean square error : mse = bias + var .clearly , there is a variance - bias trade - off in estimating using or .the optimal is data - dependent and hence some prior knowledge of the data is needed in order to determine it .the prior knowledge may be accumulated during the data stream process .experiments on real data ( i.e. 
, table [ tab_data ] ) can further demonstrates the effectiveness of compressed counting ( cc ) and the new _ optimal quantile _ estimator .we could use static data to verify cc because we only care about the estimation accuracy , which is same regardless whether the data are collected at one time ( static ) or dynamically .we present the results for estimating frequency moments and shannon entropy , in terms of the normalized mses .we observe that the results are quite similar across different words ; and hence only one word is selected for the presentation .figure [ fig_rice_f ] provides the normalized mses ( by ) for estimating the frequency moments , , for word rice : * the errors of the three estimators for cc decrease ( to zero , potentially ) as .the improvement of cc over _ symmetric stable random projections _ is enormous . *the optimal quantile estimator is in general more accurate than the geometric mean and harmonic mean estimators near .however , for small and , exhibits bad behaviors , which disappear when . *the theoretical asymptotic variances in ( [ eqn_f_gm_var ] ) , ( [ eqn_f_hm_var ] ) , and table [ tab_oq ] are accurate .+ figure [ fig_rice_hr ] provides the mses from estimating the shannon entropy using the rnyi entropy , for word rice : * using _ symmetric stable random projections _ with close to 1 is not a good strategy and not practically feasible because the required sample size is enormous .* there is clearly a variance - bias trade - off , especially for the _ geometric mean _ and _ harmonic mean _ estimators .that is , for each , there is an `` optimal '' which achieves the smallest mse .* using the _ optimal quantile _ estimator does not show a strong variance - bias trade - off , because its has very small variance near and its mses are mainly dominated by the ( intrinsic ) biases , .+ figure [ fig_rice_ht ] presents the mses for estimating shannon entropy using tsallis entropy .the effect of the variance - bias trade - off for geometric mean and harmonic mean estimators , is even more significant , because the ( intrinsic ) bias is much larger .web search data and network data are naturally data streams .the entropy is a useful summary statistic and has numerous applications , e.g. , network anomaly detection . efficiently and accurately computing the entropy in large and frequently updating data streams , in one - pass , is an active topic of research .a recent trend is to use the frequency moments with to approximate shannon entropy .we conclude : * we should use rnyi entropy to approximate shannon entropy .using tsallis entropy will result in about a magnitude larger bias in a common data distribution . *the _ optimal quantile _estimator for cc reduces the variances by a factor of 20 or 70 when , compared to the estimators in . * when _ symmetric stable random projections _ must be used, we should exploit the variance - bias trade - off , by not using very close 1 . and . note that , if and if . for , always holds , with equality when .therefore , when and when .also , we know . therefore , to show , it suffices to show that both and are decreasing functions of .taking the first derivatives of and yields to show , it suffices to show that .taking derivative of yields , i.e. , if and if . because , we know .this proves . to show , it suffices to show that , where note that and hence we can view as probabilities . 
since is a concave function , we can use jensen s inequality : , to obtain , using lhopital s rule ^\prime}{\left[(\alpha-1)^2\right]^\prime}\\\notag = & \lim_{\alpha\rightarrow1 } \frac{- ( \alpha-1)\sum_{i=1}^d p_i^\alpha \log^2p_i}{2(\alpha-1)}=-\frac{1}{2}\sum_{i=1}^d p_i \log^2 p_i.\end{aligned}\ ] ] note that , as , but . again , applying lhopital s rule yields the expressions for and ^\prime}{\left[(\alpha-1)^2\sum_{i=1}^dp_i^\alpha\right]^\prime}\\\notag = & \lim_{\alpha\rightarrow 1 } \frac{\left[\sum_{i=1}^dp_i^\alpha\log p_i \log \sum_{i=1}^dp_i^\alpha - ( \alpha-1)\sum_{i=1}^d p_i^\alpha \log^2 p_i\right]}{\left[2(\alpha-1)\sum_{i=1}^dp_i^\alpha + ( \alpha-1)^2\sum_{i=1}^dp_i^\alpha\log p_i \right]}\\\notag = & \lim_{\alpha\rightarrow 1 } \frac{\left(\sum_{i=1}^dp_i^\alpha\log p_i\right)^2/\sum_{i=1}^dp_i^\alpha - \sum_{i=1}^d p_i^\alpha \log^2 p_i + \text{negligible terms}}{2\sum_{i=1}^dp_i^\alpha + \text{negligible terms}}\\\notag = & \frac{1}{2}\left(\sum_{i=1}^dp_i\log p_i\right)^2 -\frac{1}{2}\sum_{i=1}^d p_i \log^2 p_i\end{aligned}\ ] ] | the shannon entropy is a widely used summary statistic , for example , network traffic measurement , anomaly detection , neural computations , spike trains , etc . this study focuses on estimating shannon entropy of data streams . it is known that shannon entropy can be approximated by rnyi entropy or tsallis entropy , which are both functions of the frequency moments and approach shannon entropy as . * * _ compressed counting ( cc)_** is a new method for approximating the frequency moments of data streams . our contributions include : * we prove that rnyi entropy is ( much ) better than tsallis entropy for approximating shannon entropy . * we propose the _ optimal quantile _ estimator for cc , which considerably improves the estimators in . * our experiments demonstrate that cc is indeed highly effective in approximating the moments and entropies . we also demonstrate the crucial importance of utilizing the variance - bias trade - off . |
in the recent years , the statistical physics techniques have been successfully applied in the description of socioeconomic phenomena . among the studied problems we can cite opinion dynamics , language evolution , biological aging , dynamics of stock markets , earthquakes and many others .these interdisciplinary topics are usually treated by means of computer simulations of agent - based models , which allow us to understand the emergence of collective phenomena in those systems .recently , the impact of nonconformity in opinion dynamics has atracted attention of physicists .anticonformists are similar to conformists , since both take cognizance of the group norm .thus , conformers agree with the norm , anticonformers disagree . on the other hand, we have the independent behavior , where the individual tends to resist to the groups influence . as discussed in , independence is a kind of nonconformity , and it acts on an opinion model as a kind of stochastic driving that can lead the model to undergo a phase transition . in fact , independence plays the role of a random noise similar to social temperature . in this workwe study the impact of independence on agents behavior in a kinetic exchange opinion model . for this purpose, we introduce a probability of agents to make independent decisions .our analytical results and numerical simulations show that the model undergoes a phase transition at critical points that depend on another model parameter , related to the agents flexibility .this work is organized as follows . in section 2we present the microscopic rules that define the model and in section 3 the numerical and analytical results are discussed .finally , our conclusions are presented in section 4 .our model is based on kinetic exchange opinion models ( keom ) .a population of agents is defined on a fully - connected graph , i.e. , each agent can interact with all others , which characterizes a mean - field - like scheme .in addition , each agent carries one of three possible opinions ( or states ) , namely , or .the following microscopic rules govern the dynamics : 1 .an agent is randomly chosen ; 2 . with probability , this agent will act independently . in this case , with probability he / she chooses the opinion , with probability he / she adopts the opinion and with probability he / she chooses the opinion ; 3 . on the other hand , with probability we choose another agent , say , at random , in a way that will influence .thus , the opinion of the agent in the next time step will be updated according to \,,\ ] ] where the sign function is defined such that . in the case where the agent does not act independently , the change of his / her state occur according to a rule similar to the one proposed recently in a keom .notice , however , that in ref . two randomly chosen agents and interact with competitive couplings , i.e. , the kinetic equation of interaction is $ ] . in this case, the couplings are random variables presenting the value ( ) with probability ( ) . in other words ,the parameter denotes the fraction of negative interactions . in this case , the model of ref . undergoes a nonequilibrium phase transition at . in the absence of negative interactions ( ), the population reaches consensus states with all opinions or .thus , our eq .( [ eq1 ] ) represents the keom of ref . with no negative interactions , and the above parameter can be related to the agents flexibility . 
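a direct monte carlo implementation of these rules is short. since the explicit form of eq.([eq1]) is not reproduced above, the interaction step below uses o_i <- sgn(o_i + w o_j), with sgn(0) = 0, as a plausible stand-in consistent with the kinetic-exchange rule quoted from the reference; the independent move uses equal probabilities 1/3 for the three opinions (the homogeneous case), and the population size and run lengths are illustrative only.

import numpy as np

def mc_step(opinions, q, w, rng, probs=(1 / 3, 1 / 3, 1 / 3)):
    """One Monte Carlo step = N single-agent updates on the fully connected graph."""
    n = opinions.size
    for _ in range(n):
        i = rng.integers(n)
        if rng.random() < q:                     # independent move
            opinions[i] = rng.choice((1, 0, -1), p=probs)
        else:                                    # kinetic-exchange move, stand-in for eq.(1)
            j = rng.integers(n)                  # a rare self-pairing is ignored for brevity
            opinions[i] = int(np.sign(opinions[i] + w * opinions[j]))
    return opinions

def order_parameter(opinions):
    return abs(opinions.sum()) / opinions.size

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, w = 500, 1.0
    for q in (0.05, 0.15, 0.25, 0.35):
        o = rng.choice((1, 0, -1), size=n)       # disordered start, 1/3 of each opinion
        for _ in range(200):                     # thermalization
            mc_step(o, q, w, rng)
        m = np.mean([order_parameter(mc_step(o, q, w, rng)) for _ in range(200)])
        print(f"q = {q:.2f}   <|O|> ~ {m:.3f}")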
in this case , for ( no independence ) all stationary states will give us , where is the order parameter of the system , and denotes a disorder or configurational average taken at steady states . the eq .( [ eq2 ] ) defines the `` magnetization per spin '' of the system .we will show by means of analytical and numerical results that the independent behavior works as a noise that induces a phase transition in the keom in the absence of negative interactions .the three states considered in the model can be interpreted as follows .we have a population of voters that can choose among two candidates a and b. thus , the opinions represent the intention of an agent to vote for the candidate a ( opinion ) , for the candidate b ( opinion ) , or the agent may be undecided ( opinion ) . in this case ,notice that there is a difference among the undecided and independent agents .an agent that decide to behave independently ( with probability ) can make a decision to change or not his / her opinion based on his / her own conviction , whatever is the his / her current state ( decided or undecided ) .in other words , an interaction with an agent is not required . on the other hand ,an undecided agent can change his / her opinion in two ways : due to an interaction with a decided agent ( following the rule given by eq . ( [ eq1 ] ) , with probability ) or due to his / her own decision to do that ( independently , with probability ) . regarding the independent behavior , one can consider the homogeneous case ( ) and the heterogeneous one ( ) .these cases will be considered separately in the next section .one can start studying the homogeneous case . in this case, we have that all probabilities related to the independent behavior , namely and , are equal to .thus , the probability that an agent chooses a given opinion , or independently of the opinions of the other agents is . for the analysis of the model , we have considered the order parameter defined by eq .( [ eq2 ] ) , as well as the susceptibility and the binder cumulant , defined as ( a ) , order parameter ( b ) and susceptibility ( c ) as functions of the independence probability for the homogeneous case ( ) and different population sizes . in the inset we exhibit the corresponding scaling plots .the estimated critical quantities are , , and .results are averaged over , , and samples for and , respectively.,title="fig:",scaledwidth=45.0% ] ( a ) , order parameter ( b ) and susceptibility ( c ) as functions of the independence probability for the homogeneous case ( ) and different population sizes . in the insetwe exhibit the corresponding scaling plots .the estimated critical quantities are , , and .results are averaged over , , and samples for and , respectively.,title="fig:",scaledwidth=45.0% ] + ( a ) , order parameter ( b ) and susceptibility ( c ) as functions of the independence probability for the homogeneous case ( ) and different population sizes . in the insetwe exhibit the corresponding scaling plots .the estimated critical quantities are , , and .results are averaged over , , and samples for and , respectively.,title="fig:",scaledwidth=45.0% ] notice that the binder cumulant defined by eq .( [ eq4 ] ) is directly related to the order s parameter _ kurtosis _ , that can be defined as .the initial configuration of the population is fully disordered , i.e. 
, we started all simulations with an equal fraction of each opinion ( for each one ) .in addition , one time step in the simulations is defined by the application of the rules defined in the previous section times . in fig .[ fig1 ] we exhibit the quantities of interest as functions of for different population sizes .all results suggest the typical behavior of a phase transition . in order to estimate the transition point ,we look for the crossing of the binder cumulant curves for the different sizes . from fig .[ fig1 ] ( a ) , the estimated value is , which agrees with the analytical prediction [ see eq .( [ qc_sym ] ) of the appendix ] .in addition , in order to determine the critical exponents associated with the phase transition we performed a finite - size scaling ( fss ) analysis .we have considered the standard scaling relations , that are valid in the vicinity of the transition .thus , we exhibit in the insets of fig .[ fig1 ] the scaling plots of the quantities of interest ( , and ) .our estimates for the critical exponents are , and .notice that the critical probability , , presents the same value of the critical fraction of negative interactions ( ) of the keom of ref . .in addition , the critical exponents are the same in the two formulations of the model .thus , the inclusion of the independent behavior with equal probabilities ( i.e. , ) produces a similar effect to the introduction of negative interactions in the keom of ref . . as a function of for and typical values of .one can see that the transition points depend on .the inset shows the region near .results are averaged over simulations.,scaledwidth=45.0% ] one can also consider the general case where . in this case , for an agent that act independently , the probabilities to choose the three possible opinions are different . as in the previous subsection, we started all simulations with an equal fraction of each opinion . in fig .[ fig2 ] we show the order parameter as a function of for typical values of and population size .one can see that the phase transition occurs for all values of exhibited in fig .[ fig2 ] , and the critical points depend on , i.e. , we have . furthermore , another interesting result that one can see in fig . [ fig2 ] is that for the order parameter goes exactly to at , presenting no finite - size effects as the other curves do , as can be easily seen in the inset of fig .this fact can be easily understood .indeed , for all agents that behave independently choose the opinion .thus , for a sufficiently large value of all agents will change independently to . in this case , eq .( [ eq2 ] ) give us an order parameter .this qualitative discussion can be confirmed by analytical considerations ( see the appendix ) . of samples ( over simulations ) that reaches the consensus as a function of for and typical population sizes ( main plot ) .in the inset it is exhibited the corresponding scaling plot .the best collapse of data was obtained for and .,scaledwidth=45.0% ] thus , the case is special , because all agents change their opinions to for a sufficient large value of the parameter . indeed ,if all agents are in the state , the evolution equation ( [ eq1 ] ) , when applied ( with probability ) , does not change the opinions to or anymore , which means that the system is in an absorbing state .this fact , together with the absence of finite - size effects for the order parameter defined in eq .( [ eq2 ] ) , suggests that one can not apply the scaling relations ( [ eq5 ] ) - ( [ eq8 ] ) for . 
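for the generic parameter values, where the scaling relations ([eq5])-([eq8]) do apply, the quantities entering the analysis are easy to estimate from a time series of the order parameter. the sketch below assumes the standard definitions U = 1 - <O^4>/(3<O^2>^2) and chi = N(<O^2> - <O>^2), since eqs.([eq3])-([eq4]) are not written out above, and uses the textbook rescaling x = (q - q_c) N^(1/nu) with the exponents left as inputs; the special case just discussed, for which these relations fail, is taken up next.

import numpy as np

def binder_cumulant(o_samples):
    """U = 1 - <O^4>/(3 <O^2>^2), the usual fourth-order cumulant."""
    o = np.asarray(o_samples, dtype=float)
    return 1.0 - np.mean(o ** 4) / (3.0 * np.mean(o ** 2) ** 2)

def susceptibility(o_samples, n):
    """chi = N (<O^2> - <O>^2), the standard fluctuation estimator."""
    o = np.asarray(o_samples, dtype=float)
    return n * (np.mean(o ** 2) - np.mean(o) ** 2)

def rescale(q, y, n, qc, exponent, nu):
    """Map a raw curve y(q; N) to  x = (q - qc) N^(1/nu),  y_scaled = y * N^exponent
    (exponent = +beta/nu for the order parameter, -gamma/nu for chi)."""
    return (np.asarray(q) - qc) * n ** (1.0 / nu), np.asarray(y) * n ** exponent

if __name__ == "__main__":
    # a gaussian stand-in series gives U ~ 0, the disordered-phase value
    series = np.abs(np.random.default_rng(0).normal(0.0, 0.1, size=20_000))
    print(binder_cumulant(series), susceptibility(series, 1000))
    # self-test of the rescaling with synthetic curves obeying the scaling form exactly
    beta_over_nu, nu, qc = 0.25, 2.0, 0.25
    scaling_fn = lambda x: 1.0 / np.cosh(x)
    x_grid = np.linspace(-2.0, 2.0, 5)
    for n in (1000, 2000, 4000):
        q = qc + x_grid * n ** (-1.0 / nu)
        o = n ** (-beta_over_nu) * scaling_fn((q - qc) * n ** (1.0 / nu))
        x, y = rescale(q, o, n, qc, beta_over_nu, nu)
        print(n, np.round(y, 4))        # identical rows indicate a perfect collapse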
in this case , it is better to analyze other quantity as an order parameter , as was done , for example , for the 2d sznajd model .thus , following , we performed several simulations of the system for and we measured the fraction of samples that reached the absorbing state with all opinions as a function of .the result is exhibited in fig .[ fig3 ] for typical values of , and in this case this order parameter strongly depends on the system size .considering scaling relations in a similar way as in ref . , i.e. , plotting as a function of the variable , one obtains , in agreement with the previous discussion , and .the corresponding data collapse is exhibited in the inset of fig .[ fig3 ] . as above discussed, the numerical results suggest that critical points depend on .this picture is confirmed by the analytical solution of the model , which give us ( see eq .( [ qcs ] ) of the appendix ) \ , .\ ] ] notice that the above solution give us , and the exact result for is , which agrees with the above discussion .we performed a fss analysis based on eqs .( [ eq5 ] ) - ( [ eq8 ] ) in order to obtain the critical points and the critical exponents for other values of . in fig .[ fig4 ] the eq .( [ eq9 ] ) is plotted together with all numerical estimates of .one can see that the numerical results agree very well with the analytical prediction .in addition , the critical exponents are the same for all values of , i.e. , we have , and , which indicates a universality on the order - disorder frontier of the model , except on the `` special '' point . versus , separating the ordered and the disordered phases .the symbols are the numerical estimates of the critical points , whereas the full line is the analytical prediction , eq .( [ eq9 ] ) .the open ( blue ) circle denotes the special case , as discussed in the text .the error bars determined by the fss analysis are smaller than data points.,scaledwidth=50.0% ]in this work we introduce the mechanism of independence in a three - state ( , and ) kinetic exchange opinion model . in the absence of negative interactions ,this model always evolve to ordered ( consensus ) states .our results show that independence acts as a noise , inducing a nonequilibrium phase transition in the model , and that the critical points depend on the agents flexibility .the numerical simulations suggest that we have the same critical exponents for all values of , i.e. , we have , and , which indicates a universality on the order - disorder frontier of the model .this is an expected result , due to the mean - field character of the interactions . on the other hand , the case is special , and the system undergoes a phase transition to an absorbing state with all agents in the undecided state .following the lines of refs . , we computed the critical values of the probability .we first obtained the matrix of transition probabilities whose elements furnish the probability that a state suffers the shift or change .let us also define , and , the stationary probabilities of each possible state . in the steady state , the fluxes into and out from a given state must balance . 
in particular , for the null state , one has , when the order parameter vanishes , it must be .finally , let us define , with , the probability that the state shift per unit time is , that is , .in the steady state , the average shift must vanish , namely , + r(1)-r(-1)=0 \,.\ ] ] for the more general case considering the flexibility parameter , the elements of the transition matrix are first , one can consider the homogeneous case . in this case , the above elements are simplified , and the null state condition ( [ nullstate ] ) give us ( disorder condition ) .thus , the null average shift condition ( [ nullshift ] ) , together with the above disorder condition , leads to for the more general case , the conditions ( [ nullstate ] ) and ( [ nullshift ] ) lead to a second - order equation for the variable , which give us two distinct solutions , namely \ , .\end{aligned}\ ] ] although both solutions are mathematically valid , the solution leads to in the disordered phase , and consequently and . on the other hand ,the solution is physically acceptable because it leads to as well as and , satisfying the normalization condition .thus , the physically valid analytical solution for the general model is given by .in particular , we have for [ which agrees with eq .( [ qc_sym ] ) ] and for .in addition , it can be shown that the null state condition ( [ nullstate ] ) for give us the solution in the disordered phase , and then .this explains the result for observed in figs .[ fig2 ] and [ fig3 ] , that was discussed in section 3 .the author acknowledges financial support from the universidade federal fluminense , brazil . | in this work we study the critical behavior of a three - state ( , , ) opinion model with independence . each agent has a probability to act as independent , i.e. , he / she can choose his / her opinion independently of the opinions of the other agents . on the other hand , with the complementary probability the agent interacts with a randomly chosen individual through a kinetic exchange . our analytical and numerical results show that the independence mechanism acts as a noise that induces an order - disorder transition at critical points that depend on the individuals flexibility . for a special value of this flexibility the system undergoes a transition to an absorbing state with all opinions . keywords : social dynamics , collective phenomenon , computer simulation , phase transition |
it is well - known that due to the fading effect , the transmission over wireless channels suffers from severe attenuation in signal strength .performance of wireless communication is much worse than that of wired communication . for the simplest point - to - point communication system , which is composed of one transmitter and one receiver only, the use of multiple antennas can improve the capacity and reliability .space - time coding and beamforming are among the most successful techniques developed for multiple - antenna systems during the last decades . however , in many situations , due to the limited size and processing power , it is not practical for some users , especially small wireless mobile devices , to implement multiple antennas . thus , recently , wireless network communication is attracting more and more attention . a large amount of effort has been given to improve the communication by having different users in a network cooperate .this improvement is conventionally addressed as cooperative diversity and the techniques cooperative schemes .many cooperative schemes have been proposed in literature .some assume channel information at the receiver but not the transmitter and relays , for example , the noncoherent amplify - and - forward protocol in and distributed space - time coding in .some assume channel information at the receiving side of each transmission , for example , the decode - and - forward protocol in and the coded - cooperation in .some assume no channel information at any node , for example , the differential transmission schemes proposed independently in .the coherent amplify - and - forward scheme in assumes full channel information at both relays and the receiver . but only channel direction information is used at relays . in all these cooperative schemes ,the relays always cooperate on their highest powers .none of the above pioneer work allow relays to adjust their transmit powers adaptively according to channel magnitude information , and this is exactly the concern of this paper .there have been several papers on relay networks with adaptive power control . in , outage capacity of networks with a single relay andperfect channel information at all nodes were analyzed .both work assume a total power constraint on the relay and the transmitter .a decode - and - forward protocol is used at the relay , which results in a binary power allocation between the relay and the transmitter . in ,performance of networks with multiple amplify - and - forward relays and an aggregate power constraint was analyzed .a distributive scheme for the optimal power allocation is proposed , in which each relay only needs to know its own channels and a real number that can be broadcasted by the receiver .another related work on networks with one and two amplify - and - forward relays can be found in . in , outage minimization of single - relay networks with limited channel - information feedbackis performed .it is assumed that there is a long - term power constraint on the total power of the transmitter and the relay . in this paper , we consider networks with a general number of amplify - and - forward relays and we assume a separate power constraint on each relay and the transmitter . due to the difference in the power assumptions , compared to , analysis of this new model is more difficult and totally different results are obtained . for multiple - antenna systems ,when there is no channel information at the transmitter , space - time coding can achieve full diversity . 
if the transmitter has perfect or partial channel information, performance can be further improved through beamforming, since beamforming takes advantage of the channel information (both direction and strength) at the transmit side to obtain a higher receive snr. with perfect channel information at the transmitter, or high-quality channel-information feedback from the receiver, one-dimensional beamforming is proved optimal. the more practical multiple-antenna systems with partial channel information at the transmitter, either channel statistics or quantized instantaneous channel information, have also been analyzed extensively. in many situations, an appropriate combination of beamforming and space-time coding outperforms either of the two schemes alone. in this paper, we will see a similar performance improvement in networks using network beamforming over distributed space-time coding and other existing schemes such as best-relay selection and coherent amplify-and-forward. we consider networks with one pair of transmitter and receiver but multiple relays. the receiver knows all channels and every relay knows its own channels perfectly. in networks with a direct link (dl) between the transmitter and the receiver, we also assume that the transmitter knows the dl fully. a two-step amplify-and-forward protocol is used, in which the transmitter sends information in the first step, and the relays (and the transmitter, if there is a dl) transmit in the second step. we first solve the power control problem for networks with no dl analytically. the exact solution can be obtained with a complexity that is linear in the number of relays. then, to perform network beamforming, we propose two distributive strategies in which a relay needs only its own channel information and a low-rate broadcast from the receiver. simulation shows that the optimal power control, or network beamforming, outperforms other existing schemes. we then consider networks with a dl during the first transmission step, the second transmission step, and both. for the first case, the power control problem is proved to be the same as the one in networks without the dl. for the other two cases, recursive numerical algorithms are provided. simulation shows that they have much better performance compared to networks without power control. we should clarify that only amplify-and-forward is considered here. for decode-and-forward, the result may be different and depends on the details of the coding schemes. the paper is organized as follows.
in the next section ,the relay network model and the main problem are introduced .section [ sec - pl ] works on the power control problem in relay networks with no dl and section [ sec - dl ] considers networks with a dl .section [ sec - conclusion ] contains the conclusion and several future directions .consider a relay network with one transmit - and - receive pair and relays as depicted in fig .[ fig - network ] .every relay has only one single antenna which can be used for both transmission and reception .denote the channel from the transmitter to the relay as and the channel from the relay to the receiver as .if the dl between the transmitter and the receiver exists , we denote it as .we assume that the transmitter knows , the relay knows its own channels and , and the receiver knows all channels and .the channels can have both fading and path - loss effects .actually , our results are valid for any channel statistics .we assume that for each transmission , the powers used at the transmitter and the relay are no larger than and , respectively .note that in this paper , only short - term power constraint is considered , that is , there is an upper bound on the average transmit power of each node for each transmission .a node can not save its power to favor transmissions with better channel realizations .we use a two - step amplify - and - forward protocol . during the first step, the transmitter sends . the information symbol is selected randomly from the codebook .if we normalize it as , the average power used at the transmitter is .the relay and the receiver , if a dl exists during the first step , receive respectively . and are the noises at the relay and the receiver at step 1 .we assume that they are . during the second step, the transmitter sends , if a dl exists during this step . at the same time, the relay sends the average transmit power of the relay can be calculated to be .if we assume that keeps constant for the two steps , the receiver gets is the noise at the receiver at step 2 , which is also assumed to be .note that if the transmitter sends during both steps , we assume that the total average power it uses is no larger than . with this ,the total average power in transmitting one symbol is no larger than .clearly , the coefficients are introduced in the model for power control .the power constraints at the transmitter and relays require that and .our network beamforming design is thus the design of and , such that the error rate of the network is the smallest .this is equivalent to maximize the receive snr , or the total receive snr of both branches if a dl exists during the first step . from ( [ x2-old ] ), we can easily prove that an optimal choice of the angles are and .that is , match filters should be used at relays and the transmitter during the second step to cancel the phases of their channels and form a beam at the receiver .we thus have what is left is the optimal power control , i.e. , the choice of .this is also the main contribution of our work .in this section , we investigate the optimal adaptive power control at the transmitter and relays in networks without a dl .section [ sec - pl - result ] presents the analytical power control result .section [ sec - pl - discussion ] comments on the result and gives distributive schemes for the optimal power control .section [ sec - pl - simulation ] provides simulated performance . with no dl , we have and . 
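the effect of the matched-filter phases can be checked in a couple of lines: with theta_i = -(arg f_i + arg g_i) the relay contributions add coherently at the receiver, so the combined gain equals the sum of the channel magnitudes. the rayleigh channel draws below are illustrative and the relay amplitudes are set to one; the receive snr that results from this choice is derived next.

import numpy as np

rng = np.random.default_rng(0)
r = 5
f = (rng.normal(size=r) + 1j * rng.normal(size=r)) / np.sqrt(2)   # transmitter -> relay
g = (rng.normal(size=r) + 1j * rng.normal(size=r)) / np.sqrt(2)   # relay -> receiver

gains = f * g                                                     # per-relay complex gain
matched = np.exp(-1j * (np.angle(f) + np.angle(g)))               # theta_i = -(arg f_i + arg g_i)

print("coherent sum with matched phases :", abs(np.sum(gains * matched)))  # = sum |f_i||g_i|
print("sum of magnitudes (upper bound)  :", np.sum(np.abs(gains)))
print("sum without phase rotation       :", abs(np.sum(gains)))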
from ( [ x2 ] ), the receive snr can be calculated to be it is an increasing function of .therefore , the transmitter should always use its maximal power , i.e. , .the receive snr is thus : before going into details of the snr optimization , we first introduce some notation to help the presentation . indicates the inner product . indicates the 2-norm . indicates the probability . denotes the coordinate of vector and denotes the -dimensional vector ^t ] , can be decomposed into the following intervals : =[r_0,r_1]\cup [ r_1,r_2]\cup \cdots\cup [ r_{r-2},r_{r-1}]\cup [ r_{r-1},r_r].\ ] ] we denote ] for .thus , for any and , we have .hence , .combining lemma [ lemma - edge ] and lemma [ lemma - scale ] , we have and thus we have solved the inner optimization of subproblem .the solution of the subproblems can thus be obtained . for ,define the solution of subproblem is .the solution of subproblem for is that is defined as [ lemma - problem - i ] from ( [ inner - solution ] ) , subproblem is equivalent to the following 1-dimensional optimization problem : when , ( [ decom - problem-1 ] ) is equivalent to .since is an increasing function of , its maximum is at . for ,define we have , thus , if and if .so , if , the optimal solution is reached at .otherwise , the optimal solution is reached at . from ( [ zi ] ) , subproblem solved at as defined in ( [ sub - solution ] ) .now , we can work on the relay power control problem presented in ( [ opt - problem ] ) .define as the solution of the snr optimization is , where is the smallest such that .[ thm - main ] first , since , we have .thus , exists .also , since , and decreases with , we have for .this means that is in the feasible region of the optimization problem .denote note that .since , is also a feasible point of subproblem 1 .thus , due the optimality of in subproblem 1 .this means that there is no need to consider subproblem 0 . for , if , and since , is a feasible point of subproblem .thus , due the optimality of in subproblem .this means that there is no need to consider subproblem .thus , we only need to check those s with , and find the one that results in the largest receive snr . from the definition in ( [ sub - solution - x ] ) , this is the same as to check those s with .now , we prove that if . first , from , we have since , we can prove easily that thus , we only need to check those s for and find the one causing the largest receive snr . from previous discussion , . define . now , we prove that for .from the proof of lemma 3 , we have thus , the optimal power control vector that maximizes the receive snr is .it is natural to expect the power control at relays to undergo an on - or - off scenario : a relay uses its maximum power if its channels are good enough and otherwise not to cooperate at all .our result shows otherwise .the optimal power used at a relay can be any value between 0 and its maximal power .in many situations , a relay should use partial of its power , whose value is determined not only by its own channels but all others as well .this is because every relay has two effects on the transmission .for one , it helps the transmission by forwarding the information , while for the other , it harms the transmission by forwarding noise as well .its transmit power has a non - linear effect on the powers of both the signal and the noise , which makes the optimization solution not an on - or - off one , not a decoupled one , and , in general , not even a differentiable function of channel coefficients . 
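since the closed-form construction in theorem [thm-main] involves quantities whose explicit formulas are not written out above, a brute-force numerical check is a useful companion: maximize the receive snr directly over the power-control fractions in [0,1] under box constraints. the snr function below assumes unit-variance noises and the usual af normalization sqrt(P_i/(1+P_0|f_i|^2)), consistent with the model section; the use of scipy's l-bfgs-b with random restarts is just one convenient choice, not part of the scheme itself.

import numpy as np
from scipy.optimize import minimize

def receive_snr(alpha, f, g, p0, p):
    """Receive SNR of the two-step AF scheme with matched-filter phases,
    assuming unit-variance noises and the normalization sqrt(P_i/(1+P0|f_i|^2))."""
    w = p / (1.0 + p0 * np.abs(f) ** 2)
    amp = np.sqrt(p0) * np.sum(alpha * np.sqrt(w) * np.abs(f) * np.abs(g))
    return amp ** 2 / (1.0 + np.sum(alpha ** 2 * w * np.abs(g) ** 2))

def numerical_power_control(f, g, p0, p, restarts=25, seed=0):
    """Box-constrained maximization of the receive SNR over alpha in [0,1]^R;
    random restarts guard against poor starting points."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(restarts):
        res = minimize(lambda a: -receive_snr(a, f, g, p0, p),
                       rng.uniform(size=len(f)),
                       bounds=[(0.0, 1.0)] * len(f), method="L-BFGS-B")
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    r = 4
    f = (rng.normal(size=r) + 1j * rng.normal(size=r)) / np.sqrt(2)
    g = (rng.normal(size=r) + 1j * rng.normal(size=r)) / np.sqrt(2)
    p0, p = 10.0, np.full(r, 10.0)
    alpha, snr = numerical_power_control(f, g, p0, p)
    print("numerically optimal alpha :", np.round(alpha, 3), " snr :", round(snr, 3))
    print("all relays at full power  :", round(receive_snr(np.ones(r), f, g, p0, p), 3))
    print("best single relay only    :",
          round(max(receive_snr(np.eye(r)[i], f, g, p0, p) for i in range(r)), 3))

on random channel draws the numerical optimum typically has at least one fraction equal to one and dominates both the all-full-power and the best-single-relay choices, in line with the discussion that follows.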
as shown in theorem [ thm - main ] and lemma [ lemma - problem - i ] , the fraction of power used at relay satisfies for and for .thus , the relays whose s are the largest use their maximal powers . since , there is at least one relay that uses its maximum power .this tells us that the relay with the largest always uses its maximal power .the remaining relays whose s are smaller only use parts of their powers . for ,the power used at the relay is , which is proportional to since is a constant for each channel realization .although does not appear explicitly in the formula , it affects the decision of whether the relay should use its maximal power . actually , in determining whether a relay should use its maximal power , not only do the channel coefficients and power constraint at this relay account , but also all other channel coefficients and power constraints .the power constraint of the transmitter , , plays a roll as well .due to these special properties of the optimal power control solution , it can be implemented distributively with each relay knowing only its own channel information . in the following ,we propose two distributed strategies .one is for networks with a small number of relays , and the other is more economical in networks with a large number of relays .the receiver , which knows all channels , can solve the power control problem .when the number of relays , , is small , the receiver broadcasts the indexes of the relays that use their full powers and the coefficient .if relay hears its own index from the receiver , it will use its maximal power to transmit during the second step .otherwise , it will use power .the bits needed for the feedback is where is the number of relays that use their maximal powers and is the number of bits needed in broadcasting the real number . instead , the receiver can also broadcast two real numbers : and a real number that satisfies .relay calculates its own .if , relay uses its maximal power . otherwise , it uses power .the number of bits needed for the feedback is .thus , when is large , this strategy needs less bits of feedback compared to the first one .networks with an aggregate power constraint on relays were analyzed in . in this case , with the same notation in section [ sec - pl - result ] , and .the optimal solution is is a function of its own channels only and an extra coefficient , which is the same for all relays .therefore , this power allocation can be done distributively with the extra knowledge of one single coefficient , which can be broadcasted by the receiver . in our case, every relay has a separate power constraint .this is a more practical assumption in sensor networks since every sensor or wireless device has its own battery power limit .the power control solutions of the two cases are totally different .if relay selection is used and only one relay is allowed to cooperate , it can be proved easily that we should choose the relay with the highest we call the relay selection function since a relay with a larger results in a higher receive snr . while all relays are allowed to cooperate , the concepts of the best relay and relay selection function are not clear .since the power control problem is a coupled one , it is hard to measure how much contribution a relay has . 
as discussed before , in network beamforming, a relay with a larger does not necessarily use a larger power or has more contribution .but we can conclude that if , the fraction of power used at relay , , is no less than the fraction of power used at relay , .it is worth to mention that in network beamforming , relays with larger enough s use their maximal powers no matter what their maximal powers are .actually , it is not hard to see that if at one time channels of all relays are _ good _ , every relay should use its maximum power . in this section ,we show simulated performance of network beamforming and compare it with performance of other existing schemes .figures [ fig - r2l2 ] and [ fig - r3l2 ] show performance of networks with rayleigh fading channels and the same power constraint on the transmitter and relays . in other words , are and .the horizontal axis of the figures indicates . in fig .[ fig - r2l2 ] , simulated block error rates of network beamforming with optimal power control are compared to those of best - relay selection , larsson s scheme in with total relay power , distributed space - time coding in , and amplify - and - forward without power control ( every relay uses its maximal power ) in a 2-relay network . the information symbol is modulated as bpsk .we can see that network beamforming with optimal power control outperforms all other schemes .it is about 0.5db and 2db better than larsson s scheme and best - relay selection , respectively . with perfect channel knowledge at relays , it is better than alamouti distributed space - time coding , which needs no channel information at relays .amplify - and - forward with no power control only achieves diversity 1 , distributed space - time coding achieves a diversity slightly less than two , while best - relay selection , network beamforming , and larsson s scheme achieve diversity 2 .[ fig - r3l2 ] shows simulated performance of a 3-relay network under different schemes .similar diversity results are obtained .but for the 3-relay case , network beamforming is about 1.5db and 3.5db better than larsson s scheme and best - relay selection , respectively . in fig .[ fig - r2l2-dp ] , we show performance of a 2-relay network in which and . that is , the transmitter and the first relay have the same power constraint while the second relay has only half the power of the first relay .the channels are assumed to be rayleigh fading channels . in fig .[ fig - r2l2-distance ] , we show performance of a 2-relay network whose channels have both fading and path - loss effects .we assume that the distance between the first relay and the transmitter / receiver is 1 , while the distance between the second relay and the transmitter / receiver is 2 .the path - loss exponent is assume to be 2 .we also assume that the transmitter and relays have the same power constraint , i.e. , . in both cases ,distributed space - time coding does not apply , and larsson s scheme applies for the second case only .so , we compare network beamforming with best - relay selection and amplify - and - forward with no power control only .performance of larsson s scheme is shown in fig .[ fig - r2l2-distance ] as well .both figures show the superiority of network beamforming to other schemes .the previous section is on power control of relay networks with no dl between the transmitter and receiver . in this section , we discuss networks with a dl . as in , there are several scenarios , which we discuss separately . 
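as a rough illustration of the kind of comparison described in the simulation discussion above , the sketch below estimates average bpsk error rates over rayleigh fading for two of the baseline schemes only ( amplify - and - forward with every relay at full power , and selection of the single relay with the largest single - relay snr ) ; the snr expression is the same reconstructed beamforming formula used earlier , the error probability is the analytic bpsk expression conditioned on the channels , and the optimal power control itself is not implemented here .

```python
import numpy as np
from math import erfc

def ber_bpsk(snr):
    # BPSK error probability Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR)), given the channels
    return 0.5 * erfc(float(np.sqrt(snr)))

def af_snr(alpha, P0, P, f, g):
    a = np.abs(f) * np.abs(g) * alpha * np.sqrt(P) / np.sqrt(1.0 + P0 * np.abs(f) ** 2)
    b = (alpha ** 2) * P * np.abs(g) ** 2 / (1.0 + P0 * np.abs(f) ** 2)
    return P0 * a.sum() ** 2 / (1.0 + b.sum())

def compare_baselines(P0, P, trials=20000, seed=1):
    """Average BER of full-power AF versus best-relay selection over Rayleigh fading."""
    rng = np.random.default_rng(seed)
    R = len(P)
    err_full = err_best = 0.0
    for _ in range(trials):
        f = (rng.normal(size=R) + 1j * rng.normal(size=R)) / np.sqrt(2.0)
        g = (rng.normal(size=R) + 1j * rng.normal(size=R)) / np.sqrt(2.0)
        # amplify-and-forward with no power control: every relay at full power
        err_full += ber_bpsk(af_snr(np.ones(R), P0, P, f, g))
        # best-relay selection: only the relay with the largest single-relay SNR transmits
        singles = [af_snr(np.eye(R)[i], P0, P, f, g) for i in range(R)]
        err_best += ber_bpsk(max(singles))
    return err_full / trials, err_best / trials
```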
in this subsection , we consider relay networks with a dl during the first step only .this happens when the receiver knows that the transmitter is in vicinity and listens during the first step , while the transmitter is not aware of the dl or is unwilling to do the optimization because of its power and delay constraints .it can also happen when the transmitter is in the listening or sleeping mode during the second step .in this case , . from ( [ x1 ] ) and ( [ x2 ] ) ,the system equations can be written as = \left[\begin{array}{c } \alpha_0\sqrt{p_0}f_0 \\\alpha_0\sqrt{p_0}\sum_{i=1}^r \frac{\alpha_i|f_ig_i|\sqrt{p_i}}{\sqrt{1+\alpha_0 ^ 2|f_i|^2p_0 } } \end{array}\right]s + \left[\begin{array}{c } w_1 \\ w_2+\sum_{i=1}^r\frac{\alpha_i|g_i|\sqrt{p_i}}{\sqrt{1+\alpha_0 ^ 2|f_i|^2p_0}}e^{-j\arg f_i}v_i \end{array}\right].\ ] ] using maximum ratio combining , the ml decoding is the optimization problem is thus the maximization of the total receive snr of both transmission branches , which equals first , both terms in the snr formula increase as increases .thus , , i.e. , the transmitter should use its maximum power .the snr optimization problem becomes the one in section [ sec - pl - result ] , in which there is no dl .therefore , the power control of networks with a dl during the first step only is exactly the same as that of networks without a dl .this result is intuitive . since with a dl during the first step only , operations at both the transmitter and relays keep the same as networks without the dl .the only difference is that the receiver obtains some extra information from the transmitter during the first step , and it can use the information to improve the performance without any extra cost . for the single - relay case , it can be proved easily that to maximize the receive snr , the relay should use its maximal power as well , that is , . in this subsection ,we consider relay networks with a dl during the second step only .this happens when the transmitter knows that the receiver is at vicinity and determines to do more optimization to allocate its power between the two transmission steps .however , the receiver is unaware of the dl and is not listening during the first step .it can also happen when the receiver is in transmitting or sleeping mode during the first step .in this case , and is given in ( [ x2 ] ) .the receive snr can be calculated to be first , we show that should take its maximal value 1 , i.e. , the transmitter should use all its power .assume that is the optimal solution .define .we have . therefore , .this contradicts the assumption that is optimal .define the receiver snr can be calculated to be for any fixed , we can optimize following the analysis in section [ sec - pl - result ] .the following theorem can be proved .define for and . for any fixed , order as for , let and define defined as the receive snr is maximized at , where is the smallest such that .[ thm - main - dl-2nd ] the proof of this theorem follows the one of theorem [ thm - main ] and the lemmas it uses . 
as discussed in section [ sec - pl - result ] , for networks with no dl , there is no need to consider the solution of subproblem 0 .here it is different .define .if we denote the solution of subproblem 0 , ,0_r\preceq \hat{\by}\preceq\hat{\ba } } \frac{\left(a+\langle \hat{\bc } , \hat{\by}\rangle\right)^2}{1+\|\hat{\by}\|^2} ] , the that maximizes the receive snr satisfies .thus , the optimal can be found numerically by solving .it can be proved easily that when and when .thus , the maximum of is reached inside .when the power at the transmitter is high ( ) , the receive snr can be approximated by where and .it can be calculated straightforwardly that for , .\ ] ] and this is a quartic equation of , whose solutions can be calculated analytically .note that when and when .thus the maximum of is reached inside .an approximate solution of can thus be obtained analytically at high transmit powers .now we consider the cases of and . if , the system degrades to a point - to - point one since only the dl works .thus , the receive snr is . for , we can obtain the optimal using theorem [ thm - main - dl-2nd ] .thus , we obtain three sets of and for the three cases : , , and , respectively .the optimal solution of the system is the set of and corresponding to the largest receive snr .the power control problem in networks with a dl during the second step only can thus be solved using the following recursive algorithm .1 . initialization : set , the -dimensional vector of all ones , , and . set the maximal number of iterations and the threshold .2 . optimize with .denote the solution as .we can either do this numerically or calculate the high snr approximation .3 . with ,find the that maximizes the receive snr using theorem [ thm - main - dl-2nd ] .denote it as .calculate .4 . set . if and , set , , and go to step 2 .find the solution of with using theorem [ thm - main - dl-2nd ] .denote this solution as .the optimal solution is : .[ alg - pl - dl-2nd ] similarly , the distributive strategies proposed in section [ sec - pl - discussion ] can be applied here . in this subsection, we consider relay networks with a dl during both the first and the second steps .this happens when both the transmitter and the receiver know that they are not too far away from each other and decide to communicate during both steps with the help of relays during the second step . 
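the recursive algorithm listed above alternates between the transmitter power split and the relay power vector ; the skeleton below reproduces only that alternating structure , with the two inner solvers ( the numerical or high - snr optimisation of the split , and theorem [ thm - main - dl-2nd ] for the relay powers ) left as user - supplied callbacks , and with the final comparison against the degenerate cases ( e.g. using the direct link alone ) omitted .

```python
def alternating_power_control(optimize_split, optimize_relays, snr,
                              y0, max_iters=50, tol=1e-6):
    """Structural sketch of the recursive power-control algorithm.

    optimize_split(y)  -> best transmitter power split for fixed relay powers y
                          (numerical 1-D search or the high-SNR approximation)
    optimize_relays(t) -> best relay power vector for fixed split t
                          (theorem [thm-main-dl-2nd] in the text)
    snr(t, y)          -> receive SNR of the pair (t, y)
    """
    y, prev, t = y0, -float("inf"), None
    for _ in range(max_iters):
        t = optimize_split(y)        # step 2: transmitter split with relays fixed
        y = optimize_relays(t)       # step 3: relay powers with the split fixed
        cur = snr(t, y)
        if abs(cur - prev) <= tol:   # step 4: stop once the SNR has converged
            break
        prev = cur
    return t, y
```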
from ( [ x1 ] ) and ( [ x2 ] ), the system equation can be written as = \left[\begin{array}{c } \alpha_0\sqrt{p_0}f_0 \\\beta_0|f_0|+\alpha_0\sqrt{p_0}\sum_{i=1}^r \frac{\alpha_i|f_ig_i|\sqrt{p_i}}{\sqrt{1+\alpha_0 ^ 2|f_i|^2p_0 } } \end{array}\right]s + \left[\begin{array}{c } w_1 \\ w_2+\sum_{i=1}^r \frac{\alpha_i|g_i|\sqrt{p_i}}{\sqrt{1+\alpha_0^ 2|f_i|^2p_0}}e^{-j\arg f_i}v_i \end{array}\right].\ ] ] similar to the networks discussed in section [ sec - dl-1ststep ] , the maximum ratio combining results in the following ml decoding : the total receive snr of both transmission branches can be calculated to be the same as the networks in section [ sec - dl-2ndstep ] , should take its maximal value , which is 1 .that is , .similar to the snr optimization in section [ sec - dl-2ndstep ] , for any given , the snr maximization is the same as the maximization of , which is solved by theorem [ thm - main - dl-2nd ] .but due to the difference in the receive snr formula , the optimal given is different .it is the solution of .when the dl exists during both steps , the case of , whose receive snr is will never outperform the case of , whose receive snr is for some .thus , the case needs not to be considered .the power control problem in networks with a dl during both steps can thus be solved using the following recursive algorithm . 1 .initialization : set , , and . set the maximal number of iterations and the threshold .2 . optimize with .denote the solution as .we can do this numerically .3 . with ,find the that maximizes using theorem [ thm - main - dl-2nd ] .denote it as .calculate .4 . set . if and , set , and go to step 2 .find the solution of with using theorem [ thm - main - dl-2nd ] .denote this solution as .the optimal solution is : .[ alg - pl - dl - both ] again , the distributive strategies proposed in section [ sec - pl - discussion ] can be applied here . in this subsection, we compare single - relay networks in which the power constraints at the transmitter and the relay are same , i.e. , .the channels are assumed to have both the fading and path - loss effect .there are four cases : no dl , a dl during the first step only , a dl during the second step only , and a dl during both steps . in fig .[ fig - triangle - network ] , we compare networks in which the distance of every link is the same , i.e. , the three nodes are vertexes of an equilateral triangle with unit - length edges as shown in fig .[ fig - equallacteral - network ] .we can see that the network with no dl has diversity 1 while networks with a dl and power control achieve diversity 2 .the network with a dl during the first step performs less than 0.5db better than the network with a dl during the second step only , while the network with a dl during both steps performs the best ( about 1db better than the network with a dl during the first step only ) . to illuminate the effect of power control , we show performance of networks whose transmit power at the relay and transmitter are fixed . for the network with a dl during the first step only, there is no power control problem since it is optimal for both the transmitter and the relay to use their maximal powers . for the other two cases , we let the transmitter uses half of its power , , to each of the two steps and the relay always uses its maximum power .we can see that , if the dl only exists during the second step , without power control , the achievable diversity is 1 . at block error rates of and , it performs 3 and 6db worse , respectively . 
for networks with a dl during both steps , power control results in a 1.5db improvement . in fig .[ fig - line - network - e2 ] and [ fig - line - network - e3 ] , we show performance of line networks with path - loss exponents 2 and 3 respectively . as shown in fig .[ fig - line - network ] , the three nodes are on a line and the relay is in the middle of the transmitter and receiver .the distance between the transmitter and receiver is assumed to be 2 .the same phenomenon as in the equilateral triangle networks can be observed .the network with a dl during both steps performs the best ( about 1db better than the network with a dl during the first step only ) .the network with a dl at first step only performs slightly better than the one with a dl during the second step only .but the difference is smaller than that in fig .[ fig - triangle - network ] .the performance difference between line networks with and without dls is smaller than those in equilateral triangle networks , and it gets even smaller for larger path - loss exponents .this is because as the distance between the transmitter and receiver or the path - loss exponent is larger , the quality of the dl is lower .therefore , the improvement due to this link is smaller . for both cases , power control results in a 1.5db improvement when the dl link exists for both steps and a higher diversity when the dl exists for the second step only .then we work on the random network in fig .[ fig - random - network ] , in which the relay locates randomly and uniformly within a circle in the middle of the transmitter and the receiver .the distance between the transmitter and the receiver is assumed to be 2 .the radius of the circle is denoted as .we assume that .this is a reasonable model for ad hoc wireless networks since if communications between two nodes is allowed to be helped by one other relay , one should choose a relay that is around the middle of the two nodes .in other words , the distance between the relay and the transmitter or receiver should be shorter than that between the transmitter and receiver .we work out the geometry first . as in fig .[ fig - random - network ] , we denote the positions of the transmitter , the receiver , the relay , and the middle point of the transmitter and the receiver as , and , respectively .denote the angle of and as and the length of as .the lengths of and are thus and .since is uniformly distributed within the circle , is uniform in and the pdf and cdf of can be calculated to be respectively .define .if is uniform on , it can be proved that thus , has the same distribution as .therefore , we generate to represent . fig .[ fig - random - network - p ] shows performance of random networks with path - loss exponent 2 and .we can see that the same phenomenon as in line networks can be observed . 
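the geometry of the random network described above can be sampled directly ; the snippet below places the relay uniformly in a disc of radius rho centred at the midpoint of the transmitter ( -1 , 0 ) and receiver ( + 1 , 0 ) using the sqrt(u ) radial rule derived above , and converts the two hop distances into path - loss factors with the common convention that the average channel power decays as distance^(-exponent ) ; the exact normalisation used in the simulations is not recoverable from this transcript .

```python
import numpy as np

def sample_relay_geometry(rho, n, exponent=2.0, seed=0):
    """Relay positions uniform in a disc of radius rho around the midpoint of the
    transmitter (-1, 0) and receiver (+1, 0), plus path-loss factors per hop."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)     # angle is uniform on [0, 2*pi)
    d = rho * np.sqrt(rng.uniform(size=n))            # radius: cdf (d/rho)^2  =>  d = rho*sqrt(u)
    relay = np.stack([d * np.cos(theta), d * np.sin(theta)], axis=1)
    dist_tx = np.linalg.norm(relay - np.array([-1.0, 0.0]), axis=1)
    dist_rx = np.linalg.norm(relay - np.array([+1.0, 0.0]), axis=1)
    # one common convention: average channel power decays as distance**(-exponent)
    loss_tx = dist_tx ** (-exponent / 2.0)
    loss_rx = dist_rx ** (-exponent / 2.0)
    return relay, loss_tx, loss_rx
```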
with a dl at both steps, the random network performs about 1db worse than the line network .in this paper , we propose the novel idea of beamforming in wireless relay networks to achieve both diversity and array gain .the scheme is based on a two - step amplify - and - forward protocol .we assume that each relay knows its own channels perfectly .unlike previous works in network diversity , the scheme developed here uses not only the channels phase information but also their magnitude .match filters are applied at the transmitter and relays during the second step to cancel the channel phase effect and thus form a coherent beam at the receiver , in the mean while , optimal power control is performed based on the channel magnitude to decide the power used at the transmitter and relays .the power control problem for networks with any numbers of relays and no direct link is solved analytically .the solution can be obtained with a complexity that is linear in the number of relays .the power used at a relay depends on not only its own channels nonlinearly but also all other channels in the network . in general , it is not even a differentiable function of channel coefficients .simulation with rayleigh fading and path - loss channels show that network beamforming achieves the maximum diversity while amplify - and - forward without power control achieves diversity 1 only .network beamforming also outperforms other cooperative strategies .for example , it is about 4db better than best - relay selection .relay networks with a direct link between the transmitter and receiver are also considered in this paper . for networks with a direct link during the first step only , the power control at relays and the transmitter is exactly the same as that of networks with no direct link . for networks with a direct link during the second step only and networks with a direct link during both steps , the solutions are different .recursive numerical algorithms for the power control at both the transmitter and relays are given .simulated performance of single - relay networks with different topologies shows that optimal power control results in about 1.5db improvement in networks with a direct link at both steps and a higher diversity in networks with a direct link at the second step only . we have just scratched the surface of a brand - new area .there are a lot of ways to extend and generalize this work .first , it is assumed in this work that relays and sometimes the transmitter know their channels perfectly , which is not practical in many networks .network beamforming with limited and delayed feedback from the receiver is an important issue . in multiple - antenna systems , beamforming with limited and delayed channel information feedback has been widely probed . however , beamforming in networks differs from beamforming in multiple - antenna systems in a couple of ways . in networks , it is difficult for relays to cooperate while in a multiple - antenna system , different antennas of the transmitter can cooperate fully .there are two transmission steps in relay networks while only one in multiple - antenna systems , which leads to different error rate and capacity calculation and thus different designs .second , the relay network probed in this paper has only one pair of transmitter and receiver . 
when there are multiple transmitter - and - receiver pairs , an interesting problem is how relays should allocate their powers to aid different communication tasks .finally , the two - hop protocol can be generalized as well . for a given network topology ,one relevant question is how many hops should be taken to optimize the criterion at consideration , for example , error rate or capacity .y. chang and y. hua , `` application of space - time linear block codes to parallel wireless relays in mobile ad hoc networks , '' in _ prof . of the 36th asilomar conference on signals , systems and computers _ , nov . 2003 .y. hua , y. mei , and y. chang , `` wireless antennas - making wireless communications perform like wireline communications , '' in _ prof . of ieeeap - s topical conference on wireless communication technology _ , oct .y. tang and m. c. valenti , `` coded transmit macrodiversity : block space - time codes over distributed antennas , '' in _ prof . of ieee vehicular technology conference 2001-spring _ , vol . 2 , pp .1435 1438 , may 2001 .a. sendonaris , e. erkip , and b. aazhang , `` user cooperation diversity - part ii : implementation aspects and performance analysis , '' _ ieee transactions on communications _ , vol .51 , pp . 19391948 , nov .r. u. nabar , h. b , and f. w. kneubuhler , `` fading relay channels : performance limits and space - time signal design , '' _ ieee journal on selected areas in communications _ , pp .1099 1109 , aug . 2004 .j. n. laneman and g. w. wornell , `` distributed space - time - coded protocols for exploiting cooperative diversity in wireless network , '' _ ieee transactions on information theory _49 , pp . 2415 - 2425 , oct . 2003 .m. janani , a. hedayat , t. e. hunter , and a. nosratinia , `` coded cooperation in wireless communications : space - time transmission and iterative decoding , '' _ ieee transactions on signal processing _ , pp .362 - 371 , feb . 2006 .k. azarian , h. e. gamal , and p. schniter , `` on the achievable diversity - multiplexing tradeoff in half - duplex cooperative channels , '' _ ieee transactions on information theory _ , vol .51 , pp .4152 - 4172 , dec . 2005 .j. n. laneman , d. n. c. tse , and g. w. wornell , `` cooperative diversity in wireless networks : efficient protocols and outage behavior , '' _ ieee transactions on information theory _ , pp .3062 - 3080 , dec . 2004 .y. cao , b. vojcic , and m. souryal , `` user - cooperative transmission with channel feedback in slow fading environment , '' in _ proc . of ieeevehicular technology conference 2004-fall _ , pp . 2063 - 2067 , 2004 .p. a. anghel and m. kaveh ,`` on the performance of distributed space - time coding systems with one and two non - regenerative relays , '' _ ieee transactions on wireless communications _, pp . 682 - 692 , mar .2006 .a. narula , m. l. lopez , m. d. trott , and g. wornell , `` efficient use of side information in multiple - antenna data transmission over fading channels , '' _ ieee journals on selected areas in communications _ , pp .1423 - 1436 , oct . 1998 .s. a. jafar and a. j. goldsmith , `` transmit optimization and optimality of beamforming for multiple antenna systems with imperfect feedback , '' _ ieee transactions on wireless communications _, pp . 1165 - 1175 , july 2004 . k. k. mukkavilli , a. sabharwal , e. erkip , and b. aazhang , `` on beamforming with finite rate feedback in multiple - antenna systems , '' _ ieee transactions on information theory _ , pp .2562- 2579 , oct .2003 .s. ekbatani and h. 
jafarkhani , `` combining beamforming and space - time coding for multi - antenna transmitters using noisy quantized channel direction feedback , '' _ submitted to ieee transactions on communications _ , 2006 . | this paper is on beamforming in wireless relay networks with perfect channel information at relays , the receiver , and the transmitter if there is a direct link between the transmitter and receiver . it is assumed that every node in the network has its own power constraint . a two - step amplify - and - forward protocol is used , in which the transmitter and relays not only use match filters to form a beam at the receiver but also adaptively adjust their transmit powers according to the channel strength information . for a network with any number of relays and no direct link , the optimal power control is solved analytically . the complexity of finding the exact solution is linear in the number of relays . our results show that the transmitter should always use its maximal power and the optimal power used at a relay is not a binary function . it can take any value between zero and its maximum transmit power . also , surprisingly , this value depends on the quality of all other channels in addition to the relay s own channels . despite this coupling fact , distributive strategies are proposed in which , with the aid of a low - rate broadcast from the receiver , a relay needs only its own channel information to implement the optimal power control . simulated performance shows that network beamforming achieves the maximal diversity and outperforms other existing schemes . then , beamforming in networks with a direct link are considered . we show that when the direct link exists during the first step only , the optimal power control at the transmitter and relays is the same as that of networks with no direct link . for networks with a direct link during the second step only and both steps , recursive numerical algorithms are proposed to solve the power control problem . simulation shows that by adjusting the transmitter and relays powers adaptively , network performance is significantly improved . |
ionic microgel and nanogel particles formed by cross - linked polyelectrolyte networks display many remarkable properties which make them suitable for applications in drug - delivery systems , where molecules can be encapsulated and then released at specific targets . this is possible through a swelling ( or de - swelling ) process , where solvent molecules flow into ( or leave ) the cross - linked network . experimentally it is well - known that the properties of ionic microgel and nanogel particles can be significantly affected by changes in temperature , solvent quality , salt concentration , ionic strength , and degree of cross - linking . although these effects have been extensively studied in ionic macrogels , little is known about ionic nanogels formed by finite - size cross - linked polyelectrolyte networks . theoretically , the swelling processes in polyelectrolyte networks have been studied using flory s theory , which combines the mixing term , electrostatic , and elastic contributions into the total free energy of the system . an important aspect of the theory is the relationship between the ( effective ) number of chains of the network and the elastic free energy , _ i.e. _ . clearly , the unambiguous interpretation of this relationship in terms of the network topology and its validation is relevant to the development of predictive theories , _ e.g. _ . in principle , such a relationship can be verified by computer simulations considering explicit network structures , but to our knowledge current studies have so far only exploited polyelectrolyte networks with diamond - like structures ( _ i.e. _ with tetrafunctional cross - links ) . here we explore this issue by considering three types of nanogel particles generated by polyelectrolyte networks of different topologies characterized by the functionality ( also known as coordination number or connectivity ) of the cross - links . in particular , we will investigate how the equilibrium properties of the nanosized gel particles are affected by the fraction of ionizable groups and salt concentration in solution . these important effects have also been explored by experiments and theory . as illustrated in fig . [ systemdef : zcoord](a ) , the nanogel particle is modeled as a polyelectrolyte network with monomers inside a spherical wigner - seitz ( ws ) cell of volume , where is the number density of nanogel particles in solution . the radius of the ws cell is then . some monomers of the network are ionizable ( anionic ) and will dissociate , releasing a counterion into solution . the fraction of dissociation determines the number of negatively charged monomers in the network , and charge neutrality requires counterions in solution . the counterions are allowed to diffuse everywhere inside the ws cell . for simplicity , we will assume that all the ions are spherical while the solvent , water , is modeled as a dielectric continuum .

in order to investigate the effect of topology on the swelling of the nanogel particle , we study three different networks that are arbitrarly generated from a regular structures determined by the functionality of the cross - links .as illustrated in fig .[ systemdef : zcoord](b)-(d ) , the networks are built considering cross - links with , , and , that are connected to each other through chains with monomers .the final networks are obtained by cropping the large regular template structures ( with the number of monomers larger than ) , so that only the monomers inside a spherical volume comprise the nanogel particle .importantly , we have selected such spherical volume in a way that all the different networks are formed by approximately the same number of monomers , which leads to different number of cross - links with functionality , as shown in table [ parameters ] .note that this procedure leads to networks with higher effective cross - link line density as is lowered . also , the cropping procedure inevitably results in dangling chains ( which could have less than monomers ) near the surface of the polyelectrolyte network , as illustrated in fig .[ systemdef : zcoord](a ) . .the dissociation fraction determines the number of charged monomers with ionizable ( anionic ) groups ( shown in dark blue ) , and the number counterions in solution ( shown in green ) ; neutral monomers are displayed as light blue spheres .inner circle represents the radius of nanogel particle defined as the radius of gyration of the polyelectrolyte network .( b - d ) schematic representation of cross - links with functionality that connect chains with monomers and are used to generate the network with different topologies . ]ccccc + & & & & + 4 & 2808 & 152 & 104 & 0.054 + 5 & 2524 & 108 & 108 & 0.043 + 6 & 2715 & 93 & 78 & 0.035 + + [ parameters ] next we introduce the interaction potentials between the components of the system .to describe a nanogel particle immersed in an implicit athermal solvent , where there is a slight preference of polyelectrolytes to be surrounded by solvent molecules , our model assumes that all monomers in the network interact via a non - bonded , shifted and truncated repulsive lennard - jones potential , _i.e. _ the weeks - chandler - andersen ( wca ) potential commonly used in simulations of polyelectrolytes ( see _ e.g. _ ) , which is given by ~,\ ] ] if the distance between and monomers is less than a cutoff radius , or zero otherwise ; here and are parameters that determine the energy and length scales , respectively ; and , where is the temperature and is the boltzmann s constant . in addition , adjacent monomers in the network ( which are defined _ a priori _ by construction ) interact via a finitely extensible nonlinear elastic ( fene ) potential , ~~ , \label{fenepotential}\ ] ] where is the spring constant , defines the minimum of the potential ( _ i.e. _ its equilibrium distance ) , and is the maximum extension allowed .all charged particles ( _ i.e. _ monomers and ions ) interact via an electrostatic potential which can be written as where is the bjerrum length and or 0 , depending of the charges of the interacting particles .we also include a hard core ( excluded volume ) potential between all particles ( _ i.e. _ monomer - monomer , ion - monomer and ion - ion ) which is if or 0 , otherwise .this potential precludes any two particles to be closer than a distance . 
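for concreteness , a minimal python transcription of the pair interactions just described is given below , together with the standard metropolis acceptance rule used in the sampling discussed next ; the wca shift , the fene form with a nonzero equilibrium distance , and the parameter values are the usual textbook choices and should be read as placeholders rather than the exact parameters of the simulations .

```python
import numpy as np

def wca(r, eps=1.0, sigma=1.0):
    """Truncated and shifted repulsive Lennard-Jones (WCA): purely repulsive,
    zero beyond the cutoff 2^(1/6)*sigma where it vanishes continuously."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    u = 4.0 * eps * (sr6 ** 2 - sr6) + eps
    return np.where(r < rc, u, 0.0)

def fene(r, k=7.0, r0=1.0, R=0.5):
    """Finitely extensible nonlinear elastic bond with equilibrium distance r0
    and maximum extension R (only defined for r0 - R < r < r0 + R)."""
    x = (r - r0) / R
    return -0.5 * k * R ** 2 * np.log(1.0 - x ** 2)

def coulomb(r, qi, qj, lb=1.0):
    """Electrostatic pair energy in units of kT; lb is the Bjerrum length
    expressed in the same reduced length unit as r."""
    return lb * qi * qj / r

def metropolis_accept(delta_e, rng):
    """Standard Metropolis criterion: accept with probability min(1, exp(-dE/kT)),
    with energies measured in units of kT."""
    return delta_e <= 0.0 or rng.random() < np.exp(-delta_e)
```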
in order to obtain the equilibrium properties of the system we perform monte carlo ( mc ) simulations with standard metropolis acceptance criteria , where the transition probability for a trial move with energy change $\delta e$ is the usual metropolis form $\min\left[1 , e^{-\delta e / k_b t}\right]$ . | it is well - known that the swelling behavior of ionic nanogels depends on their cross - link density ; however , it is unclear how different topologies should affect the response of the polyelectrolyte network . here we perform monte carlo simulations to obtain the equilibrium properties of ionic nanogels as a function of salt concentration and the fraction of ionizable groups in a polyelectrolyte network formed by cross - links of functionality . our results indicate that networks with cross - links of low connectivity result in nanogel particles with higher swelling ratios . we also confirm a de - swelling effect of salt on nanogel particles .
consider the task of solving the linear system : where is a unitary matrix , the identity matrix , the hermitian wilson operator , and the bare fermion mass .the overlap operator is non - hermitian . for such operators gmres ( generalised minimal residual ) and fom ( full orthogonalisation method )are known to be the fastest .it is shown that when the norm - minimising process of gmres is converging rapidly , the residual norms in the corresponding galerkin process of fom exhibit similar behaviour . but they are based on long recurrences and thus require to store a large number of vectors of the size of matrix columns . however , exploiting the fact that the overlap operator is a shifted unitary matrix one can construct a gmres type algorithm with short recurrences . similarly , a short recurrences algorithm can be obtained from fom .the method is based on an observation of rutishauser that for upper hessenberg unitary matrices one can write , where and are lower and upper bidiagonal matrices . applying this decomposition for the arnoldi iteration : one obtains an algorithm which constructs arnoldi vectors by short recurrences : projecting the linear system ( [ lin_sys ] ) onto the krylov subspace one gets : which can be equivalently written as : note that the matrix on the left hand side is tridiagonal .it can be shown that one can solve this system and therefore the original system using short recurrences .the resulting algorithm is called the shifted unitary orthogonal method ( suom ) and is given below : stop if tol note that in an actual implementation one can store and as separate vectors , which can be used in the subsequent iteration to compute .therefore only one multiplication by is needed at each step .a straightforward application of multigrid algorithms is hopeless in the presence of non - smooth gauge fields .however , the situation is different for the 5-dimensional formulation of chiral fermions where there are no gauge connections along the fifth dimension .here , i will limit my discussion in the easiest case which consists of two grids : the `` fine '' grid , which is the continuum along the fifth coordinate and a coarse grid , which is the lattice discretisation of the `` fine '' grid .i define chiral fermions on the coarse grid using truncated overlap fermions .the corresponding 5-dimensional matrix in blocked form is given by : where .let be the above matrix but with bare quark mass and the permutation matrix : it can be shown that the following result hold : [ proposition ] let be the linear system defined on the 5-dimensional lattice with and .then is the solution of the linear system , where as .this result lends itself to a special two - grid algorithm .indeed , is the ( fifth euclidean ) coordinate of interest since it contains the information about the 4-dimensional physics . let let solve until stop if tol one way of exploiting this is to use `` decimation '' over the fifth coordinate in order to get the 5d - vector . using proposition [ proposition ] one can evaluate directly the first 4d - component of by , being an approximate solution .the rest can be padded with zero 4d - vectors .the second step is to solve the problem on the coarse grid .finally , one can extract the 4d - solution on this grid and correct the `` fine '' grid solution by . 
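algorithm [ two_grid_algor ] , including the restart described next , has the simple structure sketched below ; the three callbacks stand for the action of the overlap ( `` fine '' ) operator , the solve of the 5-dimensional truncated - overlap system built by decimation of the residual , and the extraction of the first 4d component of the coarse solution as in the proposition above , and are left abstract here .

```python
import numpy as np

def two_grid_overlap_solve(apply_D, coarse_solve, extract_4d, b, x0=None,
                           tol=1e-8, max_cycles=50):
    """Restarted two-grid correction for D x = b (structural sketch).

    apply_D(x)      -> action of the overlap ("fine") operator on a 4d vector
    coarse_solve(r) -> solve the 5d truncated-overlap system whose right-hand side
                       is the 4d residual r placed in the first 5d component and
                       padded with zero 4d vectors ("decimation")
    extract_4d(y5)  -> first 4d component of the coarse solution, mapped back to
                       the fine level
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    bnorm = np.linalg.norm(b)
    for _ in range(max_cycles):
        r = b - apply_D(x)                    # fine-level residual
        if np.linalg.norm(r) <= tol * bnorm:
            break
        e = extract_4d(coarse_solve(r))       # coarse-grid correction
        x = x + e                             # correct and restart the cycle
    return x
```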
in the second cycle one has to repeat the same decimation method , since the `` fine '' 5d - operator is not available . hence , the whole scheme is a restarted two - grid algorithm , which is given here as algorithm [ two_grid_algor ] . fig . 1 shows the convergence of various algorithms as a function of the number of wilson matrix - vector multiplications on a fixed gauge background on a lattice at . the convergence is measured using the norm of the residual error . for the overlap matrix - vector multiplication we use the double pass lanczos algorithm ( without projection of the small eigenspace of ) as described in . together with the algorithms described in the previous sections , fig . 1 shows the performance of conjugate residuals ( cr ) , conjugate gradients on normal equations ( cgne ) and cg - chi . the latter is the cgne which simultaneously solves the decoupled chiral systems appearing in the matrix . one can observe a gain over cgne , which may be explained by the reduced number of eigenvalues in each chiral sector . however , this gain is no more than . on the other hand , suom and cr perform rather similarly , with suom being slightly faster on this scale . the gain over cgne is about a factor of two . the two - grid algorithm performs best , with a gain of at least a factor of 6 over suom and more than an order of magnitude over cgne . this situation repeats itself for a different gauge configuration , which is not shown here for lack of space . however , if the projection of small eigenvalues is used , the gain over suom / cr should be smaller , since the two - grid algorithm is much less intensive in the application of the overlap operator . it is exactly the purpose of this comparison to make this feature of the two - grid algorithm clear . finally , it is ( not ) surprising that suom and cr perform similarly : cr can be shown to be an efficient method for normal matrices . since it is easier to implement , cr is more appealing than other krylov solvers . | in this paper i describe a new optimal krylov subspace solver for shifted unitary matrices called the shifted unitary orthogonal method ( suom ) . this algorithm is used as a benchmark against any improvement like the two - grid algorithm . i use the latter to show that the overlap operator can be inverted by successive inversions of the truncated overlap operator . this strategy results in large gains compared to suom . it is well - known that overlap fermions lead to much more expensive computations than standard fermions , i.e. wilson or kogut - susskind fermions . this is obvious since for every application of the overlap operator an extra linear system solve is needed . for the time being , it seems that to get chiral symmetry at finite lattice spacing one should wait for a petaflops computer to be built . however , algorithmic research is far from exhausted . in this paper i give an example that this is the case if one uses the two - grid algorithm . before i do this , i introduce briefly an optimal krylov subspace solver for shifted unitary matrices .
the volume operator in loop quantum gravity ( lqg ) plays a central role in the quantum dynamics of the theory . without it ,it is not possible to define the hamiltonian constraint operators , the master constraint operators , or the physical hamiltonian operators . the same applies to truncated models of lqg such as loop quantum cosmology ( lqc ) which is believed to describe well the homogeneous sector of the theory . in lqcthe quantum evolution of operators that correspond to classically - singular dirac observables typically remains finite .this can be traced back to the techniques used to quantize inverse powers of the local volume that enter into the expression for the triad , co - triad and other types of coupling between geometry and geometry , or geometry and matter .specifically , such derived operators arise as commutators between holonomy operators and powers of the volume operator . in view of its importance , it is of considerable interest to verify whether the classical limit of the lqg volume operator coincides with the classical volume . by thiswe mean that ( i ) the expectation value of the volume operator with respect to suitable semiclassical states which are peaked at a given point in phase space , coincides with the value of the classical volume at that phase space point , up to small corrections ; and ( ii ) its fluctuations are small .it should be remarked that there are actually two versions of the volume operator that have been discussed in the literature .these come from inequivalent regularisations of the products of operator - valued distributions that appear at intermediate stages .however , only the operator in survives the consistency test of , namely , writing the volume in terms of triads which then are quantised using the commutator mentioned above , gives the same operator up to -corrections as that obtained from direct quantisation .this consistency check is important as otherwise we could not trust the triad and co - triad quantisations that enter the quantum dynamics .a semiclassical analysis of the volume operator has not yet been carried out , although , in principle , suitable semiclassical ( even coherent ) states for lqg are available .this is because the spectral decomposition ( projection - valued measure ) of the volume operator can not be computed exactly in closed form .however , this is needed for exact , practical calculations .more precisely , the volume operator is the fourth root of a positive operator , , whose matrix elements can be computed in closed form but which can not be diagonalised analytically . in more detail , the volume operator has a discrete ( that is , pure - point ) spectrum and it attains a block - diagonal form where the blocks are labelled by the graphs and spin quantum numbers ( labelling the edges of the graph ) of spin - network functions ( snwf ) .snwf form a convenient basis of the lqg hilbert space which carries a unitary representation of the spatial diffeomorphism group and the poisson of the elementary flux and holonomy variables .the blocks turn out to be finite - dimensional matrices whose matrix elements can be expressed in terms of polynomials of 6j symbols . 
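since the volume operator is block - diagonal and each block of the positive operator of which it is the fourth root is a finite hermitian matrix , taking that fourth root requires the spectral decomposition of the block ; the toy snippet below may clarify the mechanics ( the matrix used is a random positive stand - in , not an actual volume - operator block ) .

```python
import numpy as np

def fourth_root_expectation(Q, psi):
    """Expectation value <psi| Q^(1/4) |psi> for a positive semi-definite
    hermitian block Q, computed via its spectral decomposition."""
    w, U = np.linalg.eigh(Q)                 # Q = U diag(w) U^dagger
    w = np.clip(w, 0.0, None)                # guard against tiny negative round-off
    V = (U * w ** 0.25) @ U.conj().T         # V = Q^(1/4)
    psi = psi / np.linalg.norm(psi)
    return float(np.real(psi.conj() @ V @ psi))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    Q = A @ A.conj().T                       # toy positive block
    psi = rng.normal(size=6) + 1j * rng.normal(size=6)
    print(fourth_root_expectation(Q, psi))
```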
fortunately , these complicated quantities can be avoided by using a telescopic summation technique related to the elliot - biedenharn identity , so that a manageable closed expression results .however , the size of these matrices grows exponentially with increasing spin quantum numbers and , since the expression for coherent states is a coherent superposition of snwf s with arbitrarily large spin quantum numbers , a numerical computation of the expectation value using the numerical diagonalisation techniques , that are currently being developed , is still a long way off .one way forward is to use the semiclassical perturbation theory developed in , and applied already in .the basic idea is quite simple . in practical calculationsone needs the expectation value of where is a rational number in the range . in order to attainthat , one has to introduce the ` perturbation operator ' , , where the expectation value of the positive operator is exactly computable .notice that is bounded from below by .then trivially ^q> ] .however , and /<q>^2 ] where is a positive number .likewise , using the availability of the chart , we take the edges of the graph to be embeddings of straight lines in ( with respect to the euclidean background metric available there ) , that is , where is a vector in and defines the beginning point of the edge .after these preparations , we can now analyse ( [ 4.17 ] ) and ( [ 4.18 ] ) further . recall that [ 4.19 ] e^e_j=_,i dt e_j(p^i_t ) ( p^i_t , e ) and [ 4.20 ] t^_ee=_i dt ( p^i_t , e ) ( p^i_t , e ) by the assumption about the graphs made above , the signed intersection number takes at most the numbers and independently of , so that for certain which takesthe value if the orientation of agrees with that of the leaves of the foliation , if it disagrees , and if it lies inside a leaf . if we assume that the electric field is slowly varying at the scale of the graph ( and hence at the scale of the plaquettes as well ) then we may write [ 4.21 ] e^e_j _ i t^_e e ^i_ee_j(p^i_v ) where and is the vertex at which is adjacent and which is under consideration in .it follows that ( [ 4.18 ] ) can be written as [ 4.22 ] y^e_j=()^2_e,k ^e e_jk _ i t^_e e ^i_e e_k(p^i_v ) = ( ) ^2_e ^e e_i ^i_e e_j(p^i_v ) where we have used .now , by construction , with off - diagonal and with small entries [ 4.23 ] b_ee= which are of the order of since two distinct edges will typically only remain in the same stack for a parameter length , while the parameter length of an edge is . now notice that under the assumptions we have made , we have if are not adjacent .define to be the subset of edges which are adjacent to . then [4.24 ] ||b x||^2 & = & _ e_e , e^s_e x_e b_e e b_e e^ x_e^ + & & [ _ e , e b_e e^2 ] _e ^2 + & & ( ) ^2 _ e ^2 + & & ( ) ^2 m _ e _es_e x_e^2 + & = & ( ) ^2 m _x_e^2 _ e _s_e(e ) + & & ( ) ^2 m^2 ||x||^2 here , in the second step we estimated the matrix elements of from above ; in the third step we applied the schwarz inequality ; in the fourth step we estimated , where is the maximal valence of a vertex in ; and in the sixth step we exploited the symmetry [ 4.25 ] _s_e(e ) = \ { it follows that for , is bounded from above by unity .therefore , the geometric series converges in norm .hence we are able to consider the effects of a non - diagonal edge metric up to arbitrary order , , in . 
herewe will consider only and write .but before considering corrections from the off - diagonal nature of notice that , to zeroth - order in , equation ( [ 4.22 ] ) becomes simply [ 4.25a ] y^e_j = ( ) ^2 _ i ^i_e e_j(p^i_v ) inserting ( [ 4.25a ] ) into ( [ 4.17 ] ) we find [ 4.26 ] < > _ z , ( ) ^3 where [ 4.27 ] ( p_v)=_ijk^abc _ [ 0,1)^2 d^2u n_at_i(v)^_i(v ) i(u ) _ [ 0,1)^2 d^2u n_b t_j(v)^_j(v ) i(u ) _ [ 0,1)^2 d^2u n_c t_k(v)^_k(v ) i(u ) on recalling that with for , we find [ 4.28 ] ( p_v)l^6 _ x(s)=v hence ( [ 4.26 ] ) becomes [ 4.29 ] < > _ z , ( ) ^3 |_x(s)=v| we can draw an important conclusion from expression ( [ 4.29 ] ) .namely , the first _ three _ factors approximate the classical volume as determined by of an embedded cube with parameter volume . when we sum ( [ 4.29 ] ) over the vertices of , which have a parameter distance , , from each other where by assumption , then the volume expectation value only has a chance to approximate the classical volume when the graph is such that , or .this could never have been achieved for and explains why we had to rescale the labels of the coherent states by while keeping the classicality parameter at .see our companion paper for a detailed discussion .there we also explained why one must have actually equal to and not just of the same order : while one could use this in order to favour other valences of the volume operator , the expectation value of other geometrical operators such as area and flux would be incorrect .assuming we write ( [ 4.29 ] ) as [ 4.30 ] <> _ z,=:v_v(e ) g_,v thereby introducing the graph geometry factor .it does not carry any information about the phase space , only about the embedding of the graph relative to the leaves of the three - foliations . from the fact that ( [ 4.30 ] ) reproduces the volume of a cube up to a factor, we may already anticipate that the geometry factor will be close to unity for , at most , a cubic graph . whether this holds for an arbitrary orientation of the graph with respect to the stack family will occupy a large part of the analysis which follows .we start by investigating the behaviour of the graph geometry factor under diffeomorphisms , , of , that is , under while the linearly - independent families of stacks are left untouched .this will answer the question of how much the geometry factor depends on the relative orientation of the graph with respect to the stacks .in fact , the orientation factor is invariant under diffeomorphisms of the spatial manifold .the signature factor [ 4.31 ] _ e , ee^()=_ijk ^i_e ^j_e ^k_e^ is obviously invariant under any diffeomorphism that preserves the foliations , i.e. 
which map leaves onto leaves , because [ 4.32 ] ^i_e=_e dx^a _ abc _ l_it dy^bdy^c ( x , y ) where is any leaf in which intersects transversely .since we consider graphs whose edges are embedded lines in with the same embedding that defines the stacks , it follows that the geometry factor is invariant under any embedded global translations in .next , since global rescaling in preserves the foliations and the topological invariant ( [ 4.32 ] ) , the geometry factor is also invariant under embedded global rescalings of .finally , any embedded global rotations of that preserves all the orientation factors will leave the geometry factors invariant .since the orientation factors only take the values ( depending on whether an edge agrees , disagrees with the orientation of the leaves , or lies within a leaf ) , there will be a vast range of euler angles for which this condition is satisfied , if the graph is an embedded , regular lattice of constant valence .hence , in order to check whether the geometry factor is rotationally invariant under any rotation we need only worry about those rotations which lead to changes in the .likewise , if we rotate a graph which is dual to a polyhedronal complex , we expect that the expectation value remains invariant as long as the graph remains dual to the complex . fortunately , using the explicit formulae derived for the edges and vertices for - , - , -valent graphs displayed in we can calculate the for each edge . intuitively , it is clear , that whenever many of the change from to , we can expect a drastic change of the expectation value . however , one has to take into account the combined effect of these changes , and this is what makes rotational invariance possible . as a first stepwe determine the action of a rotation on the sign factors . in what follows we discuss the cases that show a drastic change in the value of caused by a change in the values of .to carry out this calculation we will perform a rotation of each of the three different types of lattice analysed so far : namely , the - , - and -valent lattices .these rotations will be parameterised by euler angles and will be centred at a particular vertex of the lattice , for example .the effects of a rotation will depend on the distance of the vertices from the centre of the rotation .in fact , the position of each vertex in the lattice after rotation will depend on both the distance from the centre of the rotation and the euler angles used in the rotation .fortunately , the values of the terms will not depend on the former but only on the latter .this is easy to see since the value of can be either 1 , -1 or 0 depending on whether the edge is outgoing , ingoing , or lies on the plaquette in the direction .thus it will only depend on the angle the edge makes with the perpendicular to the plaquette in any given direction , i.e. it will depend on the angles the edge makes with respect to a coordinate system centred at the vertex at which the edge is incident .clearly , only the values of the euler angles of the rotation will affect the angles each edge has with respect to the vertex at which it is incident .in particular , since the graph we are using is regular , following the rotation , all edges which were parallel to each other will remain such , and thus will have the same angles with respect to the vertex at which they are incident .this implies that in order to compute the values of the terms , we can consider each vertex separately and apply the same rotation to each vertex individually . 
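to make the preceding discussion concrete , the snippet below computes , for the edge directions at a cubic ( 6-valent ) vertex , the sign factors ( + 1 , -1 or 0 according to whether the rotated edge crosses the leaves of a given foliation with positive or negative orientation , or lies inside a leaf ) and the orientation factor of eq . ( 4.31 ) for a triple of edges ; the euler - angle convention and the numerical tolerance for ` lying inside a leaf ' are choices made here purely for illustration .

```python
import numpy as np

def rotation(alpha, beta, gamma):
    """Z-Y-Z Euler-angle rotation matrix (one possible convention)."""
    def rz(t):
        return np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])
    def ry(t):
        return np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
    return rz(alpha) @ ry(beta) @ rz(gamma)

def sign_factors(edge, tol=1e-12):
    """sigma^i_e: sign of the i-th component of the (rotated) straight edge direction,
    i.e. +1/-1 if the edge crosses the leaves x^i = const with positive/negative
    orientation, 0 if it lies inside a leaf."""
    return np.array([0 if abs(c) < tol else int(np.sign(c)) for c in edge])

def orientation_factor(e1, e2, e3):
    """epsilon_ijk sigma^i_{e1} sigma^j_{e2} sigma^k_{e3}, cf. eq. (4.31)."""
    s1, s2, s3 = sign_factors(e1), sign_factors(e2), sign_factors(e3)
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0
    return float(np.einsum("ijk,i,j,k", eps, s1, s2, s3))

if __name__ == "__main__":
    R = rotation(0.3, 0.2, 0.1)
    edges = [R @ v for v in np.eye(3)]       # three outgoing edges of a cubic vertex
    print([sign_factors(e) for e in edges], orientation_factor(*edges))
```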
on the other hand, the distance from the centre of the rotation affects the position of each vertex with respect to the plaquette structure , and thereby affects both the values of the terms and the number of them that are different from zero .these effects can be easily understood with the aid of the two - dimensional diagram ( figure [ fig1 ] ) .it is clear that , for any two parallel edges , the angle each of them has with respect to the vertex at which they are incident , is independent of the distance of the edge from the centre of rotation . on the other hand , the values of the will depend on both the rotation and the distance of the centre of rotation , since the position of the rotated vertex with respect to the plaquette depends on both these parameters .therefore , we can tentatively assume that two different geometric factors will be involved in the computation of the volume operator : 1 . , which indicates how the terms are affected by rotation .this geometric factor affects all orders of approximation of the expectation value of the volume operator .2 . , which indicates the effect of rotation on the terms .this term affects only the first- and higher - order approximations of the expectation value of the volume operator , not the zeroth - order . inwhat follows we will analyse the geometric term : i.e. , we will analyse the changes in the values of the due to a rotation applied at each vertex independently .we will do this for the 4- , 6- and 8-valent graphs separately . the geometric factor will be analysed in subsequent sections .as we will not see , our calculations show that for all 4- , 6- and 8-valent graphs , the rotations that produce drastic change in the values of have measure zero in since they occur for specific euler angles rather than for a range of them .let us start with the 6-valent graph ( figure [ fig6 ] ) .in order to compute the change in the values of the individual , we will divide into eight small sub - cubes the cube formed by the intersection of the plaquettes in the three directions and containing the vertex we are analysing ( ) .it is then easy to see that for each edge , the corresponding value of depends on the sub - cube in which it lies . in particular, we have the following table for the values of . where , , , , , , and the quantities , , are the - , - , -coordinates of the vertex . in all the above formulae the term comes from the fact that for all edges . c. rovelli .loop quantum gravity , _ living rev .* 1 * ( 1998 ) , 1 .[ gr - qc/9710008 ] + a. ashtekar and j. lewandowski .background independent quantum gravity : a status report . _* 21 * ( 2004 ) , r53 .[ gr - qc/0404018 ] + t. thiemann .lectures on loop quantum gravity .notes phys . _* 631 * ( 2003 ) , 41 - 135 .[ gr - qc/0210094 ] t. thiemann .anomaly - free formulation of non - perturbative , four - dimensional lorentzian quantum gravity ._ physics letters _ * b380 * ( 1996 ) , 257 - 264 .[ gr - qc/9606088 ] + t. thiemann .quantum spin dynamics ( qsd ) .quantum grav ._ * 15 * ( 1998 ) , 839 - 73 .[ gr - qc/9606089 ] + t. thiemann .quantum spin dynamics ( qsd ) : ii .the kernel of the wheeler - dewitt constraint operator . _ class .quantum grav . _ * 15 * ( 1998 ) , 875 - 905 .[ gr - qc/9606090 ] + t. thiemann .quantum spin dynamics ( qsd ) : iii . quantum constraint algebra and physical scalar product in quantum general relativity . _ class .quantum grav . _ * 15 * ( 1998 ) , 1207 - 1247 .[ gr - qc/9705017 ] + t. 
thiemann .quantum spin dynamics ( qsd ) : iv .2 + 1 euclidean quantum gravity as a model to test 3 + 1 lorentzian quantum gravity .quantum grav ._ * 15 * ( 1998 ) , 1249 - 1280 .[ gr - qc/9705018 ] + t. thiemann .quantum spin dynamics ( qsd ) : v. quantum gravity as the natural regulator of the hamiltonian constraint of matter quantum field theories . _ class .quantum grav . _* 15 * ( 1998 ) , 1281 - 1314 .[ gr - qc/9705019 ] + t. thiemann .quantum spin dynamics ( qsd ) : vi . quantum poincar algebra and a quantum positivity of energy theorem for canonical quantum gravity . _ class .quantum grav . _ * 15 * ( 1998 ) , 1463 - 1485. [ gr - qc/9705020 ] + t. thiemann .kinematical hilbert spaces for fermionic and higgs quantum field theories .quantum grav . _ * 15 * ( 1998 ) , 1487 - 1512 .[ gr - qc/9705021 ] t. thiemann .the phoenix project : master constraint programme for loop quantum gravity . _ class .* 23 * ( 2006 ) , 2211 - 2248 .[ gr - qc/0305080 ] + t. thiemann .quantum spin dynamics ( qsd ) : viii . the master constraint . _ class .* 23 * ( 2006 ) , 2249 - 2266 .[ gr - qc/0510011 ] k. giesel and t. thiemann .consistency check on volume and triad operator quantisation in loop quantum gravity .i. _ class .* 23 * ( 2006 ) , 5667 - 5691 .[ gr - qc/0507036 ] + k. giesel and t. thiemann .consistency check on volume and triad operator quantisation in loop quantum gravity .* 23 * ( 2006 ) , 5693 - 5771 .[ gr - qc/0507037 ] t. thiemann and o. winkler .gauge field theory coherent states ( gcs ) : ii .peakedness properties .* 18 * ( 2001 ) , 2561 - 2636 .[ hep - th/0005237 ] + t. thiemann and o. winkler .gauge field theory coherent states ( gcs ) : iii .ehrenfest theorems . _ class .* 18 * ( 2001 ) , 4629 - 4681 .[ hep - th/0005234 ] + t. thiemann and o. winkler .gauge field theory coherent states ( gcs ) : iv .infinite tensor product and thermodynamic limit .* 18 * ( 2001 ) , 4997 - 5033 .[ hep - th/0005235 ] + h. sahlmann , t. thiemann and o. winkler .coherent states for canonical quantum general relativity and the infinite tensor product extension .phys . _ * b606 * ( 2001 ) , 401 - 440 .[ gr - qc/0102038 ] a. ashtekar and c.j .representations of the holonomy algebras of gravity and non - abelian gauge theories . _ class .grav . _ * 9 * ( 1992 ) , 1433 .[ hep - th/9202053 ] + a. ashtekar and j. lewandowski .representation theory of analytic holonomy algebras . in_ knots and quantum gravity _ , j. baez ( ed . ) , ( oxford university press , oxford 1994 ) . [ gr - qc/9311010 ] j. lewandowski , a. okolow , h. sahlmann and t. thiemann .uniqueness of diffeomorphism invariant states on holonomy flux algebras .* 267 * ( 2006 ) , 703 - 733 .[ gr - qc/0504147 ] + c. fleischhack .representations of the weyl algebra in quantum geometry .[ math - ph/0407006 ] j. brunnemann and d. rideout .spectral analysis of the volume operator in loop quantum gravity .[ gr - qc/0612147 ] + j. brunnemann and d. rideout .properties of the volume operator in loop quantum gravity . i. results . _ class . quant .* 25 * ( 2008 ) , 065001 .[ arxiv:0706.0469 [ gr - qc ] ] + j. brunnemann and d. rideout .properties of the volume operator in loop quantum gravity ii : detailed presentation .[ arxiv:0706.0382 ] h. sahlmann and t. thiemann . towards the qft on curved spacetime limit of qgr .1 . a general scheme .* 23 * ( 2006 ) , 867 - 908 .[ gr - qc/0207030 ] + h. sahlmann and t. thiemann . towards the qft on curved spacetime limit of qgr .2 . a concrete implementation . _ class . 
quant .* 23 * ( 2006 ) , 909 - 954 .[ gr - qc/0207031 ] k. giesel and t. thiemann .algebraic quantum gravity ( aqg ) .i. conceptual setup . _ class .* 24 * ( 2007 ) , 2465 - 2498 .[ gr - qc/0607099 ] + k. giesel and t. thiemann .algebraic quantum gravity ( aqg ) .ii . semiclassical analysis . _ class .* 24 * ( 2007 ) , 2499 - 2564 .[ gr - qc/0607100 ] m. varadarajan .fock representations from u(1 ) holonomy algebras ._ * d61 * ( 2000 ) , 104001 .[ gr - qc/0001050 ] + m. varadarajan .photons from quantised electric flux representations ._ * d64 * ( 2001 ) , 104003 .[ gr - qc/0104051 ] + m. varadarajan .gravitons from a loop representation of linearised gravity .* d66 * ( 2002 ) , 024017 .[ gr - qc/0204067 ] + m. varadarajan .the graviton vacuum as a distributional state in kinematic loop quantum gravity . _ class .* 22 * ( 2005 ) , 1207 - 1238 .[ gr - qc/0410120 ] e. livine and s. speziale .a new spinfoam vertex for quantum gravity .* d76 * ( 2007 ) , 084028 . [ arxiv:0705.0674 [ gr - qc ] ] + j. engle , r. pereira and c. rovelli . the loop - quantum - gravity vertex - amplitude . _ phys .* 99 * ( 2007 ) , 161301 .[ arxiv:0705.2388 [ gr - qc ] ] + e. livine and s. speziale .consistently solving the simplicity constraints for spinfoam quantum gravity .[ arxiv:0708.1915 [ gr - qc ] ] + j. engle , r. pereira and c. rovelli .flipped spinfoam vertex and loop gravity .* b798 * ( 2008 ) , 251 - 290 .[ arxiv:0708.1236 [ gr - qc ] ] + l. freidel and k. krasnov . a new spin foam model for 4d gravity .[ e - print : arxiv:0708.1595 [ gr - qc ] ] + j. engle , e. livine , r. pereira and c. rovelli .lqg vertex with finite immirzi parameter .phys . _ * b799 * ( 2008 ) , 136 - 149 . [ arxiv:0711.0146 [ gr - qc ] ] j. w. barrett and l. crane .relativistic spin networks and quantum gravity . _ j. math .* 39 * ( 1998 ) , 3296 - 3302 .[ gr - qc/9709028 ] + j. w. barrett and l. crane . a lorentzian signature model for quantum general relativity .* 17 * ( 2000 ) , 3101 - 3118 .[ gr - qc/9904025 ] h. sahlmann and t. thiemann .irreducibility of the ashtekar lewandowski representation .* 23 * ( 2006 ) , 4453 - 4472 .[ gr - qc/0303074 ] + c. fleischhack .irreducibility of the weyl algebra in loop quantum gravity .* 97 * ( 2006 ) , 061302 .e. alesci and c. rovelli .the complete lqg propagator .i. difficulties with the barrett - crane vertex .* d76 * ( 2007 ) , 104012 . [ arxiv:0708.0883 [ gr - qc ] ] + e. alesci and c. rovelli . the complete lqg propagator .asymptotic behavior of the vertex ._ phys.rev ._ * d77 * ( 2008 ) , 044024 .[ arxiv:0711.1284 [ gr - qc ] ] + v. bonzom , e. livine , m. smerlak and s. speziale + towards the graviton from spinfoams : the complete perturbative expansion of the 3d toy model .[ arxiv:0802.3983 [ gr - qc ] ] a. ashtekar , j. lewandowski , d. marolf , j. mouro and t. thiemann .quantisation of diffeomorphism invariant theories of connections with local degrees of freedom .* 36 * ( 1995 ) , 6456 - 6493 .[ gr - qc/9504018 ] b. bahr and t. thiemann .gauge - invariant coherent states for loop quantum gravity . i. abelian gauge groups .[ arxiv:0709.4619 [ gr - qc ] ] + b. bahr and t. thiemann .gauge - invariant coherent states for loop quantum gravity .non - abelian gauge groups .[ arxiv:0709.4636 [ gr - qc ] ] j. velhinho .a groupoid approach to spaces of generalised connections ._ j. geom .* 41 * ( 2002 ) , 166 - 180 .[ hep - th/0011200 ] + j. velhinho . on the structure of the space of generalised connections .phys . _ * 1 * ( 2004 ) , 311 - 334 .[ math - ph/0402060 ] + b. bahr and t. 
thiemann , automorphisms in loop quantum gravity .[ arxiv:0711.0373 [ gr - qc ] ] | we continue the semiclassical analysis of the loop quantum gravity ( lqg ) volume operator that was started in the companion paper . in the first paper we prepared the technical tools , in particular the use of complexifier coherent states that use squares of flux operators as the complexifier . in this paper , the complexifier is chosen for the first time to involve squares of area operators . both cases use coherent states that depend on a graph . however , the basic difference between the two choices of complexifier is that in the first case the set of surfaces involved is discrete , while , in the second it is continuous . this raises the important question of whether the second set of states has improved invariance properties with respect to relative orientation of the chosen graph in the set of surfaces on which the complexifier depends . in this paper , we examine this question in detail , including a semiclassical analysis . the main result is that we obtain the correct semiclassical properties of the volume operator for i ) artificial rescaling of the coherent state label ; and ii ) particular orientations of the 4- and 6-valent graphs that have measure zero in the group so(3 ) . since such requirements are not present when analysing dual cell complex states , we conclude that coherent states whose complexifiers are squares of area operators are not an appropriate tool with which to analyse the semiclassical properties of the volume operator . moreover , if one intends to go further and sample over graphs in order to obtain embedding independence , then the area complexifier coherent states should be ruled out altogether as semiclassical states . _ i would like to dedicate this paper to my parents elena romani and luciano flori _ |
accessible information about a quantum system is restricted by the noncommutability of observables .the nature of this restriction can be classified essentially into two categories : fluctuations inherent in a quantum system and the error caused by the process of measurement .these aspects of uncertainty constitute the two distinctive features of quantum mechanics .the kennard - robertson uncertainty relation such as describes quantum fluctuations that are independent of the measurement process .according to bell s theorem , this type of quantum fluctuations prohibits us from presupposing any `` element of reality '' behind the probability distributions of observables . the measurement error , on the other hand, is determined by the process of measurement which is characterized by a positive operator - valued measure ( povm ) . in the idealized error - free limit, quantum measurement is described by projection operators which , however , can not always be implemented experimentally .information about more than one observable can be obtained from a single povm in simultaneous measurement of two noncommuting observables and quantum - state tomography .it is known , however , that , in simultaneous measurements , at least one of the observables can not be measured without incurring a measurement error . in this context , various uncertainty relations between the measurement errors of noncommuting observables have been studied . in this paper , we quantify the measurement accuracy and the measurement error of observables in terms of a given povm by introducing accuracy matrix calculated from the povm . based on this accuracy matrix , we derive trade - off relations between the measurement accuracy of two or three observables , these being stronger trade - off relations than those derived in our previous work . they can be interpreted as the uncertainty relations between the measurement errors of noncommuting observables in simultaneous measurements or as the uncertainty relations between the measurement error and back - action of the measurement .in addition , a no - cloning inequality is derived from the trade - off relations . in a rather different context, the maximum - likelihood estimation has been investigated as the standard scheme of quantum state tomography for a finite number of samples .several studies have focused on the efficiency and optimality of the estimation of an unknown quantum state .we show that our characterization of the measurement accuracy can be related to the maximum - likelihood estimation and that the accuracy matrix can be interpreted as an average of the fisher information matrix over the state to be measured .the trade - off relations can also be interpreted as those concerning the accuracy of the estimate of various probability distributions of noncommuting observables .the constitution of this paper is as follows . in sec .ii , we formulate the general quantum measurement of a qubit ( spin-1/2 ) system . in sec .iii , we define the accuracy matrix and investigate its properties . based on this accuracy matrix , we define the accuracy parameter and error parameter in a particular direction of measurement . in sec .iv , we derive the trade - off relations between the accuracy parameters or the error parameters in two or three directions . 
in sec .v , we apply the trade - off relations to specific problems : the uncertainty relations between measurement errors in nonideal joint measurements , the uncertainty relations between the error and back - action , a no - cloning inequality , and quantum state tomography . in sec .vi , we point out a close connection between the accuracy matrix and the fisher information matrix .we conclude this paper in sec .we consider a quantum measurement described by povm ( ) on state of a qubit system , where denotes the outcome of the measurement .povm satisfies , with being the identity operator , and can be parameterized as where represents the pauli matrices .the requirements that the sum of s equals the identity operator and that all of them be nonnegative are met if and only if we can also parameterize density operator as where is the bloch vector satisfying .conversely , for a given , is calculated as . the probability of obtaining the measurement outcome is then given by any observable of the qubit system can be diagonalized as where and are the corresponding eigenvalues , and are projection operators with being a three - dimensional unit vector , and the probability distribution of observable then given by if we are not interested in eigenvalues of the observables but are only concerned with the directions ( ) of the outcome , we can replace with by setting . in the following analysis, we identify observable with the observable and refer to the probability distribution in eq .( [ distribution ] ) as that in the direction of .we discuss three typical examples .* example 1 * ( _ projection measurement _ ) .we can precisely measure by the projection measurement described by the povm . * example 2 * ( _ nonideal measurement _ ). a more general class of measurements can be described by the povm consisting of two positive operators parametrized as where is a unit vector , , , , and .this povm corresponds to a nonideal measurement of the observable .it can be reduced to a projection measurement if and only if and . on the other hand ,the povm is trivial ( i.e. , and ) if and only if ; then we can not obtain any information about .equations ( [ nonideal1 ] ) can be rewritten as where is the transition - probability matrix which satisfies and .note that describes a binary symmetric channel if and only if and .it follows from eq .( [ projection - povm ] ) that any measurement process described by a povm consisting of two positive operators is formally equivalent to a measurement process in which a classical error is added to the projection measurement .the physical origin of this error , however , lies in the quantum - mechanical interaction .* example 3 * ( _ probabilistic measurement _ ) .suppose that a nonideal measurement of is performed with probability ( ) and that is performed with probability .the povm corresponding to this probabilistic measurement consists of four operators : as the number of measured samples increases , this measurement asymptotically approaches the measurements on identically prepared samples which are divided into two groups in the ratio , with being measured for the first group and for the second group .other important examples such as nonideal joint measurements and quantum state tomography are discussed in sec .we will characterize the accuracy of an arbitrary observable in such a manner that it depends only on the process of measurement and not on the measured state .we first define the accuracy matrix . * definition 1 * ( _ accuracy matrix _ ) . 
the accuracy matrix characterizing the measurement accuracy of observables in terms of the povm defined as where denotes the component of the real vector and shows indices of matrix elements of .we introduce the notation with as that is , denotes the transposed vector of and denotes the projection matrix onto direction in whose matrix element is given by . we can then rewrite ( [ am1 ] ) in matrix form as note that is positive semidefinite and hermitian , and can therefore be diagonalized by an orthonormal transformation .the physical meaning and useful properties of the accuracy matrix will be investigated subsequently , and its foundation from an information - theoretic point of view will be established in terms of the maximum - likelihood estimation of the probability distribution of observables in sec . vi .in fact , the accuracy matrix is closely related to fisher information matrix ( [ fisher1 ] ) or ( [ fisher2 ] ) , although physical quantities such as the measurement error can be directly derived from the accuracy matrix without resort to fisher information .noting that , we can obtain the following fundamental inequality which forms the basis of trade - off relations to be discussed later .* theorem 1 .* three eigenvalues of satisfy or equivalently , where we denote the trace of the matrix as to reserve symbol for the trace of a quantum - mechanical matrix .the equality , or , holds if and only if for all .the following corollary follows from the positivity of .* corollary 1 . *the accuracy matrix satisfies the following matrix inequality : where is the identity matrix , and means that all eigenvalues of are nonnegative .the following examples illustrate the physical meaning of the accuracy matrix .we first consider a nonideal quantum measurement ( see also example 2 in sec .ii ) . we can rewrite eq.([nonideal1 ] ) as where and .the accuracy matrix can then be represented by where is the eigenvalue of corresponding to the eigenvector , and is given by we can also write in terms of the transition - probability matrix introduced in eq .( [ transition ] ) as the accuracy parameter satisfies where holds if and only if and ; that is , describes the projection measurement of observable . note that holds in this case . on the other hand, holds if and only if . in this case , holds , and we can not obtain any information about .the nonzero eigenvalue thus characterizes the measurement accuracy of ; the larger , the more information we can extract about from the measurement outcome .these properties can be generalized for an arbitrary povm as shown below .another example is the probabilistic measurement of two noncommuting observables ( see example 3 in sec .we consider the nonideal measurement of whose accuracy matrix is and that of whose accuracy matrix is .the accuracy matrix of the probabilistic measurement is given by this representation suggests that the measurement accuracy concerning is degraded by a factor of compared with the single nonideal measurement of , because we can not observe with probability .a similar argument applies to as well .equation ( [ accuracy1 ] ) shows that is the linear combination of the accuracy matrices of povms measuring and , where the coefficients and give the probabilities of measuring and , respectively .this can be generalized as follows .let us consider three povms : , , and with .the povm describes the probabilistic measurement of with probability and that of with probability . 
according to the definition of the accuracy matrix, we have we thus obtain the following theorem .* theorem 2 * ( _ linearity _ ) : or more symbolically , note that we can take as a scalar measure of the measurement accuracy the largest eigenvalue of the accuracy matrix which we denote as .it satisfies , where holds if and only if describes the projection measurement of a particular direction and if and only if the povm is trivial : , where is the identity operator and denotes the probability of finding outcome , with . we may alternatively choose the scalar measure to be ; it has the linear property from theorem 2 and satisfies , where if and only if the povm is trivial .we next parametrize the measurement accuracy of a particular observable .we denote the support of as ; that is , is the subspace of spanned by all eigenvectors of with nonzero eigenvalues .* definition 2 * ( _ measurement accuracy _ ) .the accuracy parameter in direction is defined as where is assumed to act only on subspace .if , we set .this definition is closely related to the fisher information concerning a particular direction defined in eq .( [ fisher3 ] ) . * definition 3 * ( _ measurement error _ ) the error parameter of the measurement in direction is defined as the parameters and satisfy the following inequalities .* theorem 3 * : the equality , or equivalently , holds if and only if the measurement described by is equivalent to a projection measurement in direction . in this case , the measurement involves no measurement error .the other limit of , or equivalently , holds if and only if . in this case, we can not obtain any information about direction from the measurement .* proof * since commutes with the identity operator , we can show that from inequality ( [ trade - off2 ] ) in corollary 1 .we thus obtain inequalities ( [ inequality1 ] ) and ( [ inequality2 ] ) are the direct consequences of this inequality . the condition that and hold follows from the definitions of and .we next show the condition that and hold . if is the projection measurement in direction , then .conversely , from inequality ( [ trade - off1 ] ) , it can be shown that if and hold , then is the eigenvector corresponding to eigenvalue and that the other two eigenvalues are .it follows from the condition of equality in theorem 1 that for all .therefore , without loss of generality , we can write the povm as where , because and hold .we define two operators as then describes the projection measurement in direction . let , , and be the eigenvectors of , and , , and be the corresponding eigenvalues .it can be shown that according to theorem 1 , we can not simultaneously measure the three directions corresponding to the eigenvectors with the maximum accuracy for all ; the trade - off relation ( [ trade - off1 ] ) or ( [ trade - off1 ] ) is equivalent to this trade - off relation represents the uncertainty relation between the measurement errors in the three directions .we define that the povm is optimal if and only if ; that is , reaches the upper bound of trade - off relation ( [ trade - off1 ] ) , ( [ trade - off1 ] ) , or ( [ trade - off1 ] ) . on the other hand, we define that is symmetric if and only if holds for any and . in this case, is proportional to the identity matrix .we next introduce the concept of `` reconstructive subspace '' and `` reconstructive direction . 
''the following theorem can be directly shown from the definition of the accuracy matrix .* theorem 4 * corresponds to the subspace spanned by the set of basis vectors of the accuracy matrix ( [ am2 ] ) .suppose that we perform the measurement and obtain the probability distribution for each outcome .can we then reconstruct the premeasurement distribution of the system from ?the answer is given by the following theorem .* theorem 5 * ( _ reconstructive subspace and reconstructive direction _ ) we can reconstruct the probability distribution from the measured distribution if and only if .we thus refer to as a reconstructive subspace and to a unit vector in as a reconstructive direction .* proof * we can show from eq .( [ qk ] ) that where is a matrix : let and be the kernel and image of , respectively. it can easily be shown that .let us introduce the equivalence relation `` '' as .we denote the equivalence class of as ] is an element of the quotient space . from the homomorphism theorem ,the quotient map is a linear isomorphism from to . noting that , we obtain = ( m/ \sim ) ^{-1 } \left ( \left ( \begin{array}{c } q_1\\ \vdots \\q_m \end{array }\right ) - \left ( \begin{array}{c } r_1 \\ r_2 \\ \vdots \\ r_m \end{array }\right ) \right).\ ] ] by taking a representative $ ] , we can reconstruct as which gives through eq .( [ distribution]). we consider the nonideal measurement of with the povm in eq .( [ nonideal1 ] ) .the nonideal measurement is characterized with the accuracy matrix . in this case, we can show that .it follows that and for .we next consider the probabilistic measurement of and in example 3 in sec .the probabilistic measurement is characterized with the accuracy matrix of the joint povm given in ( [ accuracy1 ] ) , so the reconstructive subspace is two dimensional : .a straightforward calculation shows that if the classical noise described by a transition - probability matrix is added to the measurement outcomes , the measurement accuracy should deteriorate .this fact can be expressed as a data processing inequality . *theorem 6 * ( _ data processing inequality _ ) .suppose that two povms and are related to each other by where is an transition - probability matrix satisfying .it then follows that where matrix inequality ( [ data - processing ] ) means that all the eigenvalues of are non - negative . * proof * we can parametrize the povms as where . introducing the function , with arbitrary vector as we can show that the hessian of , which is defined as , becomes so that is a concave function. therefore holds for any satisfying and for all .taking , inequality ( [ con2 ] ) becomes or equivalently , noting that , we obtain which implies that since ( [ 52 ] ) holds for arbitrary , we obtain ( [ data - processing]). the following corollary is a direct consequence of the foregoing theorem .* corollary 6 * suppose that is obtained by a coarse graining of : , and , with . 
then holds .inequality ( [ grain ] ) means that the measurement accuracy in any direction is decreased by a coarse graining .we can also express the data processing inequality in terms of the accuracy parameter in an arbitrary direction .* theorem 7 * ( _ data processing inequality _ ) .we consider the povms and satisfying eq .( [ data - processing2 ] ) .suppose that .then , holds , or equivalently , holds for arbitrary .* proof * let , , and be the eigenvalues of , and , , and be the corresponding eigenvectors .similarly , let , , and be the eigenvalues of , and , , and be the corresponding eigenvectors .it follows from the data processing inequality ( [ data - processing ] ) that for , where . applying the concave inequality to , we obtain for arbitrary , we can show that which implies ( [ data - processing3 ] ) and ( [ data - processing4]).we now derive general trade - off relations between the measurement errors of noncommuting observables , which are the main results of this paper .let , , and be the respective eigenvectors of corresponding to the eigenvalues , , and , where ( ) .we define the error parameters as .inequality ( [ trade - off1 ] ) or ( [ trade - off1 ] ) in theorem 1 can be rewritten in terms of the error parameters as considering two eigenvalues alone ( i.e. , ) , we can simplify the trade - off relation : the trade - off relations ( [ trade - off1 ] ) and ( [ trade - off1 ] ) can be generalized to the case of arbitrary directions .we first consider the case of two observables .* theorem 8 * ( _ trade - off relation _ ) .we consider a simultaneous measurement in two directions and ( ) described by the povm .we assume and , and define and ( ) . then the trade - off relation or equivalently, holds . * proof .* we divide the proof into two steps .we consider a situation in which both and lie in a plane spanned by two eigenvectors . without loss of generality ,we choose and as the two eigenvectors , and expand and as where .it can be shown that applying the cauchy - schwarz inequality and , we obtain the equality holds if and only if and . in the case of ( i.e. , the measurement errors are symmetric ) , the equality holds if and only if ._ we next consider a more general case .we choose an orthonormal basis such that both and are in the plane spanned by and .we introduce the notation ( ) .let be a orthogonal matrix which transforms into .it can be shown that note that because is an orthogonal matrix , and that the function is concave .it follows from a concave inequality that combining this with , we obtain therefore this inequality means that , or equivalently , we can derive inequality ( [ trade - off3 ] ) by following the same procedure as in step 1 .we can directly derive inequality ( [ trade - off4 ] ) from ( [ trade - off3]). 
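Since the displayed formulas of this section were lost in extraction, the following numerical sketch reconstructs them from the surrounding prose and should be read as an assumption rather than as the paper's exact expressions: the POVM elements are written E_k = lam_k 1 + a_k . sigma, the accuracy matrix is taken to be chi_ij = sum_k a_{k,i} a_{k,j} / lam_k (the Fisher information matrix with p_k replaced by its Bloch-sphere average lam_k, as indicated in Sec. VI), and the accuracies along the eigen-directions of chi are read off from its eigenvalues. The one-parameter joint POVM below is a hypothetical example chosen for illustration only; it is not claimed to be the optimal POVM of the theorem.

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [sx, sy, sz]
    id2 = np.eye(2, dtype=complex)

    def accuracy_matrix(povm):
        # povm: list of (lam_k, a_k) with E_k = lam_k * 1 + a_k . sigma   (assumed form)
        # chi_ij = sum_k a_k[i] a_k[j] / lam_k   (assumed reconstruction of the definition)
        chi = np.zeros((3, 3))
        for lam, a in povm:
            a = np.asarray(a, dtype=float)
            chi += np.outer(a, a) / lam
        return chi

    def joint_povm(mu, nu):
        # four-outcome nonideal joint measurement of sigma_x and sigma_z;
        # positivity of all elements requires mu**2 + nu**2 <= 1
        return [(0.25, (s * mu / 4, 0.0, t * nu / 4)) for s in (1, -1) for t in (1, -1)]

    for theta in np.linspace(0, np.pi / 2, 5):
        mu, nu = np.cos(theta), np.sin(theta)
        chi = accuracy_matrix(joint_povm(mu, nu))
        ax, az = chi[0, 0], chi[2, 2]   # accuracies along x and z (eigen-directions here)
        # sanity check: every POVM element is positive semidefinite
        ok = all(np.linalg.eigvalsh(lam * id2 + sum(c * p for c, p in zip(a, paulis))).min() > -1e-12
                 for lam, a in joint_povm(mu, nu))
        print(f"alpha_x = {ax:.3f}, alpha_z = {az:.3f}, sum = {ax + az:.3f}, PSD ok = {ok}")

Along this family the two accuracies behave as mu**2 and nu**2 with mu**2 + nu**2 = 1, so improving the accuracy in one direction necessarily degrades it in the other, in qualitative agreement with the trade-off relation just proved.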
we note that the equalities in ( [ trade - off3 ] ) and ( [ trade - off4 ] ) hold in the case that the povm is given by ( ) , where and .the accessible regime for and is illustrated in fig.1 for the case of , , and .note that regime q can be reached only through simultaneous measurement for the case of ., the union of p and q indicate the regime satisfying inequality ( [ trade - off4 ] ) for the case of , and the union of p , q and r indicate the regime satisfying the inequality for the case of .we can only access regime q through simultaneous measurement for the case of .,width=264 ] the trade - off relation can be interpreted as the uncertainty relation between measurement errors .it offers a rigorous representation of bohr s principle of complementarity which dictates `` the mutual exclusion of any two experimental procedures '' when we measure two noncommuting observables simultaneously .the trade - off relation between three observables can be formulated as follows . * theorem 9 * we consider a simultaneous measurement in three directions , , and described by the povm .let us assume that , , and are linearly independent .we set the notation and , where .then the inequality holds . the equality in ( [ trade - off5 ] ) holds if and only if and are orthogonal .* proof * introducing the notation where ( ) , it can be shown that we thus obtain we can show from ; therefore we obtain ( [ trade - off5]).we have discussed in sec .iv trade - off relations ( [ trade - off3 ] ) and ( [ trade - off4 ] ) which describe the uncertainty relations in generalized simultaneous measurements . in this section ,we discuss possible applications of these trade - off relations .we consider a class of simultaneous measurements called nonideal joint measurements , where two observables and are simultaneously measured . since their eigenvalues are , each measurement should give a pair of outcomes ( ) for observables and . the joint povm can be parametrized as the marginal povms ( ) are defined by and can be parametrized by where this simultaneous measurement can be regarded as a nonideal joint measurement if and only if the marginal povm corresponds to the nonideal measurement of , that is , we can define the transition - probability matrices of as in eq .( [ transition ] ) : in this case , we can calculate the accuracy of and by two different methods . one method to calculate the accuracy parameter is based on the joint povm : where .the other is based on the marginal povm : note that .these two accuracy parameters are equivalent as shown by the following theorem .* theorem 10 * : the proof of theorem 10 is given in the appendix .for , we can show that is not an element of ; therefore , . on the other hand, holds by definition .we can thus obtain the following corollary .* corollary 10 * for arbitrary , we next discuss the relationship between the present work and our earlier work for the case of nonideal joint measurement . in ref . , we have introduced the accuracy parameter and error parameter as on the other hand , the accuracy parameter and error parameter in the present paper are given by it can be easily shown that so the trade - off relations derived in the present paper are stronger than our previous ones ( and ) derived in ref .the latter trade - off relations can thus be derived from those obtained in the present paper .we have interpreted trade - off relation ( [ trade - off3 ] ) as the uncertainty relation between the measurement errors . 
in this subsection , we show that it can be interpreted as the uncertainty relation between the measurement error and back - action of the measurement . let us suppose that is a state immediately after the measurement of for the premeasurement state .if the measurement of is described by measurement operators , we can write as . for simplicity , we assume that the number of measurement outcomes is : . to identify the disturbance of caused by the measurement of , we consider how much information about for the premeasurement state remains in post - measurement state .we characterize this by considering how much information on can be obtained by performing the projection measurement of for .note that we can regard the projection measurement of on described by the povm as the measurement of described by the povm , where .the joint operation of measurement followed by measurement can be described by a povm , where we can construct the marginal povms as it is possible to interpret as a measure of the back - action of caused by measurement of . defining the measurement error of as and the back - action of the measurement on as , we can obtain the trade - off relation between the error and back - action based on inequality ( [ trade - off3 ] ) .* theorem 11 * ( _ uncertainty relation between measurement error and back - action _ ) . we note that a non - selective measurement process for can simulate the decoherence caused by the environment . in this case , the trade - off relation ( [ e - b - trade - off ] ) gives a lower bound on the back - action of in the presence of decoherence characterized by .another application of the trade - off relation is the derivation of a no - cloning inequality .we consider a quantum cloning process from qubit system to qubit system described as follows : let be an unknown density operator of system to be cloned , be that of system as a blank reference state , and be that of the environment .the density operator of the total system is initially given by , and becomes after unitary evolution .we define and . we can write and in the operator - sum representation as and . the no - cloning theorem states that there exists no unitary operator that satisfies for arbitrary input state . if is the identity operator , then all information about remains in system , and no information is transferred into system ; and . as another special case , if describes the swapping operation between and ( i.e. , ), then all information about is transferred into with no information left in .intermediate cases between the identity operation and the swapping operation can be quantitatively analyzed by the no - cloning inequality .we derive here another simple no - cloning inequality based on the trade - off relation .we first consider how much information about remains in .we can characterize this by considering how much information about of can be obtained by the measurement of on .we can regard the measurement of on as the measurement described by the povm on , where .we can thus characterize the amount of information that remains in by the accuracy parameter .similarly , we can consider how much information about is transferred into .we characterize this by considering how much information about of can be obtained by the measurement of on .we can regard the measurement of on as the measurement described by the povm on .we thus characterize the amount of information which is transferred from to by the accuracy parameter . 
for mathematical convenience , we use and , instead of and , to derive our no - cloning inequality .the amount of information about which remains in is characterized by averaged over all directions , and the amount of information about which is transferred into is characterized by averaged over all directions .* definition 4 * ( _ cloning parameter _ ) .we define the cloning parameters and as since and , the cloning parameters satisfy the cloning parameters depend only on , , and , and characterize the performance of the cloning machine .the smaller is , the more information about remains in system , while the smaller is , the more information about is transferred into system by the cloning machine .for example , if is the identity operator , then and hold , which implies that all information about is left in system . on the other hand ,if describes the swapping operation between and , then and hold . for intermediate cases between them ,the following no - cloning inequality between and can be derived from trade - off relation ( [ trade - off3 ] ) .* theorem 12 * ( _ no - cloning inequality _ ) : * proof .* it can be shown that there exists a povm , with , satisfying we can also show that and are its marginal povms . from inequality ( [ data - processing5 ] ) and the trade - off relation ( [ trade - off3 ] ) , we obtain averaging ( [ c ] ) over all directions and using we obtain ( [ no - cloning]). inequality ( [ no - cloning ] ) represents the trade - off relation between the information remaining in the original system and the information transferred to the reference system .the impossibility of achieving implies the no - cloning theorem . note that if , then , which implies that if a cloning machine transfers all of the information about into system , then no information can be left in system .we next apply our framework to quantum - state tomography . as shown in sec .vi , characterization of the measurement accuracy by the accuracy matrix is closely related to the asymptotic accuracy of the maximum - likelihood estimation which is considered to be the standard scheme for quantum - state tomography .we first consider the standard strategy to estimate the three components of bloch vector .we divide identically prepared samples into three groups in the ratio , and measure for the first group , for the second group , and for the third group .as increases , this scheme becomes asymptotically described by povm consisting of six operators : we can reconstruct the quantum state by quantum - state tomography and hence reconstruct the probability distributions in all directions .in fact , the accuracy matrix for the standard tomography ( [ tomography1 ] ) is given by which attains the upper bound of the inequality .this expression manifestly shows that the reconstructive subspace of the standard quantum state tomography is and that the accuracy of the tomography is optimal and symmetric in the sense discussed in sec .iii b. 
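A short sketch, again with the assumed parameterization E_k = lam_k 1 + a_k . sigma and chi_ij = sum_k a_{k,i} a_{k,j} / lam_k, illustrates the statement above: for the six-outcome POVM E_{i,s} = (1/6)(1 + s e_i . sigma) the accuracy matrix comes out proportional to the identity with unit trace, consistent with the optimality and symmetry claimed for standard tomography, and a simple linear inversion of the measured frequencies recovers the Bloch vector. The 1/6 weights and the linear-inversion estimator are standard choices consistent with this description rather than quotations from the paper, and the estimator shown is not the maximum-likelihood estimator discussed in Sec. VI.

    import numpy as np

    rng = np.random.default_rng(0)
    axes = np.eye(3)

    # six-outcome POVM of standard tomography: E_{i,s} = (1/6)(1 + s e_i . sigma),
    # i.e. lam_k = 1/6 and a_k = s e_i / 6 in the assumed parameterization
    povm = [(1.0 / 6.0, s * axes[i] / 6.0) for i in range(3) for s in (1, -1)]

    chi = np.zeros((3, 3))
    for lam, a in povm:
        chi += np.outer(a, a) / lam
    print(chi)   # expected: identity / 3

    # simulate N outcomes on a state with Bloch vector r, then invert the frequencies
    r = np.array([0.3, -0.5, 0.6])
    probs = np.array([lam * (1.0 + np.dot(a / lam, r)) for lam, a in povm])  # p_k = lam_k + a_k . r
    counts = rng.multinomial(20000, probs)
    freqs = counts / counts.sum()
    r_hat = np.array([3.0 * (freqs[2 * i] - freqs[2 * i + 1]) for i in range(3)])
    print(r, r_hat)

With 20000 samples the estimate typically agrees with the true Bloch vector to within deviations of order 10^-2, the finite-sample imperfection that the Fisher-information discussion of Sec. VI quantifies.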
we next consider the minimal qubit tomography ._ have shown that the following four measured probabilities are just enough to estimate the bloch vector : where the minimal qubit tomography is also optimal and symmetric , in the sense that the corresponding accuracy matrix is again given by ( [ tomography - accuracy ] ) .note that the povm satisfying can be regarded as tomographically complete .in this section , we point out a close connection between the accuracy matrix and the fisher information .we consider the quantum measurements described by the povm for each of ( ) samples prepared in the same unknown state .note that . our task is to estimate the bloch vector by maximum - likelihood estimation .we denote as the maximum - likelihood estimator of from measurement outcomes .the asymptotic accuracy of maximum - likelihood estimation is characterized by the fisher information . in our situation, the fisher information takes the matrix form given by or equivalently , note that is a positive and hermitian matrix , and that the support of coincides with that of . focusing on a particular direction , we can reduce the fisher information content to the greater the fisher information , the more information we can extract from the measurement outcome . in the case of , the variance of the estimator diverges , so we can not gain any information about the probability distribution in direction .this is the case of not being in any reconstructive direction . replacing by in the fisher information ( [ fisher1 ] ) or ( [ fisher2 ] ) ,we can obtain the accuracy matrix in eq .( [ am1 ] ) or ( [ am2 ] ) .note that is the average of over the entire bloch sphere .the trade - off relations ( [ trade - off3 ] ) , ( [ trade - off4 ] ) , and ( [ trade - off5 ] ) can thus be interpreted as the trade - off relations between the asymptotic accuracy of the maximum - likelihood estimation of the probability distributions of observables .a finite number of samples only gives us imperfect information about the probability distribution of an observable for an unknown state .as we have shown , this imperfection further deteriorates in the case of simultaneous estimation due to the noncommutability of the observables .figure 2 shows the results of simulations for the value of the maximum - likelihood estimators ( red curves ) and ( blue curves ) in the the case of an optimal nonideal joint povm which satisfies the equality in ( [ trade - off3 ] ) or ( [ trade - off4 ] ) with and ( red curves ) and ( blue curves ) for the case of , , and .the abscissa indicates sample number and the ordinate indicates the value of the estimators .the number of simulations is .,width=264 ] let us next consider a simple estimation scheme by dividing prepared samples into two groups in the ratio and performing a nonideal measurement of by the povm with accuracy for the former group , and similarly we perform a nonideal measurement of by the povm with accuracy for the latter group ( see also example 3 in sec .this measurement can asymptotically be described by the povm whose accuracy matrix is from eq.([accuracy2 ] ) in sec .ii , the accuracy parameters in directions and are given by and thus we can therefore conclude that a simultaneous measurement has the advantage over this simple method in that the former can access the domain for , i.e. 
, domain q in fig.1 .projection measurements can not always be implemented experimentally .this raises the question of how accurately we can obtain information about observables from a given imperfect measurement scheme . to quantitatively characterize such measurement accuracy , we have introduced the accuracy matrix , with being the corresponding povm .we have considered the accuracy matrix of the most general class of measurements of a qubit system : generalized simultaneous measurements including nonideal joint measurements and quantum - state tomography . from the outcomes of generalized simultaneous measurements ,we can obtain information about more than one observable . in terms of the accuracy matrix ,we have defined accuracy parameter and error parameter for a direction of corresponding to the observable .these parameters satisfy and .if , or equivalently , the measurement is equivalent to the projection measurement of . on the other hand ,if , or equivalently , we can not obtain any information about the measured system by this measurement .the accuracy matrix and accuracy parameters give us information about observables for which we can reconstruct the probability distribution from the measured distribution , where .in fact , we can reconstruct the probability distribution of observable if and only if , or equivalently . in other words ,the direction is a reconstructive direction if and only if , where the subspace of is spanned by the eigenvectors of corresponding to nonzero eigenvalues .the main results of this paper are trade - off relations ( [ trade - off3 ] ) , ( [ trade - off4 ] ) , and ( [ trade - off5 ] ) between the accuracy parameters and the error parameters .we can interpret them as the uncertainty relations between measurement errors in generalized simultaneous measurements ; the more information we obtain about an observable , the less information we can access about the other noncommuting observable . trade - off relation ( [ trade - off3 ] ) can also be interpreted as the uncertainty relation between the measurement error and back - action of measurement as formulated in inequality ( [ e - b - trade - off ] ) .the new no - cloning inequality in ( [ no - cloning ] ) is derived from the trade - off relations . to derive this ,we have introduced the cloning parameters and , where indicates the system to be cloned and indicates the blank reference system .let be the pre - cloned state of system .after a cloning operation , all the information about remains in system if and only if , and the information about is completely transferred to system if and only if . 
the impossibility of attaining implies the no - cloning theorem .the condition of the equality in our no - cloning inequality ( [ no - cloning ] ) has yet to be understood .we have also applied the trade - off relations to analyze the efficiency of quantum - state tomography .the accuracy matrix of the standard qubit - state tomography or the minimal qubit tomography is given by with being the identity matrix , which implies that the efficiency of quantum - state tomography is optimal and symmetric .we have pointed out a close relationship between the accuracy matrix and the fisher information .we have also shown that the trade - off relations can be interpreted as being those concerning the accuracy of the maximum - likelihood estimators of the probability distributions of noncommuting observables .while we focus on the spin-1/2 system in the present paper , many results can be generalized for higher - dimensional systems .we conclude this paper by outlining such generalization . in the case of a -dimensional system ( ) , the parametrization of the hermitian operator is given by where is a real number , is a -dimensional real vector , and is the elements of the lie algebra of su( ) satisfying and with being the kronecker delta .the necessary and sufficient condition for to be a positive operator is given by and ( ) , where is an - degree polynomial for .the condition for is given by , which is equivalent to for , is given by where is defined as with .we note that if is a rank- projection operator , then . however , the hermitian operator with is not necessarily a positive operator .the accuracy matrix for a -dimensional system assumes the same form as eq .( [ am2 ] ) using parametrization ( [ d - parameterize ] ) . in this case, is a square matrix .moreover , we can define the accuracy parameter and the error parameter according to eqs .( [ a - parameter ] ) and ( [ e - parameter ] ) , respectively . using condition ( [ positivity2 ] ), we can derive trade - off relations ( [ trade - off3 ] ) and ( [ trade - off4 ] ) for a -dimensional system . in this sense ,the trade - off relations serve as universal uncertainty relations holding true for all finite - dimensional systems .however , bounds of trade - off relations ( [ trade - off3 ] ) and ( [ trade - off4 ] ) would not necessarily be able to be reached for , because and are not sufficient for positivity of the povm .moreover , while the accuracy parameter for characterizes the measurement accuracy of spin observables , the accuracy parameter for can not characterize the measurement accuracy of , for example , the spin- observable ; it only characterizes the accuracy of a rank- projection operator .therefore the results of this paper based on can not be applied straightforwardly for .a full investigation of this problem is underway .we prove the case of .for simplicity of notation , we define that , , , and .the accuracy matrix is given by and the accuracy parameter in direction is the marginal povm is and the marginal accuracy matrix is where our objective is to show that . 
for simplicity, we introduce the notation we can then write as where using eq .( [ a1 ] ) , we can calculate the determinant of : ^ 2 .\end{split}\ ] ] on the other hand , the cofactor matrix of is therefore the inverse matrix is given by ^ 2}.\ ] ] noting that ^ 2 \\ & = ( r_1 + r_2 ) \left ( \bigl [ \bm a_1 \cdot ( \bm a_2 \times \bm a_3 ) \bigr]^2 + \bigl [ \bm a_1 \cdot ( \bm a_2 \times \bm a_4 ) \bigr]^2 \right ) \end{split}\ ] ] and we obtain ^ 2 + \bigl [ \bm a_1 \cdot ( \bm a_2 \times \bm a_4 ) \bigr]^2 } { \det \chi ( \textbf{e})}. \end{split}\ ] ] similarly , we can show that ^ 2 + \bigl [ \bm a_3 \cdot ( \bm a_4 \times \bm a_2 ) \bigr]^2 } { \det \chi ( \textbf{e})}. \end{split}\ ] ] let us define ^ 2 + \bigl [ \bm a_1 \cdot ( \bm a_2 \times \bm a_4 ) \bigr]^2 } { \det \chi ( \textbf{e})},\ ] ] ^ 2 + \bigl [ \bm a_3 \cdot ( \bm a_4 \times \bm a_2 ) \bigr]^2 } { \det \chi ( \textbf{e } ) } , \ ] ] , and .noting that and we obtain we can thus conclude therefore which is our objective .this work was supported by a grant - in - aid for scientific research ( grant no .17071005 ) and by a 21st century coe program at tokyo tech , `` nanometer - scale quantum physics '' , from the ministry of education , culture , sports , science and technology of japan . | we formulate the accuracy of a quantum measurement for a qubit ( spin-1/2 ) system in terms of a 3 by 3 matrix . this matrix , which we refer to as the accuracy matrix , can be calculated from a positive operator - valued measure ( povm ) corresponding to the quantum measurement . based on the accuracy matrix , we derive trade - off relations between the measurement accuracy of two or three noncommuting observables of a qubit system . these trade - off relations offer a quantitative information - theoretic representation of bohr s principle of complementarity . they can be interpreted as the uncertainty relations between measurement errors in simultaneous measurements and also as the trade - off relations between the measurement error and back - action of the measurement . a no - cloning inequality is derived from the trade - off relations . furthermore , our formulation and the results obtained can be applied to analyze quantum - state tomography . we also show that the accuracy matrix is closely related to the maximum - likelihood estimation and the fisher information matrix for a finite number of samples ; the accuracy matrix tells us how accurately we can estimate the probability distributions of observables of an unknown state by a finite number of quantum measurements . |
this paper generalizes the discrete memoryless channel ( dmc ) coding theorem for multiple codebooks of .similar models have been analyzed recently in the context of random access communication , see , , ( typically for multiple access scenarios , not entered here ) , and unequal error protection .it is assumed that the sender has a codebook library consisting of several codebooks .each codebook consists of codewords of the same length and type . as a new feature compared to , here not only the type but also the length of the codewords may vary across codebooks , thus a model in between fixed length and variable length coding is addressed .the sender uses the codebooks alternately , in any order he chooses .the receiver is not aware of the codebook choices of the sender .the different codeword lengths cause a certain asynchronism at the receiver , who should also estimate the boundaries of the codewords and avoid error propagation . a maximal mutual information based universal decoder is proposed to meet these challenges .the main theorem shows that simultaneously for each codebook , the same error exponent can be achieved as the random coding error exponent for this codebook alone provided that it does not exceed the rate of this codebook .the method of types , more exactly the subtypes technique of is used along with second order types ( see for a detailed explanation ) . in a related work problem of transmitting a discrete memoryless source over a dmc with variable - length codes is analyzed . using random coding argument along with maximum likelihood decoder that variable - length source - channel codes achieve an error exponent equal to the random coding exponent of the channel evaluated at the source entropy .a channel coding problem as in this work is not explicitly discussed in . in this paper , to keep the discussion simple , all codeword length ratios are assumed to be in . this assumption could be relaxed replacing by an arbitrary constant , then our main theorem could be used to give an alternative proof of this result of .the advantage of this approach would be the universality of the decoder . due to the length - constraint ,we omit the elaboration of this idea .note that the topic of this paper is also connected to strong asynchronism ( , ) .the notation denotes a quantity growing subexponentially as , that could be given explicitly . for some subexpontial sequences individual notationsare used and the parameters on which these sequences depend will be indicated in parantheses .denote the set by ] , distributions \} ] be given parameters .a codebook library with the above parameters , denoted by , consists of constant composition codebooks such that with , ] . in the sequel , be referred to as length - bound .the transmitter continuously sends messages to the receiver through channel . before sending a message, the transmitter arbitrarily chooses one codebook of the library .this choice is not known to the receiver .the performance of the following decoder is analyzed .the output symbols of the channel are denoted by the infinite sequence .assume that decoding related to symbols is already performed and now the position of is analyzed .in the first stage of decoding the decoder tries to find indices which uniquely maximize if the decoder successfully finds a unique maximizer , the second stage of decoding starts . 
in this stage , if for all and the maximum of ( [ dekodolomuk ] ) is strictly larger than the decoder decodes as the codeword sent in the positions of and jumps to the position of , there the same but shifted procedure is performed .otherwise the decoder goes to the position of without decoding at the position of .[ stage2 ] . the output of this decoder can be described by a sequence of triplets , where the first coordinate is a codebook index , the second one is a message index related to this codebook , and the third refers to the starting position of the codeword .this sequence is denoted by . are considered.,width=321 ] as mentioned above , the transmitter arbitrarily chooses codebooks .his choices are described by an infinite codebook index sequence where ] .let be the starting position of the message .the average decoding error probability of the message is defined by where the probability is calculated over the random choice of the messages , and the channel transitions .capital letters are used to indicate randomness . in practice ,typically also the sequence is random : the messages at any time instant may be one of different kinds , with equiprobable messages of kind .of course , this scenario is covered by our main theorem , as our bound does not depend on .[ maintheorem ] for each let codebook library parameters as in definition [ constantcomposition ] be given with length - bound and with as . then there exist a sequence with and for each a codebook library with the given parameters such that for all infinite codebook index sequences and index where is the random coding error exponent function , i.e. , it is equal to to avoid additional technical difficulties it is assumed in the calculation below that all three consecutive messages are different ( see the terms below the sum in ( [ packingt2 ] ) and ( [ packingt3 ] ) ) .the minimum in ( [ felsobecsles ] ) is present due to this assumption .the next packing lemma provides the appropriate codebook library for theorem [ maintheorem ] .note that the constructed codebook library works simultaneously for all infinite codebook index sequences .the notations are explained on fig .[ packing ] . , , , , , and .] [ rc - packing - lemma ] let a sequence of codebook library parameters be given as in theorem [ maintheorem ] . 
then there exist a sequence with and for each a codebook - library with the given parameters such that the following bounds hold : _ t1 : _ for any ] with , and for all joint types \triangleq \sum_{{\genfrac{}{}{0pt}{}{a \in [ n^{k_1 } ] , d \in [ n^{\hat{k}}]}{d \ne a \textnormal { if } k_1=\hat{k}}}}\hspace{-11pt}\mathds 1_{t1,q}^{v_{1}}({\mathbf x}_a^{k_1 } , { \mathbf x}_{d}^{\hat{k } } ) \label{packingt1}\\ & \le \delta^{'}_n \cdot 2^{-n_1\operatorname{i}_{v_1}(x \wedge \hat x)+l^{k_1}r^{k_1}+l^{\hat{k}}r^{\hat{k } } } \notag\end{aligned}\ ] ] here , the indicator function equals if filling the pattern t1 of fig .[ packing ] by and the joint type of the indicated subsequences equals .furthermore , denotes the set of all joint type pairs that may occur in this way ._ t2 : _ for any ] , ] with and for all : \label{packingt2 } \\ & \triangleq \sum_{{\genfrac{}{}{0pt}{}{a \in [ n^{k_1}],b \in [ n^{k_2 } ] , d \in [ n^{\hat{k}}]}{a\ne b \text { if } k_1=k_2}}}\mathds 1_{t2,q}^{v_{1},v_{2}}({\mathbf x}_a^{k_1 } , { \mathbf x}_b^{k_2 } , { \mathbf x}_{d}^{\hat{k } } ) \notag\\ & \le \delta^{'}_n \cdot 2^{-\sum\limits_{i=1}^{2 } n_i\operatorname{i}_{v_i}(x \wedge \hat x)-l^{\hat{k}}f(v_1^{\hat{x}},v_2^{\hat{x}})+\sum\limits_{i=1}^{2}l^{k_i}r^{k_i } + l^{\hat{k}}r^{\hat{k } } } \notag\end{aligned}\ ] ] here , , the indicator function equals if filling the pattern t2 of fig. [ packing ] by , and the joint types of the indicated subsequences equal and respectively .furthermore , denotes the set of all joint type triples that may occur in this way . _t3 : _ for any ] , ] , ] the codewords of are chosen independently and uniformly from .denote the exponents in upper - bounds ( [ packingt1 ] ) , ( [ packingt2 ] ) and ( [ packingt3 ] ) by ] and ] , ] and ] and , let be following event \text { } ( c\ne m_j \text { if } k = h_j \text { and } d = s_j ) , \end{array}\hspace{-3pt}\right\ } \label{errorpattern}\ ] ] and let denote its probability .then for and by standard argument to prove the theorem it is enough to show that ( [ biz1 ] ) holds for all $ ] and ( the number of such pairs is subexponential ) .fix such a pair . without any loss in generalityassume that and that both the and messages affect outputs ( see fig .[ hibabecsleskep ] ) .the analyzes of other cases are similar .assume further that .let .then can be upper - bounded by : , b \in [ n^{h_{j-1}}]}{c \in [ n^{h_{j } } ] } } } \hspace{-28pt } { \textnormal{pr}}\ { \mathcal{e}_j^{\mathbf{h}}(k , d ) | m_{j-2}=a , m_{j-1}=b , m_j = c\ } \label{teljesval}\ ] ] let , , , and ( lemma [ rc - packing - lemma ] will be used with these choices ) and be equal to let be the length of the analyzed output window ( see again fig . [ hibabecsleskep ] ). 
then ( [ teljesval ] ) can be further upper - bounded by , b \in [ n^{h_{j-1}}]}{c \in [ n^{h_{j } } ] } } } \hspace{-1pt } \prod_{i=1}^{4 } \hspace{-1pt } \big ( 2^{-n_i \operatorname{d}(v_i^{y|x}||w|v_i^{x } ) } \notag\\ & \cdot 2^{-n_i \operatorname{h}_{v_i}(y|x ) } \big ) \cdot \big| \{\mathbf{y } \in \mathcal{y}^{l}:\mathds{1}_{f4}^{\mathbf{v } } \ { { \mathbf x}_a^{k_1},{\mathbf x}_b^{k_2},{\mathbf x}_c^{k_3 } , { \mathbf x}_d^{\hat{k}},{\mathbf y}\ } \notag\\ & = 1 \text { for some } d \in [ n^{\hat{k } } ] \big| , \label{hibabindikator}\end{aligned}\ ] ] where the indicator function equals if filling the pattern of fig .[ hibabecsleskep ] by , , , and the joint types of the indicated subsequences equal , , and respectively .we can upper - bound the set size in ( [ hibabindikator ] ) two different ways .the first bound is .let , .the second bound is : substituting these bounds into ( [ hibabindikator ] ) and using ( [ packingt3 ] ) we get that can be upper - bounded by : the term inside can be bounded from below by by convexity ( [ lassanuccso ] ) can be further lower - bounded by substituting ( [ lassanuccso2 ] ) into ( [ hibabindikator2 ] ) and using ( [ hibatipus ] ) we get hence , choosing , continuity argument and the convexity of the divergence prove if or then in ( [ teljesval ] ) we have to separately investigate the cases when there is coincidence between , and .the minimum in ( [ felsobecsles ] ) is present due to this cases .we would like to thank prof .imre csiszr for his help and advice .we also thank the support of the hungarian national foundation for scientific research grant otka k105840 and the mta - bme stochastics research group .i. csiszr , j. krner , _ information theory , coding theorems for discrete memoryless systems , edition _ , cambridge university press , 2011 .i. csiszr , `` joint source - channel error exponent , '' _ prob .contr . & info .vol . 9 , no . 5 , pp.315323 , 1980 . i. csiszr , `` the method of types , '' _ ieee transactions on information theory _ , vol .2505 - 2523 , 1998 . l. farkas and t. ki , `` random access and source - channel coding error exponents for multiple access channels , ''_ ieee transactions on information theory _ , vol .61 , pp . 3029 - 3040 , jun . 2015 | csiszr s channel coding theorem for multiple codebooks is generalized allowing the codeword lenghts differ across codebooks . it is shown that simultaneously for each codebook an error exponent can be achieved that equals the random coding exponent for this codebook alone , with possible exception of codebooks with small rates . multiple codebook , error exponent , variable length , joint source - channel coding , random access |
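a remark for orientation on the preceding article : the random coding error exponent invoked in theorem [ maintheorem ] is , in its standard constant - composition ( csiszar - korner ) form ,
\[
  e_r(r , p , w) \;=\; \min_{v}\;\big[\,\operatorname{d}(v\,\|\,w\,|\,p) \;+\; \big|\operatorname{i}_{p,v}(x \wedge y) - r\big|^{+}\,\big] ,
\]
where the minimum runs over all channels v from the input to the output alphabet , \operatorname{d}(v\|w|p) is the conditional divergence , \operatorname{i}_{p,v}(x \wedge y) is the mutual information induced by the input type p and the channel v , and |t|^{+} = \max(t , 0) . this is only a reminder of the classical expression , stated here with generic notation ; the theorem uses it with the composition and rate of each codebook in the library .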
quantum mechanics on metric graphs is a subject with a long history which can be traced back to the paper of ruedenberg and scherr on spectra of aromatic carbohydrate molecules elaborating an idea of l. pauling .a new impetus came in the eighties from the need to describe semiconductor graph - type structures , cf . , and the interest to these problems driven both by mathematical curiosity and practical applications is steadily growing ; we refer to or the proceedings for a bibliography to the subject .since quantum graphs are supposed to model various real graph - like structures with the transverse size which is small but non - zero , one has to ask naturally how close are such system to an `` ideal '' graph in the limit of zero thickness .this problem is not easy and a reasonable complete answer is known in case of `` fat graphs '' with neumann boundary conditions and similar systems .a pioneering work in this area was done by freidlin and wentzell and the papers and can be mentioned as important milestones .we managed to contribute to this problem in a series of papers , , and , in which we improved the approximation using the intrinsic geometry of the manifold only , demonstrating the norm resolvent convergence , and finally extending the approximation also to resonances by means of complex scaling .while these results provide in our opinion a solid insight into the neumann - type situation , we must acknowledge as the authors that the three papers are long and rather technical , and some may find them not easy to read .this motivated us to write the present survey in which we intend to describe this family of approximation results without switching in the heavy machinery ; let the reader judge whether we have succeeded . before proceeding let us mention that there is an encouraging recent progress in the more difficult dirichlet case , see ,however , we will not discuss it here .let us briefly describe the contents of the paper . in the next sectionwe describe the two basic objects of this paper , quantum graphs and graph - like manifolds ( cf .figure [ fig : graph - mfd ] ) .section [ sec : disc - spec ] is devoted to convergence of the discrete spectrum summarizing the main results of ref .an extension to non - compact graphs and a resolvent convergence coming from is given in section [ sec : res - conv ] .finally , in section [ sec : reson - conv ] describe the results of ref . showing how the resonances on quantum graphs and graph - like manifolds approximate each other .[ fig : graph - mfd ] ( 0,0 ) with one external edge , five internal edges and four vertices and the associated graph - like manifold , here with cross section manifold .,title="fig : " ] ( 5172,3964)(532,-3415 ) ( 5344,-720 ) ( 4621,-991 ) ( 655,-171 ) ( 5704,-2645) ( 4714,-3231) ( 655,-2419 ) that is a connected metric graph given by where is a usual graph , i.e. , denotes the set of vertices , denotes the set of edges , associates to each edge the pair of its initial and terminal point ( and therefore an orientation ) .the space being a _ metric _ graph means that there is a _ length function _ } ] is a smooth function with the interpolating contribution related to is needed in order to make continuous at each vertex .the following lemma ensures that the error coming from this correction remains small : [ lem : cn ] we have for all and , where denotes the -norm on .due to remark [ rem : collar ] , each component of has a collar neighborhood of length ( in the unscaled coordinates of ) . 
the cauchy - schwarz inequality and the sobolev trace estimate ( see e.g. ( * ? ? ?8) ) yield ( with in the collar coordinates , and then integration over ) where is considered to be a constant function on and is the -norm on .now is orthogonal to the first ( constant ) eigenfunction of the neumann laplacian on , and the min - max principle ensures that the squared norm of can be estimated by . using the scaling of the metric and , we obtain the desired estimate. we also have to make sure that eigenfunctions of belonging to eigenvalues _ bounded _ with respect to , can not concentrate on the vertex neighborhoods : [ lem : vx ] we have for , where is any edge adjacent to the vertex .the constant in this inequality depends only on , and .we employ the estimate the first summand can be treated as in the previous proof , the second by lemma [ lem : cn ] , and the last one by a sobolev trace estimate on the _ edge _ neighborhood similar to , namely .it remains to show that the conditions are fulfilled .we do not give the details here referring to ( * ? ? ?* sec . 5 ) .the proof of is simple , and it works even with , hence one obtains a stronger estimate , . for the opposite inequality , we need to verify . to this endwe need lemma [ lem : cn ] in the norm and a quadratic form estimate .the norm estimate uses in addition lemma [ lem : vx ] , and the estimate which follows from . for the quadratic form estimate we needthe simple cauchy - schwarz bound .similar results can be obtained for more general situations when the vertex and edge neighborhoods scale at different rates , cf . , and in certain situations also for the dirichlet laplacian , cf . , where , however , the resulting graph operator is decoupled .next we would like to go further and prove also results for non - compact graphs , and also convergence of eigenfunctions or resolvents . to doso we need some more notation .we write and for the -dependent spaces , .we stress that the parameter enters only through the quantity and one can interpret it as a label for the second hilbert space involved see also the appendix in for the concept of a `` distance '' between two hilbert spaces and associated non - negative operators . for brevity, we set [ def : quasi ] we say that an operator is _ -quasi - unitary _ with respect to , iff , , and where is the identity on , and is the operator norm of . in our particular situation , we will employ the quasi - unitary operator where is the lowest normalized eigenfunction on and in turn is the zero function on .the quasi - unitarity is stated in the following lemma : [ lem : j.quasi ] the map defined by ( [ eq : j ] ) is -quasi - unitary , where depends only on , and .a simple calculation shows that , , and that the function is orthogonal to the constant function on , and by the min - max principle we infer that where is the first non - zero ( neumann ) eigenvalue on , and is the derivative with respect to the transverse variable(s ) .the estimate of the sum over the vertex contributions follows from lemma [ lem : vx ] .we also need a tool to compare the laplacians on and .to this end we put : we say that and are _-close _ w.r.t .the map iff where denotes the operator norm of .note that a -quasi - unitary map is indeed unitary .furthermore , if and are -close with respect to a -quasi - unitary map , then and are unitarily equivalent . in this sense, the concept of quasi - unitarity and closeness provides a quantitative way to measure how far a pair of operators is from being unitarily equivalent . 
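the mechanism behind lemma [ lem : cn ] and behind the closeness estimates below can be seen numerically in the simplest possible situation : on a tube of width \varepsilon the first non - zero transverse neumann eigenvalue is ( \pi/\varepsilon)^2 , so inside any fixed spectral window only transversally constant modes survive and the low - lying spectrum of the tube reproduces that of the underlying edge . the sketch below illustrates this for a single `` fat edge '' [0,1] \times [0,\varepsilon] with a finite - difference neumann laplacian ; the grid sizes and widths are illustrative choices and the construction is not taken from the paper .
\begin{verbatim}
import numpy as np

def neumann_1d(n, length):
    # second-order finite-difference neumann laplacian on [0, length]
    h = length / (n - 1)
    l = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    l[0, 0] = l[-1, -1] = 1.0          # reflecting (neumann) endpoints
    return l / h**2

nx = 60                                # longitudinal resolution
lx = neumann_1d(nx, 1.0)
print("edge eigenvalues:", np.round(np.sort(np.linalg.eigvalsh(lx))[:5], 2))

for eps in (0.5, 0.2, 0.05):
    ny = max(4, int(nx * eps))         # transverse resolution
    ly = neumann_1d(ny, eps)
    gap = np.sort(np.linalg.eigvalsh(ly))[1]   # first non-constant transverse mode
    # neumann laplacian on the fat edge [0,1] x [0,eps] as a kronecker sum
    l2d = np.kron(lx, np.eye(ny)) + np.kron(np.eye(nx), ly)
    low = np.sort(np.linalg.eigvalsh(l2d))[:5]
    print(f"eps = {eps:4.2f}: transverse gap ~ {gap:8.1f},",
          "low tube eigenvalues", np.round(low, 2))
\end{verbatim}
in this separable toy case the agreement below the transverse gap is exact by construction ; the content of the theorems above is that the same picture , with explicit error bounds , survives the genuinely two - dimensional vertex neighbourhoods .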
in order to show that the operators and are-close , it is often easier to deal with the respective quadratic form domains as we have already done when demonstrating the convergence of the discrete spectrum .we thus want to compare the identification operators on the scale of order with the quasi - unitary map : we say that the identification maps are _-compatible _ with the map iff by means of an adjoint we obtain from a natural map now it is easy to derive the following criterion for -closeness : [ lem : j.comm1 ] assume that and are -compatible w.r.t .the map , and that then and are -close with respect to .we first check that the identification maps and are indeed -compatible with respect to the map : [ lem : j.comp ] the maps and as defined in and are -compatible with , where depends only on , , and . we have by a standard sobolev estimate see , e.g. , estimate or ( * ? ? ?2.4 ) ) we can estimate the latter sum by .for the other identification operator we have using now lemma [ lem : cn ] and reordering the sum , we obtain the additional factor , the maximum degree of a vertex , and the second estimate follows as well .next , we will now indicate briefly how to prove the closeness of the laplacians : [ lem : j.comm ] the laplacians and are -close with respect to the map defined in where depends only on , and .we check the condition of lemma [ lem : j.comm1 ] which reduces to estimating in terms of ; the claim follows from lemma [ lem : cn ] and cauchy - schwarz .putting together the previous results , we come to the following conclusion : [ thm : res ] adopt the uniformity conditions and. then the laplacians and are -close with respect to the quasi - unitary map defined in , i.e. where the error term depends only on , , and .in addition , we have the first estimate follows from lemmata [ lem : j.comm1][lem : j.comm ] ; the second one in turn is a consequence of the first estimate and lemma [ lem : j.quasi ] . one can now develop the standard functional calculus for the laplacians and , and deduce estimates similar to the ones in theorem [ thm : res ] , but with the resolvent replaced by more general functions of the laplacians . specifically , need to be measurable , continuous in a neighborhood of the spectrum of , and the limit at infinity must exist . for example , one can control the heat operators via or the spectral projectors via . a proof of the following resultcan be found in the appendices of the paper , see also : [ thm : res.spec ] under the assumptions of the previous theorem , we have for the spectral projections provided is a compact interval such that . in particular , if contains a single eigenvalue of with multiplicity one corresponding to an eigenfunction , then there is an eigenvalue and an eigenfunction of such that in addition , the spectra converge uniformly on ] .the same result is true if we consider only the essential or the discrete spectral components .naturally , the above stated spectral convergence reduces to the claim of theorem [ thm : disc ] in the situation when the spectra are purely discrete .in the final section we will deal with the convergence of resonances in the present setting .it is useful to include into the considerations also eigenvalues embedded in the continuous spectrum , because it may happen that resonances of a `` fat graph '' converge to such an eigenvalue , as it can be seen , e.g. 
, in a simple motivating example of the metric graph consisting of a single loop with a half - line `` lead '' attached .a standard and successful method of dealing with resonances is based on the concept of _ complex scaling _, often an _ exterior _ one .the method has its roots in the seminal papers and a lot of work was devoted to it ; we refer to for a sample bibliography .the main virtue is that it allows to reformulate treatment of resonances , i.e. poles of the analytically continued resolvent , and embedded eigenvalues , to analysis of discrete eigenvalues of a suitable _ non - selfadjoint _ operator . as we will see below the complex - scaling approach suits perfectly , in particular , to our convergence analysis . in this section, we assume that the metric graph is finite , but non - compact , i.e. which means , in particular , that the assumptions and are satisfied .we decompose the metric graph and the graph - like manifold into an _ interior _ and _ exterior _ part and , respectively . for technical reasons ,it is easier to do the cut not at the initial vertices of an external edge , but at a fixed distance , say one , from along , and similarly for the graph - like manifold .we therefore consider the _ internal _ metric graph consisting of all vertices , all edges of finite length and the edge parts for each external edge .the exterior metric graph is just the disjoint union of -many copies of a half - line , and we use the corresponding parametrization on an external edge . in other words , we do not regard the _ boundary points _ as vertices . similarly , let be the common boundary of and ; note that is isometric to -many copies of .now we introduce the exterior dilation operator .for we define by one - parameter unitary groups on and , respectively , acting non - trivially on the _ external _ part only .we call the operator for the _ dilated _ laplacian on with the domain for _ real _ .a simple calculation shows that holds for internal edges and that is true for external edges with the domain given by here , are the functions from restricted to , i.e. without any condition at .in addition , is defined as the longitudinal derivative on the common boundary oriented _ away _ from the internal part .the expression of now can be generalized to _ complex _ in the strip where ; we call the corresponding the _ complex dilated _ laplacian .the operators form a family with spectrum contained in the common sector moreover , we can determine the essential spectrum coming from the external part .note that the branches of the essential spectrum associated with the higher transverse eigenvalues , on the graph - like manifold all vanish as .[ lem : ess.sp ] let the metric graph be finite and non - compact , i.e. is fulfilled .then in particular , since , we have for any bounded set provided is small enough .in addition , we have the following important result : [ thm : analytic ] for not contained in the -sector , the resolvents depend analytically on .this is a highly non - trivial fact since is neither of type a nor of type b , in other words , both the sesquilinear form and the operator domains depend on even for real . to put it differently, the ( non - smooth ) exterior scaling as defined here is a very singular perturbation of the operator .the main idea is to compare with the resolvent of a _ decoupled _ operator , where one imposes dirichlet boundary conditions at .an inspiration for such an idea was used in ; for a full proof in our situation we refer to . 
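the effect of the scaling is easy to visualise in a one - dimensional toy model . the sketch below uses full ( not exterior ) complex scaling of a schroedinger operator on the line with an entire , dilation - analytic potential : after the substitution x \to x e^{i\theta} the discretised operator becomes h_\theta = -e^{-2i\theta}\,d^2/dx^2 + v(x e^{i\theta}) , its `` continuum '' eigenvalues rotate by the angle -2\theta into the lower half - plane , and eigenvalues that remain ( approximately ) fixed when \theta is varied are the resonance candidates . the potential , the grid and the two angles are arbitrary illustrative choices , and the model is a schroedinger operator rather than a graph laplacian , so this only illustrates the mechanism , not the construction used in the paper .
\begin{verbatim}
import numpy as np

n, box = 600, 40.0                       # grid size and half-length of the box
x = np.linspace(-box, box, n)
h = x[1] - x[0]

def v(z):
    # entire (hence dilation-analytic) potential: a well between two barriers,
    # supporting shape resonances but no bound states
    return 8.0 * z**2 * np.exp(-0.1 * z**2)

def h_theta(theta):
    d2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    return -np.exp(-2j * theta) * d2 + np.diag(v(x * np.exp(1j * theta)))

for theta in (0.20, 0.35):
    ev = np.linalg.eigvals(h_theta(theta))
    # keep only the part of the spectrum near the positive real axis; entries
    # that do not move when theta changes are resonance candidates, entries
    # that rotate with theta belong to the discretised continuum
    window = sorted((e for e in ev if 1.0 < e.real < 15.0 and e.imag > -0.6),
                    key=lambda z: z.real)
    print(f"theta = {theta:4.2f}:", np.round(np.array(window), 3))
\end{verbatim}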
as a consequence, we have the following result on the discrete spectrum . see ( * ? ? ?5.8 ) for more details . ][ lem : disc ] the discrete spectrum of is locally constant in . as a consequence ,discrete ( complex ) eigenvalues are `` revealed '' if is positive and large enough .the same is true for eigenvalues embedded into the continuous spectrum of the laplacian ; in this case it is sufficient to have .this is crucial for the above mentioned reformulation .recall that by the most common definition a _ resonance _ is a pole in a meromorphic continuation of the resolvent over the cut corresponding to the essential spectrum into the `` unphysical sheet '' of the riemann energy surface .rotating the essential spectrum one can reveal these singularities ; this allows us to identify a _ resonance _ of with a complex -eigenvalue of the dilated operator for large enough .notice also that such a definition is consistent : it does not depend on where we cut the spaces into an interior and exterior part ; of course , as far as the interior part remains compact . in order to demonstrate convergence properties of resonances ,we need to introduce a scale of hilbert spaces associated to the non - selfadjoint operator .in particular , we set with the norm and with the dual norm ; for details we refer to ( * ? ? ?we say that an operator is _ -quasi - unitary _ with respect to , iff , , and where and is the operator norm of .it is much easier to use the -quasi - unitarity with respect to the non - dilated operator ( via its quadratic form ) as in definition [ def : quasi ] . in particular, we would like to compare the non - dilated scale of order and the dilated scale of order : and are _ compatible _ if there is a family of bounded , invertible operators on such that , and .given , we say that and are _ uniformly _ compatible with respect to , if there is a constant , _ independent _ of , such that in the situation we consider there is a natural candidate for , namely in contrast to , the operators are defined also for _ complex _ values of . again , as for the analyticity , the proof of uniform compatibility in our example needs some technical preliminaries which we skip here .in essence , one needs to define the resolvent as a bounded operator from to with -independent bound , where is an appropriate space of order for , not necessarily related to .[ lem : comp.dil ] for a given , the operators and its complex dilated counterparts are ( uniformly ) compatible . in particular , the map as defined in is -quasi - unitary with respect to , where depends on . the compatibility is demonstrated in ( * ? ? ?the second assertion follows from where we employed the fact that and lemma [ lem : j.quasi ] .now we are in position to state our first convergence result of this section : [ thm : res.dil ] assume that the metric graph is finite and non - compact , cf . .then the complex dilated laplacians and are -close , i.e. where depends on .the proof is essentially the same as for the non - dilated case .first we define operators on the scales of order one , specifically in exactly the same way as in and .recall that we do not consider the boundary points between the internal and external parts as vertices , i.e. the graph - like manifold does not have a vertex neighborhood there . hence and then we have r_0^\theta\ ] ] where .the last difference at the right - hand side can be estimated by using lemma [ lem : j.comp ] and lemma [ lem : comp.dil ] , and the first one can be treated similarly . 
for the remaining termwe observe that in order to prove it suffices to show for and .from the compatibility , we obtain and similarly for ; in particular , we can choose , however , the estimate is almost the same as in the non - dilated case given in lemma [ lem : j.comm ] . as in the non - dilated case , on can develop a functional calculus for the pairs of operators and cf .since now the operators are not self - ajoint , we only have a _ holomorphic _ functional calculus .in particular , we can show for the spectral projections , provided is an open disc containing a single discrete eigenvalue of . from here our main result on resonances follows : [ thm : resonances ] assume that the metric graph is finite and non - compact , cf . .if is a resonance of the laplacian with a multiplicity , then for a sufficiently small there exist resonances of , satisfying and not necessarily mutually different , which all converge to as .the same is true in the case when is an embedded eigenvalue of , except that then only holds in general .finally , if the multiplicity of is one with a normalized eigenfunction ( corresponding to a _ resonance _ or _ embedded eigenvalue _ for ) , then there exists a normalized eigenfunction ( related to the respective entity for ) on the graph - like manifold ) such that first author acknowledges a partial support by gaas and meys of the czech republic under projects a100480501 and lc06002 , the second one by dfg under the grant po-1034/1 - 1 . , _ asymptotics of spectra of neumann laplacians in thin domains _ , advances in differential equations and mathematical physics ( birmingham , al , 2002 ) , contemp . math . , vol .327 , amer .soc . , providence , ri , 2003 , pp . | quantum networks are often modelled using schrdinger operators on metric graphs . to give meaning to such models one has to know how to interpret the boundary conditions which match the wave functions at the graph vertices . in this article we give a survey , technically not too heavy , of several recent results which serve this purpose . specifically , we consider approximations by means of `` fat graphs '' in other words , suitable families of shrinking manifolds and discuss convergence of the spectra and resonances in such a setting . |
a reduced model for slow variables of multiscale dynamics is a lower - dimensional dynamical system , which `` resolves '' ( that is , qualitatively approximates in some appropriate sense ) major large scale slow variables of the underlying higher - dimensional multiscale dynamics while at the same time being relatively simple and computationally inexpensive to work with .this is important in real - world applications of contemporary science , such as geophysical science and climate change prediction , where the actual underlying physical process is impossible to model directly , and its reduced approximation has to be designed for such a purpose .reduced dynamics were used to model global circulation patterns , and large - scale features of tropical convection . typically , reduced models of multiscale dynamics consist of simplified lower - dimensional dynamics of the original multiscale dynamics for the resolved variables , with additional terms and parameters which serve as replacements to the missing coupling terms with the unresolved variables of the underlying physical process . these extra parameters in the reduced model are usually computed to match a particular dynamical regime of the underlying multiscale dynamics .in particular , if the underlying multiscale process changes its dynamical regime ( for example , in response to changes in its own forcing parameters ) , then the parameters of the corresponding reduced model have to be appropriately readjusted to match its dynamical regime to the new regime of the multiscale dynamics . in some real - world applications , such as the climate change prediction ,the reduced models of complex multiscale climate dynamics are used to predict the response of the actual multiscale climate to changes in various global atmospheric and oceanic parameters . however , while a reduced model may be manually adjusted to match a particular dynamical regime of a multiscale process , it is unclear whether it should respond to identical external perturbations _ a priori _ in the same way as the multiscale process , without any extra readjustments . how do reduced models of multiscale dynamics , adjusted to a particular dynamical regime , respond to external perturbations which force them out of this regime ?is their response similar to the response of the underlying multiscale dynamics to the same external perturbation ?it is quite clear that the reduced dynamics evolve on a set with lower dimension than that of the full multiscale dynamics .how do the properties of this limiting set respond to changing external forcing parameter , in comparison to the full multiscale attractor ? 
herewe develop a set of criteria for similarity of the response to small external perturbations between slow variables of multiscale dynamics and those of a reduced model for slow variables only , determined through statistical properties of the unperturbed dynamics .we also carry out a computational study of the difference in responses of the full multiscale and deterministic reduced dynamics of the linearly coupled rescaled lorenz 96 model from to identical external perturbations .we compare and contrast both the actual ( `` ideal '' ) responses of the multiscale and reduced models directly to finitely small perturbations of external forcing , and the linear response predictions of the reduced models via the fluctuation - dissipation theorem .two different types of forcing perturbations are used : the time - independent heaviside forcing , and the simple time - dependent ramp forcing .the manuscript is structured as follows . in section [ sec : averaged ] we formulate the standard averaging formalism to obtain the averaged slow dynamics from a general two - scale dynamical system .section [ sec : response ] describes statistically tractable criteria to ensure similarity of responses between a two - scale system and its averaged slow dynamics . in section [ sec : implement ] we describe the first - order reduced model approximation to a two - scale dynamics with linear coupling between the slow and fast variables , previously developed in . in section [ sec : lorenz96 ] we introduce the two - scale lorenz 96 toy model which will be our testbed for this method . in section [ sec : results ]we present comparisons of the large features of the multiscale and reduced systems , including statistical comparisons as well as the ability of the reduced model to capture perturbation response of the multiscale system .section [ sec : summary ] summarizes the results and suggests future work .a general two - scale dynamical system with slow variables and fast variables is usually represented as where , are the slow variables of the system , are the fast variables , and and are nonlinear differentiable functions .the integer parameters are the dimensions of the slow and fast variable subspaces , respectively .usually , a time - scale separation parameter is used to denote the difference in time scales between the slow and fast variables , however , here we omit it , as the framework for reduced models from , which we use here , does not require such a parameter to be explicitly present . under the assumption of `` infinitely fast '' -variables , one can use the averaging formalism to write the averaged system for slow variables alone : where is the invariant distribution measure of the fast limiting system with above in being a constant parameter .we express the slow solutions of the two - scale system in and the averaged system in in terms of differentiable flows : it can be shown ( see and references therein ) that if the time scale separation between and is large enough , then , for the identical initial conditions and generic choice of , the solution of the averaged system in remains near the solution of the original two - scale system in for finitely long time .let and denote the invariant distribution measures for the two - scale system in and the averaged system in , respectively . 
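as a minimal illustration of the averaging statement just made , consider a single slow variable coupled to a single fast , exponentially relaxing variable : for frozen x the fast dynamics settle at y = 2x , so the averaged vector field is simply f(x , 2x) , and the two slow trajectories stay close on an order - one time interval . all functions and constants below are illustrative choices , not taken from the paper .
\begin{verbatim}
import numpy as np

eps = 1.0e-3                     # time-scale separation of the toy model
dt, t_final = 1.0e-4, 5.0        # the step must resolve the fast scale
steps = int(t_final / dt)

def f(x, y):                     # slow vector field
    return -x + np.sin(y)

def g(x, y):                     # fast vector field: y relaxes quickly to 2x
    return (2.0 * x - y) / eps

# two-scale system (explicit euler, fast scale resolved)
x, y = 1.0, 0.0
xs_full = np.empty(steps)
for n in range(steps):
    x, y = x + dt * f(x, y), y + dt * g(x, y)
    xs_full[n] = x

# averaged system: the invariant "measure" of the frozen fast dynamics is the
# point mass at y = 2x, so the averaged right-hand side is f(x, 2x)
x_avg = 1.0
xs_avg = np.empty(steps)
for n in range(steps):
    x_avg += dt * f(x_avg, 2.0 * x_avg)
    xs_avg[n] = x_avg

print("max deviation of the averaged from the two-scale slow trajectory:",
      float(np.abs(xs_full - xs_avg).max()))
\end{verbatim}
here the fast invariant measure is degenerate ( a point mass ) , which makes the averaged system available in closed form ; in the lorenz 96 setting considered later the fast invariant measure is genuinely chaotic and the corresponding closure has to be constructed statistically .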
also , let be a differentiable test function .then , the statistically average values of for both two - scale and averaged systems are given via [ eq : h ] now , consider the two - scale system in , and the averaged system in , perturbed at the slow variables by a small time - dependent forcing : [ perturbed - eqn ] then , for small enough , the average responses and for the two - scale system in and the averaged system in , respectively , can be approximated by the following linear response relations : for details , see . above , it is clear that any differences between and are due to differences between and , since is identical in both cases .the differences between and are , in turn , caused by the differences between the flows and , and the differences between the invariant distribution measures and , which are difficult to quantify in practice . in what followswe express the differences between and via statistically tractable quantities .first , we assume that the invariant measures and are absolutely continuous with respect to the lebesgue measure , with distribution densities and , respectively : while it is known that purely deterministic processes may not have lebesgue - continuity of their invariant measures , however , even small amounts of random noise , which is always present in real - world complex geophysical dynamics , usually ensure the existence of the distribution density .integration by parts yields [ eq : r ] at this point , let us express as the product of its marginal distribution , defined as and conditional distribution , given by it is easy to check that the conditional distribution satisfies the identity now , the formula for the linear response operator above can be written as we now denote where is small compared to either or for relevant values of , and .then , for the second integral in the right - hand side of we write where the first integral in the right - hand side is zero due to the condition in . neglecting the terms in , we write at this point, we express and as exponentials where and are smooth functions , growing to infinity as becomes infinite .the latter yields replacing invariant measure averages with long - term time averages yields the following time correlation functions : [ eq : r2 ] taking into account the arbitrariness of , we conclude that , in order for to approximate despite the fact that , for long times , diverges from even for identical initial conditions , we generally need three conditions to be approximately satisfied : 1 . for identical initial conditions, should approximate ( that is , in should indeed be small ) on the finite time scale of decay of the correlation functions in ; 2 . should approximate , which means that the invariant distribution of the averaged system in should be similar to the -marginal of the invariant distribution of the two - scale dynamical system in ; 3 . the time autocorrelation functions of the averaged system in should be similar to the time autocorrelation functions of the slow variables of the two - scale system in . 
as a side note , observe that the nature of dependence of the conditional distribution on does not play any role in the criteria for the similarity of responses .in particular , the exact factorization of into its - and -marginals ( which means that is independent of ) is not required , unlike what was suggested in for the gaussian invariant states .as formulated above in sections [ sec : averaged ] and [ sec : response ] , the criteria of the response similarity are applicable for a broad range of dynamical systems with general forms of coupling and their averaged slow dynamics . however , the practical computation of the reduced model approximation to averaged slow dynamics depends on the form of coupling in the two - scale system . in this work , we consider the linear coupling between the slow and fast variables in the two - scale system .the linear coupling is the most basic form of coupling in physical processes , however , because of that it is also probably the most common form of coupling . for the linear coupling, the reduced model is constructed according to the method developed previously in , which we briefly sketch below .we consider the special setting of with linear coupling between and : where and are nonlinear differentiable functions , and and are constant matrices of appropriate sizes . the corresponding averaged dynamics for slow variables from simplifies to where is the statistical mean state of the fast limiting system with treated as constant parameter . in general , the exact dependence of on is unknown , except for a few special cases like the ornstein - uhlenbeck process . here , like in , we approximate via the linear expansion where is the statistical average state of the full multiscale system in , and . the constant matrix is computed as the time integral of the correlation function where is the solution of for .the above formula constitutes the quasi - gaussian approximation to the linear response of to small constant forcing perturbations in , and is a good approximation when the dynamics in are strongly chaotic and rapidly mixing . with, the reduced system in becomes the explicitly defined deterministic reduced model for slow variables alone : where . in what follows , the `` zero - order '' model refers to with the last term set to zero ( such that the coupling is parameterized only by the constant term ) . for details , see and references therein .in the current work , we test the response of the reduced model for slow variables on the rescaled lorenz 96 system with linear coupling , which is obtained from the original two - scale lorenz 96 system by appropriately rescaling dynamical variables to approximately set their mean states and variances to zero and one , respectively .below we present a brief exposition of how the rescaled lorenz 96 model is derived . the original two - scale lorenz 96 system is given by , \end{aligned}\right.\ ] ] where and periodic boundary conditions , and here and are constant forcing terms , and constant coupling parameters , and is the time scale separation parameter . throughout this paperwe will consider systems with twenty slow variables and eighty fast variables . 
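for concreteness , the sketch below integrates the classic ( unrescaled ) two - scale lorenz 96 system with twenty slow variables and four fast variables per slow one , i.e. eighty fast variables , using a fourth - order runge - kutta step . the parameter values are the conventional textbook choices and not the rescaled , linearly coupled parameters of the model above ; the sketch is meant only to fix the structure of the right - hand sides .
\begin{verbatim}
import numpy as np

k, j = 20, 4                 # 20 slow variables, 4 fast per slow (80 fast)
f, hc, b, c = 10.0, 1.0, 10.0, 10.0   # classic lorenz/wilks parameter choices

def tendencies(x, y):
    # slow tendency: advection, damping, forcing, summed feedback from the fast
    dx = (np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + f
          - (hc * c / b) * y.reshape(k, j).sum(axis=1))
    # fast tendency: same structure on a faster, smaller scale, driven by x
    dy = (c * b * np.roll(y, -1) * (np.roll(y, 1) - np.roll(y, -2)) - c * y
          + (hc * c / b) * np.repeat(x, j))
    return dx, dy

def rk4_step(x, y, dt):
    k1 = tendencies(x, y)
    k2 = tendencies(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = tendencies(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = tendencies(x + dt * k3[0], y + dt * k3[1])
    x_new = x + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y_new = y + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x_new, y_new

rng = np.random.default_rng(0)
x = f * np.ones(k) + 0.1 * rng.standard_normal(k)
y = 0.1 * rng.standard_normal(k * j)
dt, n_steps = 0.001, 20000          # dt must resolve the fast time scale

traj = np.empty((n_steps, k))
for n in range(n_steps):
    x, y = rk4_step(x, y, dt)
    traj[n] = x

print("slow-variable mean and std over the run:",
      np.round(traj.mean(), 3), np.round(traj.std(), 3))
\end{verbatim}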
in lorenz s original formulation studying predictability in atmospheric - type systems , he begins with the uncoupled system with periodic boundary conditions .this system has generic features of geophysical flows , namely a nonlinear advection - like term , linearly unstable waves , damping , forcing , mixing , and chaos .the simple formulation , with invariance under index translation and a uniform forcing term , allows for straightforward analysis - in particular the long - time statistics of each variable should be identical and will only depend on .additionally , the chaos and mixing of the system are simply regulated by the forcing , with decaying solutions for near zero , periodic solutions for slightly larger , weakly chaotic quasi - periodic solutions around , and chaotic and strongly mixing systems around and higher .lorenz s two - time coupled system was introduced to study predictability and lyapunov exponents of systems with subgrid phenomena on faster timescales , and one of the authors of the current work has recent results showing that coupling two chaotic systems can suppress chaos in the slower system . to simplify the analysis of coupling trends for the two - time system, we will scale out the dependence of the mean state and mean energy on the forcing term .due to the translational invariance , the long - term mean and standard deviation for the uncoupled system are the same for all .so we rescale and as where the new variables have zero mean and unit standard deviation , while their time autocorrelation functions have normalized scaling across different dynamical regimes ( that is , different forcings ) for short correlation times .this rescaling was previously used in . in the rescaled variables ,the uncoupled lorenz model becomes where and are functions of .we similarly rescale the coupled two - scale lorenz 96 model : where and are the long term means and standard deviations of the uncoupled systems with or as constant forcing , respectively .it is this rescaled coupled lorenz 96 system that we focus on for the closure approximation . before any numerical tests, one can already anticipate that the zero - order reduced system will be inadequate for this model even with such simple coupling .once the reference state is determined and computed , the zero - order reduced system is given , according to , by this is equivalent to perturbing by , which we expect to be small since and have zero mean in the uncoupled setting . in particular , we expect this perturbation to have only a small effect on the dynamics .however , in the multiscale dynamics it has been shown that a chaotic regime in the fast system can suppress chaos when coupled to the slow system , and this phenomenon is completely lost in the zero - order model .here we compare numerical results of the rescaled two - scale lorenz 96 system with its corresponding reduced system .in particular we look at the ability of the reduced system captures some statistical quantities and how well it captures mean response to perturbations in the slow variables . in all parameter regimesconsidered , we have a slow system consisting of twenty variables coupled with a fast system of eighty variables .we use a fourth order runge - kutta method with timestep in the multiscale system and in the reduced system . 
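the constant matrix of the first - order reduced model described above is obtained from unperturbed statistics of the fast limiting system alone . the sketch below shows one plausible way to assemble it : simulate the fast subsystem with the slow variables frozen at their mean state , estimate the lagged covariances of the fast variables , and integrate the quasi - gaussian response formula ; a slow displacement then enters through the coupling matrix . the function names , the generic coupling matrix a_coupling , and the integration horizon are illustrative assumptions and not the paper's exact implementation .
\begin{verbatim}
import numpy as np

def lagged_covariances(z_traj, max_lag):
    # sample covariances c(s) = <(z(t+s)-mean)(z(t)-mean)^T> from a long
    # trajectory z_traj of shape (n_samples, n_fast)
    z = z_traj - z_traj.mean(axis=0)
    n = z.shape[0]
    return [z[s:].T @ z[:n - s] / (n - s) for s in range(max_lag + 1)]

def first_order_closure(z_traj, a_coupling, dt, max_lag):
    # quasi-gaussian estimate of the sensitivity of the fast mean state to the
    # slow variables: the infinite-time response to a small constant forcing df
    # is (integral of c(s) c(0)^{-1} ds) df, and for linear coupling a slow
    # displacement dx produces the forcing increment a_coupling @ dx
    covs = lagged_covariances(z_traj, max_lag)
    c0_inv = np.linalg.inv(covs[0])
    integral = dt * (0.5 * np.eye(covs[0].shape[0])      # s = 0 term: c0 c0^{-1}
                     + sum(c @ c0_inv for c in covs[1:]))
    return integral @ a_coupling

# usage sketch (names are hypothetical): z_traj holds samples of the fast
# limiting system with the slow variables frozen at x_bar, spaced dt apart,
# and a_coupling is the matrix through which x enters the fast equations:
#   l_matrix = first_order_closure(z_traj, a_coupling, dt=0.01, max_lag=500)
#   zeta = lambda x: z_traj.mean(axis=0) + l_matrix @ (x - x_bar)
\end{verbatim}
the same quasi - gaussian formula , applied to the slow variables of the unperturbed reduced or multiscale dynamics , is what is used further below to predict the mean response to external forcing without rerunning perturbed ensembles .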
to compute the mean response , an ensemble of pointsis sampled from a single trajectory which has been allowed to settle onto the attractor .using the translational symmetry of the lorenz 96 system , we rotate the indices to generate an ensemble twenty times larger . on a modern laptop , the initial calculation to generate the reduced system for the lorenz 96 system takes only a few minutes ; once computed , numerical simulation of the reduced system is faster than the multiscale system by a factor on the order of .computing the mean response for a single forcing for 5 time units with a sufficiently large ensemble size ( trajectories ) takes over an hour in the multiscale system with but less than three minutes for the corresponding reduced system . in section [ sec : response ] we outlined the main requirements for correctly capturing the response of the two - scale system by its reduced model .those were the approximation of joint distribution density functions ( ddf ) for slow variables , and the time autocorrelation functions of the time series .it is , of course , not computationally feasible to directly compare the 20-dimensional ddfs and time autocorrelations for all possible test functions .however , it is possible to compare the one - dimensional marginal ddfs and simple time autocorrelations for individual slow variables , to have a rough estimate on how the statistical properties of the multiscale dynamics are reproduced by the reduced model . in figure [ ddfs ]we compare the distribution density functions and autocorrelation functions of the slow variables .the ddfs are computed using bin - counting , and the autocorrelation function , averaged over , is normalized by the variance .results from three parameter regimes are presented , and in all three regimes the fast system is chaotic and weakly mixing and the coupling strength is chosen to be large enough so that the multiscale dynamics are challenging to approximate .of particular interest are timescale separations of and .first we consider a chaotic and strongly mixing slow regime .figures are presented for the timescale separation only , because in this regime the picture is very similar for .we also consider a weakly chaotic and quasi - periodic slow regime . in this regime , the coupled dynamics are more dependent on the timescale separation so we present results for both and . statistical quantities of other regimes , including regimes with more periodic behavior , have been presented in . + [ cols="^,^ " , ] we observe that the first - order reduced system ideal response is a much closer approximation to the multiscale ideal response than the corresponding zero - order ideal response . in these regimesthe relative error of the first - order response is limited to about for the heaviside forcing and less for the ramp forcing , while in the zero - order system the error is around for the step forcing and for ramp forcing at time .remark that in the third plot for ramp forcing response in figure [ fdt_ideal_ramp_error ] there is a small bump in the relative error shortly after the onset of forcing .this plot corresponds to a weakly chaotic regime ( ) with a large timescale separation ( ) in the multiscale system . in this regimethe small nonlinear fluctuations of the multiscale system are relatively large compared to the ramp forcing for near zero , so the relative error of the reduced system responses is large . 
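the `` ideal '' mean response discussed here is conceptually just an ensemble average over attractor initial conditions with and without the extra forcing . a generic sketch , written against a user - supplied one - step integrator and a heaviside forcing of size delta , could look as follows ; the function names and argument conventions are placeholders rather than the actual code used for the figures .
\begin{verbatim}
import numpy as np

def ideal_mean_response(step, ensemble, dt, n_steps, delta):
    # step(state, dt, forcing) advances one state by dt under the unperturbed
    # dynamics plus a constant additive forcing on the slow variables;
    # ensemble has shape (n_members, n_slow) and is sampled from the
    # unperturbed attractor; delta may be a scalar or an array broadcast
    # onto the slow variables (heaviside forcing switched on at t = 0)
    n_members, n_slow = ensemble.shape
    resp = np.zeros((n_steps, n_slow))
    for member in ensemble:
        pert, ctrl = member.copy(), member.copy()
        for t in range(n_steps):
            pert = step(pert, dt, delta)     # forced trajectory
            ctrl = step(ctrl, dt, 0.0)       # paired control trajectory
            resp[t] += pert - ctrl
    return resp / n_members                  # mean response as a function of time

# usage sketch (hypothetical names):
#   resp = ideal_mean_response(my_rk4_step, initial_conditions,
#                              dt=0.01, n_steps=500, delta=0.1)
\end{verbatim}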
above in section [ meanresponse_subsection ]we discussed the actual responses of the statistical mean states of both the two - scale and reduced models to small heaviside and ramp forcings .for completeness of the study , we also attempt to predict the response of the mean state of the two - scale system via the quasi - gaussian linear response approximation of the reduced system . in the quasi - gaussian response approximation ,the terms and in are replaced with the gaussian approximations with same mean state and covariance matrices as in the actual dynamics .this , and the fact that in yields the following formula for the linear response approximation of the mean state response : [ fdt - resp ] where and are the mean state and covariance matrix of the corresponding unperturbed systems ( two - scale and reduced ) , computed as for large multiscale problems the mean response may be difficult to compute directly , since the large ensemble size needed for an accurate average is compounded by an already large number of variables and small timestep discretization . in the case where the mean response of the slow variables is desired , one might prefer to compute the fdt response ( [ fdt - resp ] ) using a time series from the reduced system for a `` quick and dirty '' approximation to the ideal response operator for the multiscale slow system .we show the accuracy of this fdt response approximation for the lorenz 96 system , using a quasi - gaussian approximation with time series data from the zero- and first - order reduced systems .the quasi - gaussian response snapshots for the response times and are shown in figures [ fdt_ideal ] and [ fdt_ideal_ramp ] for the heaviside and ramp forcing , respectively .qualitatively , the quasi - gaussian response does capture the large features of the actual response , although most noticeable in these snapshots is the large exaggeration of the quasi - gaussian response calculated from the first - order reduced system , which predicts a much larger off - diagonal response than what is observed .the possible reason for that is that the distribution densities of the first - order reduced model are more strongly non - gaussian than those of the zero - order reduced model , while the time autocorrelation functions are more weakly decaying ( see figure [ ddfs ] ) .it was observed previously in that in these conditions the quasi - gaussian linear response approximation tends to overshoot the off - diagonal response by a large margin . 
in other words ,the better precision of the quasi - gaussian linear response of the zero - order model is the result of mutual cancellation of the two errors : first one is the error in the distribution density of the zero - order reduced model ( significantly more gaussian than in in the multiscale dynamics ) , while the second one is the error in the quasi - gaussian linear response due to non - gaussianity of the statistical state ( less in the case of the zero - order model ) .the relative error and cosine similarity are measured against the multiscale ideal response and can be seen for heaviside step forcing in figure [ fdt_ideal_error ] and for ramp forcing in figure [ fdt_ideal_ramp_error ] .the ideal response of the first - order reduced system is clearly the best of the four responses at capturing response in the slow variables .it is interesting , but perhaps not too surprising , that the least accurate estimate is given by the fdt response of the first - order reduced system .this should be expected since the quasi - gaussian approximation is only valid for well - mixing systems whose distribution densities are close to gaussian , which in particular is the case for the uncoupled lorenz 96 systems in a chaotic regime .however , such a system exhibits suppressed chaos when coupled to another chaotic systems , and the resulting distribution density will be far from gaussian .since the first - order reduced system matches more closely the multiscale system , and the zero - order system will behave as an uncoupled lorenz 96 system , the first - order system will be less chaotic and will be a poor candidate for the quasi - gaussian fdt response .in fact , for non - chaotic regimes , as in the case of where spatially periodic solutions emerge in the two - scale and first - order reduced systems , the long - time covariance matrix will be singular , so the quasi - gaussian response as presented will not be applicable. for further reading on blended fdt responses which might be more effective in these cases , see .in this work we studied the response to small external perturbations of multiscale dynamics and their reduced models for slow variables only . we elucidated a set of criteria for statistical properties of the multiscale and reduced systems which facilitated similarity of responses of both systems to small external perturbations .it was shown that the similarity of marginal distribution densities of slow variables and their time autocorrelation functions controlled the similarity of responses to small external perturbations of both systems . 
like in , here we demonstrated that including a first - order correction term to a standard closure approximation for a nonlinear chaotic two - time system offered distinct improvements over the zero - order closure in capturing large - scale features of the slow dynamics .in particular , this reduced system was able to accurately capture the distribution density of solutions as well as the mean state response of the system to simple forcing perturbations .this correction term was relatively easy to generate , requiring only simple statistical calculations of the uncoupled fast system for an appropriate set of fixed parameters , and the resulting reduced system required much less computational resources than the corresponding multiscale system .focusing on the mean state linear response of the slow variables , we showed that forcing perturbations in the reduced systems have similar responses as in the two - time system .furthermore , we showed that using the unperturbed dynamics of the reduced systems for linear response prediction is also possible .however , in the parameter regimes we present here the first - order reduced systems are not rapidly mixing and do not follow a gaussian distribution , but the zero - order reduced systems do have these properties , so this fluctuation - dissipation response is effective only using the zero - order system . a linear response method which takes into account the non - gaussianity of the invariant statistical state ( such as the blended response algorithm , based on the tangent map linear response approximation )is apparently needed to capture the response for strongly non - gaussian dynamical regimes in reduced models . herethe linear response closure derivation and numerical results have been presented only for the special case of linear coupling between slow and fast systems , but this derivation has been extended to systems with nonlinear and multiplicative coupling . in future workwe hope to extend similar results to these more general systems and to test the robustness of this method for application to a large variety of problems. * acknowledgment .* rafail abramov was supported by the national science foundation career grant dms-0845760 , and the office of naval research grants n00014 - 09 - 0083 and 25 - 74200-f6607 .marc kjerland was supported as a research assistant through the national science foundation career grant dms-0845760 .g. papanicolaou .introduction to the asymptotic analysis of stochastic equations . in r.diprima , editor , _ modern modeling of continuum phenomena _ , volume 16 of _ lectures in applied mathematics_. american mathematical society , 1977 . | in real - world geophysical applications ( such as predicting the climate change ) , the reduced models of real - world complex multiscale dynamics are used to predict the response of the actual multiscale climate to changes in various global atmospheric and oceanic parameters . however , while a reduced model may be adjusted to match a particular dynamical regime of a multiscale process , it is unclear why it should respond to external perturbations in the same way as the underlying multiscale process itself . in the current work , the authors study the statistical behavior of a reduced model of the linearly coupled multiscale lorenz 96 system in the vicinity of a chosen dynamical regime by perturbing the reduced model via a set of forcing parameters and observing the response of the reduced model to these external perturbations . 
comparisons are made to the response of the underlying multiscale dynamics to the same set of perturbations . additionally , practical viability of linear response approximation via the fluctuation - dissipation theorem is studied for the reduced model . multiscale dynamics ; reduced models ; response to external forcing . * ams subject classifications . * 37m , 37n |
we consider a multiple testing scenario encountered in many current applications of statistics .given a large index set and a family of null hypotheses about the distribution of a high - dimensional random vector , we wish to design a procedure , basically a family of test statistics and thresholds , to estimate the subset over which the null hypotheses are false .we shall refer to as the `` active set '' and write for our estimator of based on a random sample of size from .the hypotheses in ( namely the ones for which the null is rejected ) are referred to as `` detections '' or `` discoveries . ''naturally , the goal is to maximize the number of detected true positives while simultaneously controlling the number of false discoveries . there are two widely used criteria for controlling false positives : + * fwer : * assume that is defined on the probability space . the family - wise error rate ( fwer ) is which is the probability of making at least one false discovery .this is usually controlled using bonferroni bounds and their refinements , or using resampling methods or random permutation . + * fdr : * the false discovery rate ( fdr ) is the expected ratio between the number of false alarms and the number of discoveries .+ in many cases , including the settings in computational biology which directly motivate this work , we find , as well as small `` effect sizes . ''this is the case , for example , in genome - wide association studies ( gwas ) where and the dependence of the `` phenotype '' on the `` genotype '' is often assumed to be linear ; the active set are those with non - zero coefficients and effect size refers to the fraction of the total variance of explained by a particular . under these challenging circumstances , the fwer criterion is usually very conservative and power is limited ; that is , number of true positive detections is often very small ( if not null ) compared to ( the `` missing heritability '' ) .this is why the less conservative fdr criterion is sometimes preferred : it allows for a higher number of true detections , but of course at the expense of false positives . however , there are situations , such as gwas , in which this tradeoff is unacceptable ; for example , collecting more data and doing follow - up experiments may be too labor intensive or expensive , and therefore having even one false discovery may be deemed undesirable . to set the stage for our proposal , suppose we are given a family of test statistics and can assume that deviations from the null are captured by small values of ( e.g. , p - values ) .we make the usual assumption , easily achieved in practice , that the distribution of does not depend on when , and individual rejection regions are of the form for a constant independent of . defining , the bonferroni upper - bound is to ensure that , is selected such that whenever .the bonferroni bound can only be marginally improved ( see , in particular estimator , which will be referred to as bonferroni - holm in the rest of the paper ) in the general case . while alternative procedures ( including permutation tests ) can be designed to take advantage of correlations among tests ,the bound is sharp when and tests are independent .* coarse - to - fine testing : * clearly some additional assumptions or domain - specific knowledge is necessary to ameliorate the reduction in power resulting from controlling the fwer . motivated by applications in genomics , we suppose the set has a natural hierarchical structure . 
in principle , it should then be possible to gain power if the active hypotheses are not randomly distributed throughout but rather have a tendency to cluster within cells of the hierarchy .in fact , we shall consider the simplest example consisting of only two levels corresponding to individual hypotheses indexed by and a partition of into non - overlapping subsets , which we call `` cells . ''we will propose a particular multiple testing strategy which is coarse - to - fine with respect to this structure , controls the fwer , and whose power will exceed that of the standard bonferroni - holm approach for typical models and realistic parameters when a minimal degree of clustering is present .it is important to note that clustering property is not a condition for a correct control of the fwer at a given level using our coarse - to - fine procedure , but only for its increased efficiency in discovering active hypotheses .our estimate of is now based on two families of test statistics : , as above , and .the cell - level test is designed to assume small values only when is `` active , '' meaning that .our estimator of is now one theoretical challenge of this method is to derive a tractable method for controlling the fwer at a given level .evidently , this method can only out - perform bonferroni if ; otherwise , the coarse - to - fine active set is a subset of the bonferroni discoveries .a key parameter is , an upper bound on the number of active cells , and in the next section we will derive an fwer bound under an appropriate compound null hypothesis . the main results of the paper are in the ensuing analysis for different models for . in each case , the first objective is to compute for a given and and the second objective is to maximize the power over all pairs which satisfy .the smaller our upper bound on , the stronger is the clustering of active hypotheses in cells and the greater is the gain in power compared with the bonferroni bound . in particular , as soon as , the coarse - to - fine strategy will lead to a considerably less conservative score threshold for individual hypotheses relative to the bonferroni estimate and the coarse - to - fine procedure will yield an increase in power for a given fwer .again , our assumptions about clustering are only expressed through an upper bound on ; no other assumptions about the distribution of are made and the fwer is controlled in all cases .the main technical difficulty arises from the correlation between the corresponding test statistics .this must be taken into account since it increases the likelihood of an individual index being falsely declared active when the cell that contains it is falsely discovered ( survivorship bias ) .more specifically , we require sharp estimates of quadrant probabilities under the _ joint distribution _ of and when , the cell containing , is inactive .all these issues will be analyzed in two cases .first , we will consider the standard linear model with gaussian data . 
in this case is expressed in terms of centered chi - square distributions and the power is expressed in terms of non - centered chi - square distributions .the efficiency of the coarse - to - fine method in detecting active hypotheses will depend on effect sizes , both at the level of cells and individual , among other factors .a non - parametric procedure will then be developed in section [ sec:5 ] based on generalized permutation testing and invariance assumptions .finally , we shall derive a high - confidence upper bound on based on a martingale argument .extensive simulations comparing the power of the coarse - to - fine and bonferroni - holm appear throughout .* applications and related work : * as indicated above , our work ( and some of our notation ) is inspired by statistical issues arising in gwas and related areas in computational genomics . in the most common version of gwas , the `` genotype '' of an individualis represented by the genetic states at a very large family of genomic locations ; these variations are called single nucleotide polymorphisms or snps . in any given studythe objective is to find those snps `` associated '' with a given `` phenotype '' , for example a measurable trait such as height or blood pressure .the null hypothesis for snp is that and are independent r.v.s , and whereas may run into the millions , the set of active variants is expected to be fewer than one hundred .( ideally , one seeks the `` causal '' variants , an even smaller set , but separating correlation and causality is notoriously difficult . )control of the fwer is the gold standard and the linear model is common .if the considered variants are confined to coding regions , then the set of genes provides a natural partition of ( and the fact that genes are organized into pathways provides a natural three - level hierarchy ) another application of large - scale multiple testing is variable filtering in high - dimensional prediction : the objective is to predict a categorical or continuous variable based on a family of potentially discriminating features . learning a predictor from i.i.d .samples of is often facilitated by limiting _ a priori _ the set of features utilized in training to a subset determined by testing the features one - by - one for dependence on and setting a signficance threshold . in most applications of machine learning to artificial perception , no premium is placed on pruning to a highly distinguished subset ; indeed , the particular set of selected features is rarely examined or considered of significance .in contrast , the identities of the particular features selected and appearing in decision rules are often of keen interest in computational genomics , e.g. , discovering cancer biomarkers , where the variables represent `` omics '' data ( e.g. , gene expression ) , and codes for two possible cellular or disease phenotypes .obtaining a `` signature '' devoid of false positives can be beneficial in understanding the underlying biology and interpreting the decision rules . in this case the gene ontology ( go ) provides a very rich hierarchical structure , but one example being the organization of genes in pathways .indeed , building predictors to separate `` driver mutations '' from `` passenger mutations '' in cancer would appear to be a promising candidate for coarse - to - fine testing due to the fact that drivers are known to cluster in pathways .there is a literature on coarse - to - fine pattern recognition ( see , e.g. 
, and the references therein ) , but the emphasis has traditionally been on computational efficiency rather than error control .computation is not considered here . moreover , in most of this work , especially applications to vision and speech , the emphasis is on detecting true positives ( e.g. , patterns of interest such as faces ) at the expense of false positives .simply `` reversing '' the role of true positives and negatives is not feasible due to the loss of reasonable invariance assumptions ; in effect , every pattern of interest is unique .finally , in , a hierarchical testing approach is used in the context of the fwer .however , the intention is to improve the power of detection relative to the bonferroni - holm methods only at level of clusters of hypotheses ; in contrast to our method , the two approaches have comparable power at the level of individual hypotheses .* organization of the paper : * the paper is structured as follows : in section [ sec:2 ] we present a bonferroni - based inequality that will be central for controlling the fwer using the coarse - to - fine method in different models . in section [ sec:3 ] will consider a parametric model that will illustrate precisely the way we control the fwer at a fixed level and permit a power comparison between coarse - to - fine and bonferroni - holm .we then propose a non - parametric procedure in section [ sec:5 ] under general invariance assumptions .a method for estimating an upper bound on the number of active cells and incorporating it into the testing procedure without violating the fwer constraint is derived in section [ sec:6 ] .finally , some concluding remarks are made in the discussion .the finite family of null hypotheses will be denoted by , where is either true or false .we are interested in the active set of indices , and will write for the set of inactive indices. suppose our data takes values in .the set is commonly designed based on individual rejection regions , with as indicated in the previous section , in the conservative bonferroni approach , the is controlled at level by assuming .if the rejection regions are designed so that this probability is independent of whenever , then the condition boils down to for .generally , for a constant for some family of test statistics .while there is not much to do in the general case to improve on the bonferroni method , it is possible to improve power if is structured and one has prior knowledge about way the active hypotheses are organized relative to this structure . in this paper, we consider a coarse - to - fine framework in which is provided with a partition , so that , where the subsets ( which we will call cells ) are non - overlapping . for , we let denote the unique cell that contains it .the `` coarse '' step selects cells likely to contain active indices , followed by a `` fine '' step in which a bonferroni or equivalent procedure is applied only to hypotheses included in the selected cells .more explicitly , we will associate a rejection region to each and consider the discovery set we will say that a cell is active if and only if , which we shall also express as , implicitly defining as the logical and " of all . 
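A minimal sketch of the two-step discovery set just defined, assuming (as for p-values) that small values of the statistics indicate deviation from the null; the names x_thr and y_thr stand for the index- and cell-level constants whose choice is the subject of the next sections, and the data structures are ours.

```python
def coarse_to_fine_discoveries(index_stats, cell_stats, cell_of, x_thr, y_thr):
    """Two-level discovery set: keep cells whose statistic falls in the cell-level
    rejection region, then apply the index-level test only inside the kept cells.
    index_stats and cell_stats are dicts {index: value} and {cell: value};
    cell_of maps each index to the unique cell that contains it."""
    selected_cells = {c for c, t in cell_stats.items() if t <= y_thr}
    return {i for i, s in index_stats.items()
            if cell_of[i] in selected_cells and s <= x_thr}
```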
we will also consider the double null hypothesis of belonging in an inactive cell ( which obviously implies that is inactive too ) , and we will let be the set of such s .let denote the size of the largest cell in and be the number of active cells .we will develop our procedure under the assumption that is known , or , at least bounded from above . while this can actually be a plausible assumption in practice, we will relax it in section [ sec:5 ] in which we will design a procedure to estimate a bound on . then under these assumptions we have the following result : [ prop : first.bound ] with defined by : this is just the bonferroni bound applied to the decomposition so that and the proposition results from and .the sets and will be designed using statistics and setting ] for some constants and , and assuming that the distribution of ( resp . ) is independent of for ( resp . ) . letting for and for , the previous upper bound becomes in the following sections our goal will be to design and such that this upper bound is smaller than a predetermined level .controlling the second term will lead to less conservative choices of the constant ( compared to the bonferroni estimate ) , as soon as ( or if all cells have comparable sizes ) .depending on the degree of clustering , the probability of false detection in the two - step procedure can be made much smaller than without harming the true detection rate and the coarse - to - fine procedure will yield an increase in power for a given fwer .we require tight estimates of and taking into account the correlation between and is necessary to deal with `` survivorship bias . ''in this section , the observation is a realization of an i.i.d .family of random variables where the s are real - valued and the variables is a high - dimensional family of variables indexed by the set .we assume that the distribution of , are independent and centered gaussian , with variance , and that where are i.i.d .gaussian with variance and , are unknown real coefficients. we will denote by the vector and by where is the vector composed by ones repeated times .we also let and , so that finally , we will denote by the common variance of and assume that it is known ( or estimated from the observed data ) . for ,we denote by the orthogonal projection on the subspace spanned by the two vectors and .we will also denote by ( ) the orthogonal projection on the subspace spanned by the vectors , , and .the scores at the level and level will be respectively : and ( the projections are simply obtained by least - square regression of on , for and on for . ) we now provide estimates of for and and for .note that , because we consider residual sums of squares , we here use large values of the scores in the rejection regions ( instead of small values in the introduction and other parts of the paper ) , hopefully without risk of confusion .[ prop:1 ] for all and : where is the cdf of a distribution evaluated at and : moreover where is the c.d.f . of a chi - squared distribution with degrees of freedom . 
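Before turning to the proof, it may help to write the cell- and index-level scores in numerical form. They are built from least-squares regressions of the response on a single column (index level) and on all columns of a cell jointly (cell level); the sketch below assumes the score is the reduction in residual sum of squares normalised by the known noise variance, which is consistent with the chi-square null distributions quoted in the proposition, but the exact normalisation used by the authors may differ.

```python
import numpy as np

def rss(Y, X=None):
    """Residual sum of squares of the least-squares fit of Y on X
    (intercept only when X is None)."""
    cols = [np.ones(len(Y))] if X is None else [np.ones(len(Y)), X]
    M = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(M, Y, rcond=None)
    r = Y - M @ beta
    return float(r @ r)

def score(Y, X, sigma2):
    """Reduction in residual sum of squares, scaled by the known noise variance.
    With q regressors this is chi-square(q) distributed under the null,
    in line with the distributions appearing in Proposition 1."""
    return (rss(Y) - rss(Y, X)) / sigma2

# index-level score S_i:  score(Y, X[:, [i]], sigma2)
# cell-level  score R_C:  score(Y, X[:, list(C)], sigma2)
```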
for and , we can write because and .consider the conditional probability : the conditional distribution of given is gaussian ( where is the -dimensional identity matrix ) .denote by the projection on the orthogonal complement of in and by the projection on the orthogonal complement of in , so that and this implies that : at this stage , applying cochran s theorem to and , which are conditionally independent given , reduces the problem to finding an upper bound for : where is and is , and the two variables are independent .let us write this probability as which is less than : ( here , refers to the expectation with respect to . ) consider the first term in the sum : . this term can be re - written as : at this stage , we will use the following tail inequality for random variables : for any .we apply this result to and to get the upper bound : since the density of a is proportional to , the term in will cancel in the last integral ( expectation ) . using a simple change of variables in the remaining integral , we have as a final upper bound : where is the cdf of a beta(a , b ) evaluated at .+ the second upper - bound , for , is easily obtained , the proof being left to the reader .this leads us immediately to the following corollary : [ cor:2 ] with the thresholds and , an upper bound of the fwer is : figure [ fig : level.curves ] provides an illustration of the level curves associated to the above fwer upper bound .more precisely , it illustrates the tradeoff between the conservativeness at the cell level and the individual index level . in the next section , the optimization for power will be made along these level lines .figure 1 also provides the value of the bonferroni - holm threshold . for the coarse - to - fine procedure to be less conservative than the bonferroni - holm approach , we need the index - level threshold to be smaller , i.e. , the optimal point on the level line to be chosen below the corresponding dashed line .level curves of the upper bound of the fwer for the levels 0.2 ( blue ) , 0.1 ( green ) and 0.05 ( red ) .the horizontal dashed lines represent the thresholds at the individual level for a bonferroni - holm test , with corresponding colors . ] the derivation of is based on the assumption that we have a fixed cell size ( across all the cells ) , which is not needed . in the case where the size of the cell is varying , it is easy to generalize the previous upper bound . 
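To complement the analytic bound of the corollary, the joint quadrant probability it controls can also be checked by simulation. The sketch below estimates the probability that both the index-level and the cell-level scores exceed their thresholds when the cell is inactive, which is exactly the correlated event behind the survivorship bias discussed above; it reuses the score() helper sketched earlier, and the sample size, cell size and thresholds are arbitrary illustrative values.

```python
import numpy as np

def mc_quadrant_probability(x_thr, y_thr, n=200, m=8, sigma2=1.0,
                            reps=5000, seed=0):
    """Monte Carlo estimate of P(S_i > x_thr, R_C > y_thr) when the cell C
    containing index i is inactive (the response carries no signal from C).
    Uses score() from the previous sketch."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        X = rng.standard_normal((n, m))           # one cell of m inactive columns
        Y = np.sqrt(sigma2) * rng.standard_normal(n)
        S_i = score(Y, X[:, [0]], sigma2)         # index-level score of column 0
        R_C = score(Y, X, sigma2)                 # cell-level score of the whole cell
        hits += (S_i > x_thr) and (R_C > y_thr)
    return hits / reps
```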
letting it suffices to replace in with where does not depend on the cell .equation provides a constraint on the pair to control the fwer at a given level .we now show how to obtain `` optimal '' thresholds that maximize discovery subject to this constraint .the discussion will also help understanding how active indices clustering in cells improve the power of the coarse - to - fine procedure .the conditional distribution of given is with .it follows from this that , conditionally to these variables , follows a non - central chi - square distribution , with where .using the fact that converges to we will work with the approximation with a similar analysis , and letting for , , we will assume that with we now have the simple lemma [ prop:3 ] if and , then for all such that , i=1,2 this is based on the inequality , valid for : which implies as soon as , and on the simple lower - bound this proposition can be applied , in our case , to and .more concretely , we fix a target effect size ( the ratio of the effect of compared to the total variance of ) , and a target cluster size , , that represents the number of active loci that we expect to find in an active cell , and we take and to optimize the upper - bound in subject to the fwer constraint and and to find optimal constants for this target case .this is illustrated with numerical simulations in the next section .figures [ fig : digraph.1 ] compares the powers of the coarse - to - fine procedure and of the bonferroni - holm procedure under the parametric model described in the section .comparison of the power of different methods .we compare the detection rate of an individual active index for different number of active indices in the cell containing that index .the coarse - to - fine method is more powerful when the number of active indices is two or greater .this confirms the intuition that the more the clustering assumption is true , the more powerful is the coarse - to - fine method compared to bonferoni - holm approcach . ]the parameters chosen in our simulations were taken with our motivating application ( to gwas ) in mind .thinking of as a phenotype , and as a set of snp s , we assimilate cells to genes .we used and .the true number of active variables is 50 with a corresponding coefficient for each of them , and we generate the data according to the linear model described in this section with a variance noise that is equal to 10 .we assumed that we knew an upper bound for the number of active sets ( this assumption is relaxed in section [ sec:6 ] ) . to compute the optimal thresholds , some values for and have to be chosen ( this should not be based on observed data , since this would invalidate our fwer and power estimates ) . in our experiments , we optimize the upper bound on the probability for an active variable to be detected in an active cell by choosing and .this corresponds to an `` almost non noisy case '' where the effect size of the `` gene '' is two times the effect size of the `` snp '' .recall that denotes the random variable representing all the data , taking values in .we will build our procedure from user - defined _ scores _ , denoted ( at the locus level ) and ( at the cell level ) , both defined on , i.e. 
, functions of the observed data .moreover , we assume that there exists a group action of some group on , which will be denoted for example , if , like in the previous paragraph , one takes and , we will take to be the permutation group of with to simplify the discussion , we will assume that is finite and denote by the uniform probability measure on , so that we note , however , that our discussion remains true if is a compact group , the right - invariant haar probability measure on and are continuous in .our running assumption will be that , 1 . for any , the joint distribution of is independent of .2 . for any , the joint distribution of is independent of .we will also use the following well - known result . [lem : basic ] let be a random variable and let denote the left limit of its cumulative distribution function , i.e. , .then , for ] .as mentioned above , this result does not have practical interest since it requires applying all possible permutations to the data . in practice ,a random subset of permutations is picked instead , and we will develop the related theory in the next section ( using these inequalities as intermediary results in our proofs ) .we now replace , and with monte - carlo estimates and describe how the upper bounds in theorem [ th : ideal ] need to be modified . for a positive integer , we let denote the -fold product measure on , whose realizations are independent group elements . we will use the notation and to denote probability or expectation for the joint distribution of and ( i.e. , ) .we will also denote by the empirical measure with this notation , we let and where we can now define and state : [ th : sample ] making the continuous approximation described in remark [ rem:1 ] , the following holds . for , and , for and , [ cor : sample ] the fwer for the randomized test is controlled by we start with which is simpler and standard .let .conditionally to , follows a binomial distribution ( with defined by equation ) , so that } } \binom{k}{j } \mathbb e\left(t_v({\mathbf u})^j(1-t_v({\mathbf u}))^{k - j}\right)\ ] ] where } ] conditionally to , so that is uniformly distributed on ] , we define .let be the set of active cells , and .note that .then , for , we therefore have , for , where is a centered random variable . the following proposition states that the process for has the covariance structure of a brownian bridge .+ [ prob : cov ] under assumptions a1 to a3 , we have , for , since , one has if , then , and for : finally , from , we get which concludes the proof .we now make a gaussian approximation for large of the vector with , with .[ prop : clt ] using the previous notation , when diverges to infinity , where is the covariance matrix with entries : our assumptions ensure that satisfies a central limit theorem conditionally to , with a limit , that is independent of the value of .this implies that the limit is also unconditional .we are now able to present our principal result which provides a high - probability upper bound for .[ th : j ] let be a randomization variable , independent of . for and ,define then where . as a consequence ,given , let . then is such that ( we use a randomly sampled value of ) .let us now prove this result .we know from proposition [ prop : clt ] that the vector where then , since min is a continuous function on , we deduce that : but : the process , is a martingale and is a submartingale for all s . 
applying doob s inequality ,then optimizing over s finally gives : it remains to prove that but : then : solving the quadratic inequality for , one finds that is equivalent to , which completes the proof .the previous section provided us with an estimator in such that with probability larger than , which implies that with probability at least .we previously chose constants and by optimizing the detection rate on a well - chosen alternative hypothesis subject to the upper - bound being less than a significance level .this was done using a deterministic upper - bound of , but can not be directly applied with a data - based estimation of since this would yield data - dependent constant and , which can not be plugged into the definition of the set without invalidating our estimation of the fwer . in other terms , if , for a fixed number , one defines to be the discovery set obtained by optimizing and subject to , our previous results imply that for all , but not necessarily that .a simple way to address this issue is to replace with because with probability at least , we have so that controls the fwer at level as intended .we check that conditions a1 and a2 are satisfied for the two situations that we consider in this paper . in the example from section [ sec:3 ] , we can take ( using the same notation and introducing the c.d.f . of a chi - square distribution ) ( recall that is the orthogonal projection on the space generated by and .we also let be the empirical variance of . )note that the conditional distribution of given is always uniform over $ ] and therefore does not depend on , which proves that and are independent .similarly , taking , and are conditionally independent given ( because and are independent ) .but and being conditionally independent given and each of them independent of implies that the three variables are mutually independent . + the same argument can be applied to the non - parametric case , when ( now using notation from that section ) one assumes that scores are such that , and uses to simplify the discussion the statistic assuming , in addition , the following .if we denote by the space where the random variable takes its values , there exists a group , a group isomorphism between and and a group action of on that we will denote by satisfying the two following conditions : * the distribution of is invariant under the action .* for example , for permutation tests , the group is simply the group of permutations itself .the isomorphism is the inverse map .the group action is just the permutation of the observations .finally , can be any score that is symmetric with respect to the observations . 
+ assuming these conditions, one can immediately apply lemma to conclude .given a partition of the space of hypotheses , the basic assumption which allows the coarse - to - fine multiple testing algorithm to obtain greater power than the bonferroni - holm approach at the same fwer level is that the distribution of the numbers of active hypotheses across the cells of the partition is non - uniform .the gap in performance is then roughly proportional to the degree of skewness .the test derived for the parametric model can be seen as a generalization to coarse - to - fine testing of the f - test for determining whether a set of coefficients is zero in a regression model ; the testing procedure derived for the non - parametric case is a generalization of permutation tests to a multi - level multiple testing .this scenario was motivated by the situation encountered in genome - wide association studies , where the hypotheses are associated with genetic variations ( e.g. , snps ) , each having a location along the genome , and the cells are associated with genes . in principle , our coarse - to - fine procedure will then detect more active variants to the extent that these variants cluster in genes .of course this extent will depend in practice on many factors , including effect sizes , the representation of the genotype ( i.e. , the choice of variants to explore ) as well as the phenotype , and complex interactions within the genotype .it may be very difficult and uncommon to know anything specific about the expected nature of the combinatorics between genes and variants . in some sense, `` the proof is in the pudding , '' in that one can simply try both the standard and coarse - to - fine approaches and compare the sets of variants detected .given tight control of the fwer , everything found is likely to be real .indeed , the analytical bounds obtained here make this comparison possible , at least under linear model commonly used in gwas and in a general non - parametric model under invariance assumptions .looking ahead , we have only analyzed the coarse - to - fine approach for the simplest case of two - levels and a true partition , i.e. , non - overlapping cells .the methods for controlling the fwer for both the parametric and non - parametric cases generalize naturally to multiple levels assuming nested partitions .the analytical challenge is to generalize the coarse - to - fine approach to overlapping cells , even for two levels : while our methods for controlling the fwer remain valid , they are likely to become overly conservative if cell overlap .this case is of particular interest in applications , where genes are grouped into overlapping `` pathways . '' for example , in `` systems biology , '' cellular phenotypes , especially complex diseases such as cancer , are studied in the context of these pathways and mutated genes and other abnormalities are in fact known to cluster in pathways ; indeed , this is the justification for a pathway - based analysis .hence the clustering properties may be stronger for variants or genes in pathways than for variants in genes .michael ashburner , catherine a ball , judith a blake , david botstein , heather butler , j michael cherry , allan p davis , kara dolinski , selina s dwight , janan t eppig , et al .gene ontology : tool for the unification of biology ., 25(1):2529 , 2000 . 
we analyze control of the familywise error rate ( fwer ) in a multiple testing scenario with a great many null hypotheses about the distribution of a high - dimensional random variable among which only a very small fraction are false , or `` active '' . in order to improve power relative to conservative bonferroni bounds , we explore a coarse - to - fine procedure adapted to a situation in which tests are partitioned into subsets , or `` cells '' , and active hypotheses tend to cluster within cells . we develop procedures for a standard linear model with gaussian data and a non - parametric case based on generalized permutation testing , and demonstrate considerably higher power than bonferroni estimates at the same fwer when the active hypotheses do cluster . the main technical difficulty arises from the correlation between the test statistics at the individual and cell levels , which increases the likelihood of a hypothesis being falsely discovered when the cell that contains it is falsely discovered ( survivorship bias ) . this requires sharp estimates of certain quadrant probabilities when a cell is inactive .
coherence is ultimately the most distinctive feature of quantum systems .finding a proper measure of the coherence present at different times scales in a quantum dynamical system is the first essential step for assessing the role of quantum interference in natural and artificial processes .this is of particular relevance for those quantum evolutions in which information or energy are transformed and transferred in order to achieve a given task with high efficiency . in this contextthe relevant questions are : how much and what kind of coherence is created vs destroyed by the dynamical evolution ? how does coherence determine / enhance the performance of the given process ?these are in general difficult questions and to be answered they require an appropriate and sufficiently comprehensive framework . a general and fundamental formalism to describe quantum interference is provided by the decoherent histories ( dh ) approach to quantum mechanics .dh have mainly found applications to foundational issues of quantum mechanics such as the formulation of a consistent framework to describe closed quantum systems , the emergence of classical mechanics from a quantum substrate , the solution of quantum paradoxes , decoherence theory , quantum probabilities . however , dh can also be a systematic tool for quantifying interference in quantum processes , and discussing its relevance therein . indeed ,dh provide a precise mathematical formalization of interference by means of the the so called _ decoherence matrix _ .the latter is built on the elementary notion of _ histories _ and allows one to describe the quantum features vs the classical ones in tems of interference between _ histories _ , or _ pathways _ if one resorts to the mental picture of the double slit experiments .it is however difficult to quantify in a compact and meaningful way the content of and its implications for the dynamics of specific systems .our first main goal is therefore to define and test appropriate measures allowing for the investigation of how interference can determine the performance of a given quantum information processing task .starting from and by its sub - blocks we define different functionals .in particular , we introduce a global measure of coherence able to describe the coherence content of a general quantum evolution at its various time scales ; an average ( over different time - scales ) measure of coherence ; and average mesure of interference between histories leading to a specific output . while the tools we introduce are of general interest and application , in order to test them we apply them to a specific but relevant instance of quantum dynamics taking place in photosynthetic membranes of bacteria and plants : quantum energy transport . herethe basic common mechanism is the following : a quantum excitation is first captured by the system and then migrates through a network of sites ( chromophores ) towards a target site , e.g. 
, a reaction center , where the energy is transformed and used to trigger further chemical reactions .there is now an emerging consensus that efficient transport in natural and biologically - inspired artificial light - harvesting systems builds on a finely tuned balance of quantum coherence and decoherence caused by environmental noise , a phenomenon known as environment - assisted quantum transport ( enaqt ) .this paradigm has emerged with clarity in recent years , as modern spectroscopic techniques first suggested that exciton transport within photosynthetic complexes might be coherent over appreciable timescales .indeed , a growing number of experiments has provided solid evidence that coherent dynamics occurs even at room temperature for unusually long timescales ( of the order of ) .efforts to describe these systems have led to general models of enaqt , depicting the complex interplay of three key factors : coherent motion , i.e. , quantum delocalization of the excitation over different sites , environmental decoherence , and localization caused by a disordered energy landscape .so far , the presence of coherence in light - harvesting systems has been qualitatively associated to the observation of distinctive ` quantum features ' .originally , coherence was identified with ` quantum wavelike ' behavior as reflected by quantum beats in the dynamics of chromophore populations within a photosynthetic complex .later works , employing quantum - information concepts and techniques , have switched attention towards quantum correlations between chromophores , in particular quantum entanglement .besides being open to criticism ( see , e.g. , ) , these approaches do not provide direct quantitative measures of coherence in the presence of noise .therefore , in what follows , we shall apply the novel tools based on dh to a simple yet fundamental model of quantum energy transfer .we will focus on a relevant trimeric subunit of the fenna - matthews - olson ( fmo ) complex , the first pigment - protein complex to be structurally characterized .the trimer is virtually the simplest paradigmatic model retaining the basic charcteristics of a disordered transfer network and it can also be conceived as an essential building block of larger networks . for simplicity , we will use the well - known haken - strobl model to describe the interplay between hamiltonian and dephasing dynamics .while the model is an oversimplified description of the actual dynamics taking place in real systems , it allows to spot out the essential features that may determine the high efficiency of the transport .we shall initially focus on a new coherence measure , based on the decoherence matrix , and characterize its behavior verifying that it can consistently identify the bases and timescales over which quantum coherent phenomena are present during the evolution of the system .we shall then show how the average coherence exhibited on those time scales can be connected with the delocalization process .a more detailed analysis will be aimed at distinguishing between constructive and destructive interference affecting the histories ending at the site where the excitation exits the photosynthetic structure . 
by using the decoherence functional, we will show that the beneficial role of dephasing for the transport efficiency lies in a selective suppression of destructive interference , a fact that has been systematically suggested in the literature , but never expressed within a general and comprehensive framework that allows the quantitative evaluation of coherence and its effects .the application of the introduced tools and methods based on dh to a simple yet paradigmatic system shows how one can properly quantify the coherence content of a complex quantum dynamics and elucidate the role of coherence in determining the overall efficiency of the process .the paper is structured as follows . in section [ sec : decoherent - histories ] we review the basic decoherent histories formalism . in section[ sec : the - coherence - measure ] we define the measure of coherence and describe its meaning and properties . in section [ sec : trimer ]we first introduce the used model for describing the energy transport in the selected trimeric complex .we then discuss the coherence properties of the excitonic transport : by means of the appropriate measures based on the decoherent histories formalism we identify the essential features that may determine the high efficiency of the transport . in section [ sec : fmo ] we briefly discuss how to extend our results to the whole fmo complex . in section [ sec : conclusions ] we summarize our results an draw our conclusions .the formalism of decoherent ( or consistent ) histories was developed in slightly different flavors by griffiths , gell - mann , hartle and omns .dh provide a consistent formulation of quantum mechanics where probabilities of measurement outcomes are replaced by probabilities of _ histories_. in this formulation , external measurement apparatuses are not needed , and then one does not need to postulate a classical domain of observers . as a consequence ,quantum mechanics becomes a theory that allows the calculation of probabilities of sequences of events within any closed system , including the whole universe , without the necessity of invoking postulates about the role of measurement . in this framework ,the classical domain can be seen to emerge as the description of the system becomes more and more coarse - grained .the idea of ` histories ' stems from feynman s ` sum - over - histories ' formulation of quantum mechanics .as is known , any amplitude between an initial and a final state can be expressed as a sum over paths , or histories : upon inserting the identity decomposition at differerent times we get where we use the heisenberg notation .thus the total amplitude is decomposed as a sum of amplitudes , each one corresponding to a different _ history _ identified by a sequence of projectors .the decoherent histories formalism assumes that histories are the fundamental objects of quantum theory and gives a prescription to attribute probabilities to ( sets of ) histories .a history is defined as a sequence of projectors at times .probabilites can be assigned within exhaustive sets of exclusive histories , i.e. , sets of histories where subscripts label different alternatives at times .histories are exhaustive and exclusive in the sense that the projectors at each time satisfy relations of orthogonality , and completeness , . in other words ,the projectors define a projective measurement . 
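For concreteness, the following sketch spells out the sum-over-histories decomposition for the smallest possible example: a two-level system with one intermediate time, where inserting the projectors onto the basis states at the intermediate time splits the transition amplitude into two history amplitudes. The Hamiltonian and the times are arbitrary choices of ours.

```python
import numpy as np
from scipy.linalg import expm

# Two-level system with H = sigma_x (arbitrary units); histories in the basis {|0>, |1>}.
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
U = lambda t: expm(-1j * H * t)
t1, t2 = 0.4, 1.0
psi_i = np.array([1.0, 0.0], dtype=complex)      # start in |0>
psi_f = np.array([0.0, 1.0], dtype=complex)      # end   in |1>
basis = np.eye(2, dtype=complex)

# Total amplitude <psi_f| U(t2,0) |psi_i> ...
total = psi_f.conj() @ U(t2) @ psi_i
# ... equals the sum of one amplitude per history (projection onto |j> at time t1):
history_amps = [(psi_f.conj() @ U(t2 - t1) @ basis[:, j]) *
                (basis[:, j].conj() @ U(t1) @ psi_i) for j in range(2)]
assert np.isclose(total, sum(history_amps))

# The squared moduli of the two history amplitudes do not add up to |total|^2:
# the difference is the interference term discussed in the next paragraphs.
print(abs(total) ** 2, sum(abs(a) ** 2 for a in history_amps))
```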
within a specified set, any history can be identified with the sequence of alternatives realized at times .different alternative histories can be grouped together with a procedure called _ coarse - graining_. starting from histories and we can define a new , _ coarse - grained _history by summing projectors for all times such that and differ : for all . by iterating this procedure, one can obtain more and more coarse - grained histories .a special type of coarse - graining is the _ temporal coarse - graining _ : we group together histories such that such that at some time we have .then the coarse - grained history contains only one - projector ( equal to the identity ) at time , that can be neglected and hence removed from the string of projectors defining the history . on the other hand _ temporal fine - graining _ can be implemented for example by allowing different alternatives at a times . in particular, one can create new sets of histories from a given one by adding different alternatives at time ; the sets are fine grained versions of the sets .+ once we specify the initial state and the ( unitary ) time evolution , we can assign any history a _ weight _ ,\quad\mbox{with}\quad c_{{\mathbf{j}}}=p_{j_{n}}(t_{n})\dots p_{j_{1}}(t_{1})\ ] ] where we use the heisenberg notation .when the initial state is pure , and the final projectors are one dimensional , , this formula takes the simple form of a squared amplitude weights can not be interpreted as true probabilities , in general . indeed ,due to quantum interference between histories , the do not behave as classical probabilities .indeed , consider two exclusive histories and the relative coarse - grained history by : .if the were real probabilities , we would expect .instead , what we find is ).\ ] ] due to the non - classical term ) ] is the probability that the system is in at time .due to interference , the probability of being in at time is _ not _ simply the sum of probabilities of all alternative paths leading to , i.e , of all alternative histories with final projection . in formulas , the probability and the global interference of histories ending in can be thus expressed as with .destructive interference will happen when , constructive interference when .the decoherent histories formalism is consistent with and encompasses the model of environmentally induced decoherence . given a factorization of the hilbert space into a subsystem of interest and the rest ( environment ) , , the events of a history take the form where and are projectors onto hilbert subspaces of and respectively .histories for alone can be obtained upon considering appropriate coarse - grainings over the degrees of freedom of the environment , such that the events are where is the identity over . upon introducing the time - evolution propagator as ] evolves according to where is the ( non - unitary ) reduced propagator defined by =\tilde{\mathcal{k}}_{t\ t_{0}}[\tilde{\varrho}_s(t_{0 } ) ] .\end{small}\ ] ] if the evolution of the system and environment is markovian , we can write .as proved by zurek , under the assumption of markovianity we can rewrite the decoherence matrix in terms of reduced quantities alone , i.e. 
, quantities pertaining to the system only : \dots]\tilde{p}_{k_{n-1}}]\tilde{p}_{k_{n } } ] .\label{eq : reduceddec}\end{aligned}\ ] ] that is , the model of environmentally induced decoherence can be obtained by applying the decoherent histories formalism to system and environment together , and by coarse - graining over the degrees of freedom of the environment .the dh approach provides the most fundamental framework in which the transition from the quantum to the classical realm can be expressed .indeed , it is based on the most basic feature characterizing the quantum world : interference and the resulting coherence of the dynamical evolution . despite being a well developed field of study, the dh history approach lacks for a proper global measure of the coherence produced by the dynamics at the different time scales .we therefore introduce a measure that quantifies the global amount of coherence within a set of histories .assume projectors for all times are taken in a fixed basis , .assume further that histories are composed by taking equally spaced times between consecutive projections i.e. , .( in other words , histories correspond to projections applied in the same basis and repeated at regular times ) .for such a set of histories , consider the decoherence matrix \ ] ] where .take the von neumann entropy of the decoherence matrix , .\end{aligned}\ ] ] due to coherence between histories , differs from the ` classical - like ' shannon entropy of history weights where $ ] are the diagonal elements of , i.e. , the weights .the difference between the two quantities is wider if off - diagonal elements of the decoherence matrix are bigger , i.e. , if the set of histories is more coherent . let us define : we argue that is suitable to be used as a general measure of coherence within the set of histories defined by .indeed , we can readily prove the following properties : + i ) . is obvious . to prove ,let us define a matrix where off - diagonal entries are set to zero .since & = \sum_{{\mathbf{j}}}d_{{\mathbf{j}},{\mathbf{j}}}^{(n , p,\delta t)}\log d_{{\mathbf{j}},{\mathbf{j}}}^{(n , p,\delta t)}= \\ & = { \mbox{tr}}[\tilde{\mathcal{d}}^{(n , p,\delta t)}\log\tilde{\mathcal{d}}^{(n , p,\delta t)}]\end{aligned}\ ] ] we obtain that the numerator of ( [ eq : cfunction ] ) can be expressed as a quantum relative entropy : +{\mbox{tr}}[{\mathcal{d}}^{(n , p,\delta t)}\log{\mathcal{d}}^{(n , p,\delta t ) } ] \\ & = { \mbox{tr}}[\tilde{\mathcal{d}}^{(n , p,\delta t)}(\log\mathcal{d}^{(n , p,\delta t)}-\log\tilde{\mathcal{d}}^{(n , p,\delta t)})]\\ & = h(\mathcal{d}^{(n , p,\delta t)}||\tilde{\mathcal{d}}^{(n , p,\delta t)})\geq0\end{aligned}\ ] ] where is the relative entropy between and .+ ii ) iff , i.e. 
, vanishes if medium decoherence holds for the set of histories , since the two quantities and coincide in this case .thus is in essence a ( statistical ) distance between the decoherence matrix and the corresponding diagonal matrix , renormalized so that its value lies between and .the greater are the off - diagonal elements of , the greater the distance .the meaning of can be easily understood if we use the linear entropy , a lower bound to the logarithmic version : ,\\ \ & 1-h_{l}^{(c)}(p , n,\delta t)={\mbox{tr}}[(\tilde{{\mathcal{d}}}^{(n , p,\delta t)})^{2}].\end{aligned}\ ] ] in this case , we obtain a ` linear entropy ' proxy of as : which is a simplified version that , by avoiding the diagonalization of , helps containing the numerical complexity .the measure introduced is well grounded on physical considerations . in the followingwe will apply it to a simple system in order to check its consistency , and later use it to characterize the coherence properties of the evolutions induced by various regimes of interaction with the environment .first , one has to check whether the measure properly takes into account the action of the bath .in particular , if the bath is characterized by a decoherence time , it is known ( ) that on time scales the decoherence matrix becomes diagonal : the probability of a history at time can be fully determined by its probability at time , since no interference can occur between different histories . indeed , the action of the bath is to create a _decoherent set of histories _ that are defined by a proper projection basis : the pointer basis ( ) .therefore , the fine - graining procedure obtained by constructing a set of histories via the addition of a new complete set of projections in the same basis at time to the set , should leave the coherence functional invariant , i.e. , .if instead the same fine - graining procedure should lead to . before passing to analyze a specific system , we want to focus on the complexity of the evaluation of and .the dimension of the decoherent matrix grows with the dimension of the basis and the number of time instants that define each history as . this exponential growth in principle limits the application of the dh approach to small systems .however , as for the system considered in this paper the computational effort is contained due to the small number of subsystems ( chromophores ) and the small dimension of the hilbert space which is limited to the single - exciton manifold . as we shall see , by limiting the choice of to a reasonable number , the analysis can be fruitfully carried even on a laptop .we now start to analyze decoherent histories in simple models of energy transfer comprising a small number of chromophores ( sites ) . neglecting higher excitations, each site can be in its ground or excited state .we work in the single - excitation manifold , and define the site basis as i.e. , state represents the exciton localized at site . 
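As noted above, the computation is feasible for small systems. Before specialising to the transfer model, here is a compact numerical sketch of the two objects introduced in this section: the decoherence matrix for repeated projections in a fixed basis with Markovian propagation in between, built by alternating propagation and left/right projections as in the reduced expression quoted from Zurek, and the coherence functional with its linear-entropy proxy. Here prop stands for any reduced propagator over one projection interval (one is sketched for the trimer below), and the normalisation of the functional by the classical entropy is our reading of the definition. The exponential growth of the matrix with the number of projections is explicit.

```python
import numpy as np
from itertools import product

def decoherence_matrix(rho0, projs, prop, n_proj):
    """D[a, b] = Tr[ P_{a_n} K( ... P_{a_1} K(rho0) P_{b_1} ... ) P_{b_n} ] over all
    pairs of histories made of n_proj projections in the fixed basis 'projs', with the
    reduced propagator 'prop' applied before each projection.
    The number of histories is len(projs)**n_proj, so keep n_proj small."""
    histories = list(product(range(len(projs)), repeat=n_proj))
    D = np.zeros((len(histories), len(histories)), dtype=complex)
    for a, ha in enumerate(histories):
        for b, hb in enumerate(histories):
            r = rho0
            for ja, jb in zip(ha, hb):
                r = projs[ja] @ prop(r) @ projs[jb]
            D[a, b] = np.trace(r)
    return D

def coherence(D, linear=False):
    """C = (H_cl - H_q) / H_cl : gap between the Shannon entropy of the history
    weights (diagonal of D) and the von Neumann entropy of D itself;
    linear=True gives the purity-based proxy that avoids diagonalising D."""
    w = np.clip(np.real(np.diag(D)), 0.0, None)
    if linear:
        h_cl, h_q = 1.0 - np.sum(w ** 2), 1.0 - np.real(np.trace(D @ D))
    else:
        wn = w[w > 1e-12]
        h_cl = -np.sum(wn * np.log(wn))
        ev = np.linalg.eigvalsh((D + D.conj().T) / 2)
        ev = ev[ev > 1e-12]
        h_q = -np.sum(ev * np.log(ev))
    return (h_cl - h_q) / h_cl if h_cl > 0 else 0.0
```

For a unitary evolution with a pure initial state the matrix is rank one and the functional equals one (maximal coherence); for a fully decohered set it is diagonal and the functional vanishes, matching the two limiting properties stated above.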
on - site energies and couplingsare represented by a hamiltonian that is responsible for the unitary part of the dynamics .interaction with the environment is implemented by the haken - strobl model , that has been extensively used in models of enaqt .the effect of the environment is represented by a markovian dephasing in the site basis , expressed by lindblad terms in the evolution , as follows : +\sum_{i}\gamma_{i}[2l_{i}\varrhol_{i}^{\dag}-l_{i}^{\dag}l_{i}\varrho-\varrho l_{i}^{\dag}l_{i}]\label{eq : hakenstrobl}\ ] ] where are projectors onto the site basis , and are the ( local ) dephasing rates .furthermore , site can be incoherently coupled to an exciton sink , represented by a linblad term \ ] ] where and is the trapping rate .contrary to other works , we neglect exciton recombination , as it acts on much longer timescales than dephasing and trapping .+ the global evolution is markovian and can be represented by means of the liouville equation that can be simply solved by exponentiation , in the notation above , the propagator has the form .the efficiency of the transport can be evaluated as the leak of the population of the exit site towards the sink : the overall efficiency of the process is obtained by letting while its markovianity limits the faithful description of decoherence processes actually taking place in real photosynthetic systems , _ _ the model retains the basic and commonly accepted aspects of decoherence , that acts in the site basis : albeit in a complex non - markovian way , the protein enviroment measures the system locally ( i.e. , on each site ) , thus _ _ destroying the coherence in the site basis and creating it in the exciton basis . note that the formalism can also be applied to a ` dressed ' or polaronic basis where we include strong interactions between chromophores and vibrational modes .that is , to apply the dh method , one only needs a model in which an exciton hops between sites , dressed or undressed .the model is therefore suitable to readily implement the decoherent histories paradigm and to spot the main basic features we are interested in and that are at the basis of the success of enaqt .the fmo unit has chromophores and a complex energy and coupling landscape with no symmetries .energies and couplings ( i.e. , the hamiltonian ) can be obtained by different techniques : they can be extracted by means of 2d spectroscopy as in or computed through ab initio calculations as in , with similar but not exactly equal results .this very complex struxture makes fmo far from ideal as a first example to study .we thus prefer to start by working with a much simpler , yet fully relevant subsystem : the trimeric unit composed by the sites and of the fmo complex in the notation of ) .the first chromophore is the site in which the energy transfer begins , while the third chromophore is the site from which the excitation leaves the complex .the hamiltonian of the trimeric subunit is the eigenenergies of the system ar given by * * which yields the eigenperiods . due its structure ,the trimer is a chain composed by a pair of chromophores ( ) , degenerate in energy and forming a strongly coupled dimer , and a third chromophore moderately coupled with the second one only . since in the followingwe suppose that the exciton starts from site , we expect a prominent role of the dimer in the dynamics , at least in the first tens of femtoseconds . 
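A sketch of the Haken-Strobl evolution for the trimer, in the vectorised superoperator form used to exponentiate the Liouville equation. The site energies and couplings below are placeholders in arbitrary units, not the values of the FMO subunit quoted in the references; a fourth level plays the role of the sink attached to site 3, and the factor in the trapping operator is likewise a convention of this sketch. The transport efficiency is then the population accumulated in the sink, i.e. the integrated leak from the exit site.

```python
import numpy as np
from scipy.linalg import expm

def liouvillian(H, c_ops):
    """Row-major vectorised Lindblad generator: d vec(rho)/dt = L vec(rho),
    using vec(A rho B) = (A kron B^T) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for c in c_ops:
        cdc = c.conj().T @ c
        L += np.kron(c, c.conj()) - 0.5 * np.kron(cdc, I) - 0.5 * np.kron(I, cdc.T)
    return L

# three sites plus one sink level; energies/couplings are placeholders (arbitrary units)
H = np.zeros((4, 4), dtype=complex)
H[:3, :3] = np.array([[100.0, -90.0,   5.0],
                      [-90.0, 100.0,  30.0],
                      [  5.0,  30.0,   0.0]])
gamma, gamma_trap = 10.0, 1.0
site = lambda i: np.diag(np.eye(4)[i]).astype(complex)          # projector |i><i|
c_ops = [np.sqrt(2 * gamma) * site(i) for i in range(3)]        # Haken-Strobl dephasing
c_ops.append(np.sqrt(2 * gamma_trap) *
             np.outer(np.eye(4)[3], np.eye(4)[2]))              # sink <- site 3

L = liouvillian(H, c_ops)
rho0 = site(0)                                                   # exciton starts on site 1
rho_t = lambda t: (expm(L * t) @ rho0.reshape(-1)).reshape(4, 4)

def prop(rho, dt=10.0):
    """Reduced propagator over one projection interval; usable as the 'prop'
    argument of decoherence_matrix() sketched above."""
    return (expm(L * dt) @ rho.reshape(-1)).reshape(4, 4)

def efficiency(T):
    """Population transferred to the sink by time T, i.e. the integrated leak
    2*gamma_trap * integral of rho_33(t) from 0 to T."""
    return float(np.real(rho_t(T)[3, 3]))
```

Feeding prop and the three site projectors to decoherence_matrix() reproduces, for this toy parametrisation, the qualitative behaviour discussed in the following paragraphs: the coherence of site-basis histories decays on a timescale set by the dephasing rate, while histories in the exciton basis acquire coherence at intermediate dephasing.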
+ in order to show how the dh analysis can be implemented , in the following we are going to consider histories in the site and the energy bases , with projections at times .we first use the coherence function introduced above ( [ eq : cfunction ] ) to evaluate the global coherence of the exciton transport process . in order to test the behavior of for different values of dephasing , in fig [ fig : trimer renger c function ] we first plot as a function of the time interval between projections for two values of the dephasing rate : , corresponding to the full quantum regime ( fig.[fig : trimer renger c function ] ) for the site basis ( a ) ; corresponding to an intermediate value of dephasing ( fig . [ fig : trimer renger c function ] ) for the site basis ( b ) and the energy basis ( c ) . before entering the discussion of the various regimes , we note that as a function of the number of projections all curves display the expected behavior : the increase ( decrease ) of the number of projections corresponds to a temporal fine - graining ( coarse - graining ) of the evolution ; therefore , an increase ( decrease ) of should imply an increase ( decrease ) of the amount of coherence between histories . as shown in fig .( [ fig : trimer renger c function ] ) the function correctly reproduces the fine ( coarse ) graining feature : the qualitative behavior of as a function of is not affected by the choice of , while an increase of corresponds , at fixed , to an increase of .we will therefore use in the following the value that allows for a neat description of the phenomena and for a reasonable computational time . as for the behavior at fixed , we have that in the full quantum regime ( ) , the system obviously displays coherence in the site basis only since ={\langle e_{i}|}\rho{|e_{i}\rangle}\delta_{i , j}\ ] ] the decoherence matrix in the energy basis is diagonal and independent on and .this simply means that in the full quantum regime histories in the exciton basis are fully decohered , * * since the system is not able to create coherence among excitons . still in the full quantum regime , in the site basis , the coherence oscillates as the exciton , starting at site , goes back an forth along the trimer , and the evolution builds up coherence in this basis , see fig.[fig : trimer renger c function ] ( a ) . in this regime ,the trimer can be approximately seen as a dimer composed by the first two chromophores , and the exciton performs rabi oscillations with a period given by ; oscillates with half the period : for the exciton is migrated mostly on site and has a minumum which is different from zero since the exciton is partly delocalized on site , and the system therefore exhibits a non vanishing coherence . for intermediate values of , fig.[fig : trimer renger c function ] ( b ) , the coherence in site basis as measured by correctly drops down at .the dephasing has a strong and obvious effect on the coherence between pathways : coherence in this basis is a monotonically decreasing function of .this is well highlighted by the global coherence function , whose maximal values are reduced by a factor of with respect to those corresponding to full quantum regime . after a time the histories are fully decohered . indeed , due to the specific model of decoherence ( [ eq : hakenstrobl ] ) , which amounts to projective measurements on at each site with a rate , the system kills the coherence in the site basis , which in turn corresponds to the stable pointer basis for this model , i.e. 
, the basis in which the density matrix is forced to be diagonal by the specific decoherence model . on the other hand , and for the same reason , the dynamics starts to build up coherence in the exciton basis , see fig .[ fig : trimer renger c function](c ) .however , this coherence is later destroyed - on a time scale of approximately - since the stationary state of the model is the identity .this effect is even more evident if one compares the behavior of in the exciton basis for different values of , as shown in fig .[ fig : trimer renger c function](d ) : grows with and it lasts over longer time scales .this feature is coherent with the expectations : the equilibrium state for high is the identity . due to the projections implemented by the environment in the site basis ,the system is forced to create coherence in the exciton basis .when is very high a quantum zeno effect takes in , the dynamics is blocked , and the time required to reach the equilibrium , and to destroy coherences in all bases , consequently grows .this first analysis therefore shows that is indeed a good candidate for assessing the global coherence properties of quantum evolutions . for a fixed number of projections , can be interpreted as a _ measure of the global coherence exhibited by the dynamics over the time scale _ .we now analyze in detail the specific features of quantum transport for the trimer .the dynamics starts at site and evolves by delocalizing the exciton on the other chromophores . in order to study this process, we first use a measure of delocalization introduced in for the study of lhcii complex dynamics : that is simply the shannon entropy of , the populations of the three chromophores .this measure allows one to follow how much the exciton gets delocalized over the trimer with time and in different dephasing situations : , i.e. , is zero when the exciton is localized on a chromophore and it takes its maximal value when the population of the three sites are equal . in fig .: pop site 3 and delocalization ] we plot both and the population of site for different values of . due to the presence of interference , in the mainly quantum regime ( ) , the exiton first delocalizes mainly over the dimer and partly on the third site : the first maximum corresponds to when the system builds up a ( close to uniform ) coherent superposition between sites and , while a non negligible part of the exciton is found in site ; indeed , the last value corresponding to i.e. , to a uniform superposition over the sites and only .as the dynamics of the sytems extends to later times we see that and have an oscillatory behavior , whose main period is , and which approximately corresponds to rabi oscillations between site and , although the initial state fully localized in site can not be rebuilt due to the presence of site . as for the transport , we see that in this regime the system can not take advantage of the initial fast and high delocalization : the exciton bounces back and forth over the trimer . in the intermediate regime , due ,as we will later see , to the selective suppression of interference processes , the initial speed up in delocalization is sustained by the dynamical evolution , and the transfer rate to site is correspondingly increased . 
for very high values of decoherence ( )the role of initial interference is suppressed and the initial speed - up disappears : the environment measures the system in site basis at high rates and the delocalization process is highly reduced .the optimal delocalization occurs in correspondence of and it can be interpolated with a double exponential function with .the first time scale describes the initial fast quantum delocalization process described above , while the second time scale the slower subsequent delocalization and the reaching of the equilibrium situation , .we now pass to systematically analyze the behavior of the coherence of the evolution with respect to the strength of the interaction with the environment and its relevance for the energy transport process . as a first step we plot both and for different values of , fig .( [ fig . :h and c various gamma ] ) .the plots show that the coherence function exhibits the required behavior : for small , oscillates with period , following the rabi oscillations of the dimer .the minima occur at , showing that the exciton is `` partially '' localized on site or , and partially delocalized on site .as grows , the system becomes unable to create coherence on large time scales ; the decay of is mirrored by a the reduction of the amplitude in the oscillations of . and coherence function for ,scaledwidth=40.0% ]we now focus on the relevant time scales for the initial fast delocalization process highlighted by our previous analysis , which are of the order of tens to hundreds of femtoseconds .we therefore introduce the following _ average measure of global coherence of the evolution _ is the average of the coherence exhibited by the dynamics of the system at the time scales . in fig .average c vs gamma ] we show for the trimer ( [ eq : hamtrimer renger ] ) in the site basis for different values of .* * we first focus on the behavior of for values of dephasing in the range . in this range , for small timescales to the average global coherence is approximately constant and equals the value attained in the full quantum regime i.e. , . for larger time scales ( ) rapidly decreases with .this analysis shows that the behavior of matches the expectations : the higher the smaller the time scales over which decoherence takes place , the lower the global coherence of the dynamics .along with the functional is therefore in general a good candidate for the evaluation of the global coherence of open quantum systems evolution .as for the transport dynamics , we focus on the timescale identified with the analysis of for optimal dephasing ; for and we see that the system indeed retains most of the average coherence of the purely quantum regime up to the optimal values of decoherence ( in the figure ) , losing it afterwards ; this is a clear indication that this phenomenon is at the basis of the the fast initial delocalization process . over longer time scales ,the relevance of coherence is highly suppressed**. * * for the trimer ( [ eq : hamtrimer renger ] ) for different values of ,scaledwidth=40.0% ] we now deepen our analysis about the relevance of the coherence of the evolution for the energy tranfer efficiency . to this aimwe focus on the basic feature that distingushes the classical and the quantum regime : interference .in particular we focus on the sub - block of the decoherence matrix pertaining to the third chromophore , which describes the set of histories in site basis ending at site . 
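The sub-block just mentioned is directly available from the decoherence matrix computed earlier: collecting the histories whose last projection is the exit site, its diagonal gives the weights and the sum of its off-diagonal elements gives the (positive or negative) interference entering the decomposition of the site population discussed next. A small helper, reusing the history ordering of decoherence_matrix() above; note that the identification of the block sum with the site population requires the projectors used at each time to form a complete set.

```python
import numpy as np
from itertools import product

def ending_site_block(D, n_projs, n_proj, j):
    """Sub-block of the decoherence matrix restricted to histories whose final
    projection is outcome j (ordering as produced by decoherence_matrix above).
    Returns (sum of weights, interference term), so that, for a complete set of
    projectors, P_j(t_n) = weights + interference."""
    histories = list(product(range(n_projs), repeat=n_proj))
    idx = [a for a, h in enumerate(histories) if h[-1] == j]
    block = D[np.ix_(idx, idx)]
    weights = float(np.real(np.trace(block)))
    interference = float(np.real(block.sum() - np.trace(block)))
    return weights, interference
```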
due to interference the probability of occupation of the site at time can be written in terms of the histories ending at site , see ( [ eq : interference and probability at ending site j ] ) . in fig . [ fig . : interference with gamma trimer ] we show for different values of dephasing . one has different regimes : for , the set of histories in the site basis is fully decohered ; , the histories do not interfere with each other and , i.e. , the probability is simply the sum of diagonal elements of . in the mainly quantum regime , : after the initial positive peak the histories interfere with each other , globally the interference is mostly negative and therefore . for intermediate values of decoherence interference has a positive peak and then reduces to zero . while the initial fingersnap of positive interference that takes place in the first is common to all curves corresponding to small and intermediate values of , the _ main effect of the bath is displayed after this initial period of time : the decoherence gradually suppresses interference , both the positive and the negative one _ ; _ however , for intermediate values of the effect is stronger for the negative part of the interference patterns _ . the environment thus implements what can be called a _ quantum recoil avoiding effect : _ it prevents the part of the exciton that , thanks to constructive interference , has delocalized on site from flowing back to the other sites . [ figure : ( a ) average positive ( green ) , negative ( blue ) , total ( red ) interference of histories ending in site as a function of ; ( b ) total average interference for site ( red ) , ( blue ) , ( green ) . ] in order to evaluate a possible advantage provided by the initial speed - up in the delocalization process and by the interference phenomena shown above , one has to take into account another relevant time scale of the transport process : the trapping time . indeed , if the system is to take advantage of the fast delocalization due to the coherent behavior , the exit of the exciton should take place on time scales of the order of the delocalization process . the theoretical and experimental evidence shows that this is the case : the trapping time for the fmo complex is estimated in the literature to be of the order of , i.e. , the exit of the exciton starts soon after the fast delocalization due to quantum coherence has taken place . the role of the interference between paths , in particular those leading to site , can therefore be appreciated by numerically evaluating , i.e. , the average over the trapping time scale of the total ( ) , negative ( ) and positive ( ) interference between the histories ending in site , with . in particular , in fig . [ fig . : average interference site 3](a ) the different kinds of interference are plotted for histories terminating at site : on average , the negative interference highly reduces the total interference for small values of the decoherence strength ; when , vanishes , the average total interference equals the positive one , and it is maximal for values of comparable to those that maximize ( ) . in fig .
[ fig . : average interference site 3](b ) , we compare the behavior of for all sites . the results again suggest that decoherence acts on the interference provided by the quantum engine in order to favor the flow of the exciton towards the exit chromophore : the average positive interference between histories ending at sites and grows in modulus with and attains a maximum for intermediate values of decoherence ; while the average negative interference between histories ending at site 1 decreases and attains a minimum for intermediate values of . the combined effect of decoherence and interference thus helps depopulate site 1 and populate sites 2 and 3 . we can now tackle one of the most relevant aspects of our discussion : the net effect of the above described phenomena on the overall efficiency of the transport . the latter can be fully appreciated by evaluating the efficiency of the process ( [ eq : efficiency ] ) and by recognizing that , in the decoherent histories language , it can be expressed as : where and . this split allows one to appreciate the role of interference for the efficiency . in fig . ( [ fig . : efficiency vs interference trimer ] ) is plotted for different values of dephasing . in agreement with what was discussed above , we have three regimes : for very small values of the overall efficiency is poor ; this is due to the presence of high negative interference that on average prevents the exciton from migrating to the exit site . for large values of interference processes are completely washed out and the system can not take advantage of the fast quantum delocalization . for intermediate ( optimal ) values of the negative interference has been washed out : is positive , it acts on short time scales , and it provides on average an enhancement of the global efficiency . [ figure : transport efficiency , integrated weight and integrated interference for pathways ending at site for different values of and . ] these results , within the limits of the simple model of decoherence taken into account , undoubtedly show for the first time that the so called enaqt phenomenon can be well and properly understood , both qualitatively and quantitatively , within the decoherent histories approach , i.e. , in terms of the very basic concepts of coherence and interference between histories . the often recalled `` convergence '' of time scales or `` goldilocks '' effect ( ) in biological quantum transport systems seems therefore to be well rooted in the processes discussed above : if decoherence is too small the system shows both positive and negative interference ( see fig . [ fig . : pop site 3 and delocalization ] ) , the delocalization has an oscillatory behavior , and the exciton bounces back and forth along the network , thus preventing its efficient extraction . if instead decoherence is very high , the complete washing out of interference and coherence implies that the delocalization process is very slow , no matter how fast the trapping mechanism tries to suck the exciton out of the system .
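to make the decomposition used above concrete , the following python sketch ( our own construction , with a toy matrix , not code from the paper ) splits the population built from a sub - block of the decoherence matrix into its diagonal part and its interference part , the quantity whose sign is discussed in the text .

```python
import numpy as np

def population_split(D_block):
    """Split P(t) = sum_{a,b} D_ab into diagonal and interference parts.

    D_block : sub-block of the decoherence matrix collecting the histories
              that end at the exit site (Hermitian, positive semi-definite).
    Returns (diagonal sum, interference term), with
    P = diagonal + interference."""
    D = np.asarray(D_block)
    total = float(np.real(np.sum(D)))
    diag = float(np.real(np.trace(D)))
    return diag, total - diag

# toy 2x2 block: two histories ending at the exit site
D = np.array([[0.30, 0.05 + 0.02j],
              [0.05 - 0.02j, 0.10]])
diag, interf = population_split(D)
print(diag, interf, diag + interf)   # 0.4, 0.1, 0.5
```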
in order to take advantage of the effects of quantum coherent dynamics , two requirements must therefore be met : the bath must act on the typical time scales of the quantum evolution in order to implement the quantum recoil avoiding process ; and the extraction of the exciton from the complex , characterized by , must then start soon after the initial fast delocalization has taken place . should the extraction take place on longer time scales , the benefits of the fast initial delocalization would be spoiled : waiting long enough , the system would eventually reach , together with equilibrium , a decent delocalization even for moderately high values of , but in this case the transfer would obviously be much slower . the above arguments can be easily applied to the whole fmo complex . fig . [ fig : fmo c various gammas and c vs h ] and [ fig : fmo average i on 200 fs 3 site and efficiency ] show the application of the decoherent histories method to excitonic transport in fmo . the main features of the behavior of and are maintained , although obvious differences can be found since the dynamics is now determined by the interplay of different eigenperiods and the interference paths are more complex . in particular , in fig . [ fig : fmo c various gammas and c vs h](b ) , one can observe a revival of positive interference for small values of , that does enhance the efficiency for , fig . [ fig : fmo average i on 200 fs 3 site and efficiency](a ) ; but this is not sufficient to compensate for the initial and subsequent negative interference , thus preventing optimal values of from being reached . in general , compared to the trimer and as suggested by fig . [ fig : fmo average i on 200 fs 3 site and efficiency](a ) , the maximum average positive coherence on short time scales is attained for smaller values of . the overall picture is not significantly affected if one decides to start the dynamics from site instead of site , as is often reported in the literature . the decoherent histories approach provides a general theory to study the distinctive feature exhibited by quantum systems : coherence . however , despite its generality and foundational character , in order to measure the effects of coherence and decoherence the dh approach needs to be complemented with a quantitative way to condense the information contained in the basic object of the theory , i.e. , the decoherence matrix .
in this paper we introduce a set of tools that allow one to assess the ( global ) coherence properties of quantum ( markovian ) evolution and that can be used to relate the coherence content of a general quantum dynamical process to the relevant figures of merit of the given problem . we first define the _ coherence functional _ , which can be interpreted as a measure of the global coherence exhibited by the dynamics in the basis over the time scale . while this measure is completely general , one can further introduce other relevant tools tailored to the specific system and type of system - environment interaction at hand . we thus focus on a simple yet paradigmatic model of environmentally assisted energy transfer where coherence effects have been shown to play a significant role in determining the efficiency of the process : a trimeric subunit of the fenna - matthews - olson photosynthetic complex . based on and we define : a measure able to characterize the average coherence exhibited by the dynamics of the system over the time scales for a fixed value of the dephasing ; and a measure of the average interference occurring between the histories ending at a given `` site '' . + within the specific model , we first thoroughly assess the consistency of the behavior of in the various regimes . we then show how the introduced tools allow one to study the intricate connections between the efficiency of the transport process and the coherence properties of the dynamics . in particular we show that the delocalization of the exciton over the chromophoric subunit is strongly affected by the amount of ( average ) coherence allowed by the interaction with the bath in the first tens to hundreds of femtoseconds . if the system - bath interaction is too strong , coherence is suppressed alongside the interference between different histories , in particular those ending at the site where the excitation leaves the complex . if the interaction is too weak the system exhibits high values of coherence even on long time scales , but it also exhibits negative interference between pathways ending at the exit site , a manifestation of the fact that the exciton bounces back and forth over the network , thus preventing its efficient extraction . in the intermediate regime , i.e. , when the different time scales of the system ( quantum oscillations , decoherence and trapping rate ) converge , the system shows high values of coherence on those time scales . the action of the bath has a _ quantum recoil avoiding effect _ on the dynamics of the excitation : the _ benefits of the fast initial quantum delocalization of the exciton over the network are preserved and sustained in time by the dynamics _ ; in terms of pathways leading to the exit site , the action is to selectively kill the negative interference between pathways , while retaining the initial positive one .
__ these effects can be explicitly connected to the overall efficiency of the environment - assisted quantum transport : the gain in efficiency for intermediate ( optimal ) values of decoherence can thus be traced back to the basic concepts of coherence and interference between pathways as expressed in the decoherent histories language .+ while the specific decoherence model used ( haken - strobl ) is an oversimplified description of the actual dynamics taking place in real systems , we believe that our analysis allows to spot out the essential features that may determine the high efficiency of the transport even in more complex system - environment scenarios .+ in conclusion , the tools introduced in this paper allow to thoroughly assess the coherence properties of quantum evolutions and can be applied to a large variety of quantum systems , the only limits being the restriction to markovian dynamics and the computational efforts required for high dimensional systems .however , the extension to non - markovian realms is indeed possible , and the use of parallel computing may allow the treatement of reasonably large systems .giorda and m. allegra would like to thank dr .giorgio `` giorgione '' villosio for his friendship , his support , his always reinvigorating optimism , and his warm hospitality at the institute for women and religion - turin , where this paper was completed ( _ cogitato , mus pusillus quam sit sapiens bestia , aetatem qui non cubili uni umquam committit suam , quin , si unum obsideatur , aliud iam perfugium elegerit _ ) .+ + p.giorda would like to thank prof .a. montorsi , prof .paris and prof .m. genovese for their kind help .+ + s. lloyd would like to thank m. gell - mann for helpful discussions .m. gell - mann and j. b. hartle , in _ complexity , entropy , and the physics of information _( addison - wesley , reading , massachusetts , 1990 ) ; in proc . of the 25th international conference on high energy physics , singapore ( world scientific , singapore , 1990 ) .h. park , n. heldman , p. rebentrost , l. abbondanza , a. iagatti , a. alessi , b. patrizi , m. salvalaggio , l. bussotti , m. mohseni , f. caruso , h. c. johnsen , r. fusco , p. foggi , p. f. scudo , s. lloyd & a. m. belcher , nature materials ( 2015 ) .h. haken , g. strobl , in _ the triplet state _ , proceedings or the international symposium , am .beirut , lebanon ( 1967 ) , a.b .zahlan , ed . , cambridge university press , cambridge ( 1967 ) ; h. haken , p. reineker , z. phys . * 249 * , 253 ( 1972 ) . | assessing the role of interference in natural and artificial quantum dyanamical processes is a crucial task in quantum information theory . to this aim , an appopriate formalism is provided by the decoherent histories framework . while this approach has been deeply explored from different theoretical perspectives , it still lacks of a comprehensive set of tools able to concisely quantify the amount of coherence developed by a given dynamics . in this paper we introduce and test different measures of the ( average ) coherence present in dissipative ( markovian ) quantum evolutions , at various time scales and for different levels of environmentally induced decoherence . in order to show the effectiveness of the introduced tools , we apply them to a paradigmatic quantum process where the role of coherence is being hotly debated : exciton transport in photosynthetic complexes . 
to spot out the essential features that may determine the performance of the transport we focus on a relevant trimeric subunit of the fmo complex and we use a simplified ( haken - strobl ) model for the system - bath interaction . our analysis illustrates how the high efficiency of environmentally assisted transport can be traced back to a _ quantum recoil avoiding effect _ on the exciton dynamics , that preserves and sustains the benefits of the initial fast quantum delocalization of the exciton over the network . indeed , for intermediate levels of decoherence , the bath is seen to selectively kill the negative interference between different exciton pathways , while retaining the initial positive one . the concepts and tools here developed show how the decoherent histories approach can be used to quantify the relation between coherence and efficiency in quantum dynamical processes . |
the present impact crater size frequency distribution , n is the result , on one hand , of a rate of crater formation , , and , on the other hand , the elimination of craters , as time goes by , due to effects like erosion and obliteration .therefore if we want to understand the crater formation history we will need to know how these forming and erasing factors combine to create .thus , in this work the above problem is analyzed , and in section 2 we find that n can be expressed in terms of and the fractional reduction of craters per unit of time , c. then , a simple model is discussed that describe the crater size distribution in mars data , collected by barlow , where it is assumed that is independent of time .the above model is realistic , since according to several investigations has remained nearly constant for the last 3 to 3.5 billion years .the simplest interpretation of this model implies that and are given as the following inverse power of the diameter , , of the crater : , . in section 3 the modelis applied to craters data on earths , and it is concluded that also in our planet .this result is interpreted to mean that on mars and earth we have , or equivalently the crater mean life , with .investigations of geometric properties of martian impact craters reflect values of the average height consistent with the above conclusion .in what follows we will present theoretical and analytical curves which will reproduce the essential features of the martian crater - size frequency distribution empirical curves ( figure 1 ) , based on barlow s ( 1987 ) of about 42,000 impact craters .the models will be derived using reasonable simple assumptions , that will allow us to relate the present crater population with the crater population at each particular epoch . to this end , let represents the number of craters of diameter formed during the epoch , where we are assuming that and are sufficiently large that is justified treating as a statistical continuous function , but , on the other hand , they should be sufficiently small ( , ) to be able to treat them as differentials in the following discussion . this initial population will change as time goes on due to climatic and geological erosion , and the obliteration of old craters by the formation of new ones .then , we expect that the change in during a time interval will be proportional to itself and : where c is the factor that takes into account the depletion of the craters , and should be a function of the diameter , since the smaller a crater is the most likely it will disappear . furthermore , c could also depend on time however , we will ignore such changes here , which we believe is a good starting approximation to the general problem .it is easy to integrate equation ( 1 ) in time to obtain : ,\ ] ] equation ( 2 ) gives the number of craters , as a function of , observed at time t , that were produced at the time interval .therefore the total contribution to the present ( t=0 ) population due to all the epochs is : ,\ ] ] or in the continuous limit , \,d{\tau},\ ] ] where is the rate of crater formation of diameter at the epoch , and is the total time of crater formation . 
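as a quick numerical cross - check of this bookkeeping ( our own illustration , not from the paper , and with made - up parameter values ) , the integral above can be evaluated for a production rate and a removal factor that are both taken constant in time , as in the model considered next , where it also has a simple closed form .

```python
import numpy as np

def present_crater_density(phi, c, tau_f, n_steps=2000):
    """N(D) = integral_0^tau_f phi * exp(-c * tau) d tau, with phi and c
    held constant in time.  The closed form (phi / c) * (1 - exp(-c * tau_f))
    is returned alongside the numerical quadrature as a cross-check."""
    tau = np.linspace(0.0, tau_f, n_steps)
    numeric = np.trapz(phi * np.exp(-c * tau), tau)
    analytic = phi / c * (1.0 - np.exp(-c * tau_f))
    return numeric, analytic

# made-up numbers: production rate per Gy and removal rate per Gy
print(present_crater_density(phi=100.0, c=0.8, tau_f=3.5))
```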
in the next sectionwe will determine the function and for a model where we assumed that the rate of crater formation , , is independent of .investigations of the time dependance of cratering rate of meteorites have concluded that the impact rate went through a heavy bombardment era that decayed exponentially until about 3 to 3.5 gy , and since then has remained nearly constant until the present .therefore , for surfaces that are younger than 3 to 3.5 gy we can reasonably assume that is independent of , and hence from equation ( 5 ) immediately obtain .\ ] ] we then find that the simplest model that essentially reproduces the data in figure 1 , for km , is given by equations ( 8)and ( 9 ) : we see that the theoretical curve ( 7 ) , shown in figure(2 ) , differs significantly from the observed curves for less than about 6 km .however , according to barlow the empirical data is undercounting the actual crater population for less than km , and therefore no meaningful comparison is then possible between models and data for this region of small craters . equation ( 2 ) implies that the fraction of craters of diameter formed at each epoch that still survive at the present time is given by : \,\approx\,{\rm exp}\left[{-\left(\frac{57}{d}\right)}^{2.5}\,\frac{\tau}{\tau_f } \right]\ ] ] and thus we have that the mean life for craters of diameter , , is hence , craters with 57 km have , while the region km is approximately described by the limit of equation ( 7 ) when : which corresponds to a straight line of slope -4.3 in a log n vs log d plot , and that would be the form of equation ( 7 ) in the absence of erosion and obliterations ( ) .hence , we have that the bending of the empirical curve ( figure 1 ) for is explained in this model as the result of the elimination of smaller craters as they get older .we also see from equations(13 ) that when the effect of can be ignored we have , and therefore the actual crater density is proportional to the age of the underlying surface . on the other hand , when for smaller craters \,<<\,1 $ ] we will have from ( 7 ) that and in this limit the crater density is proportional to the survival mean life , , of the craters of size .thus , when saturation occurs and hence n is independent of , we have , instead , that is proportional to .this feature is called by hartmann ``crater retention age '' , and in mars this effect shows , according to this model , in craters smaller than about 57 km .the model given by equations ( 7 ) , ( 8) , and ( 9 ) assumed a simple polynomial form for and , however , alternative models can be also considered .for instance , by assuming that ,\ ] ] we will reproduce the mars crater data , exactly as in model given by equations ( 7),(8),(9 ) but now with , and the change in slope in figure ( 1 ) around km will now be interpreted as intrinsic behavior of rather than due to the erosion and obliteration of smaller craters .how can we then discriminate between these two alternative views ? .we see that in the model given by equation ( 7 ) the fraction of craters of a given diameter , , produced at a time , decreases with time according to equation ( 10 ) as ,\ ] ] while in the model of equation ( 15 ) this fraction is independent of time . therefore we can put to test the validity of equation ( 16 ) by studying crater size frequency distributions as a function of time .this is possible to do in our planet , and in this section we will investigate the consistency of the hypothesis ( 16 ) with the earth craters data . 
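the survival fraction and the mean life it implies can be evaluated directly ; the short python sketch below ( ours , not the authors' code ) uses the 57 km scale and the 2.5 exponent quoted above , and assumes for illustration that the roughly constant - rate era lasts 3.5 gy .

```python
import numpy as np

def survival_fraction(D, tau, tau_f, d0=57.0, p=2.5):
    """Fraction of craters of diameter D (km) formed a time tau ago that
    survive today: exp[-(d0 / D)**p * tau / tau_f], the form quoted in the
    text for the martian fit (d0 ~ 57 km, p = 2.5)."""
    return np.exp(-((d0 / D) ** p) * tau / tau_f)

def mean_life(D, tau_f, d0=57.0, p=2.5):
    """e-folding survival time implied by the expression above:
    tau_f * (D / d0)**p, so a 57 km crater has a mean life of tau_f."""
    return tau_f * (D / d0) ** p

tau_f = 3.5  # Gy, assumed duration of the constant-rate era
for D in (10.0, 57.0, 200.0):
    print(D, mean_life(D, tau_f), survival_fraction(D, tau=1.0, tau_f=tau_f))
```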
thus consider the average diameter of craters observed today that were formed during a given time , which is given , according to equation ( 16 ) , by assuming that and behave in the form we can rewrite equation ( 17 ) in the form ( appendix ) where and is the gamma function. equation ( 20 ) can be rewritten as which represents a linear relation between and with slope . in figure ( 3 ) we plot vs from data of crater size vs on earth , and the straight line best fitting gives , which is the value determined for model ( 7 ) for mars .this result is interpreted as follows .if we assume that , as expected , is a function of the volume of the crater , , that decreases with decreasing , then it is reasonable to expand it in terms of powers of , and thus we will have furthermore , for sufficiently small volumes we would have , as a good approximation to , that where we are writing with as the average height of the crater of size .the comparison of equation ( 25 ) with equations ( 19 ) , with , imply that which is a prediction that can be investigated , and we have found that indeed equation ( 27 ) is consistent with results from studies of impact crater geometric properties on the surface of mars , by j.b .garrin .therefore it appears that the age distribution of craters on earth favor the simple model considered for mars , where there is an erosion and obliteration factor with the approximate form it is also suggested here that the above behavior for follows from a relation of the form with further investigations and observations of the crater data on the terrestrial planets , the moon and the asteroids are necessary for additional tests of the validity of the model ( 7 ) and its interpretation .lets define barlow , n.g .1988 , , 75 , 285 hartmann w.k .1966b , , 5 , 406 neukum g. et al .2001 , chronology and evolution of mars 55 bern : international space science intitute neukum g. 1983 , ludwig - maximillians - university of munich 186 .ryder g. 1990 , eos , 71 , 313 hartmann w.k .2002 , lunar and planetary science xxxiii 1876 garrin j.b .2002 , lunar and planetary science xxxiii 1255 | we present a theoretical and analytical curve with reproduce essential features of the frequency distributions vs. diameter , of the 42,000 crater contained in the barlow mars catalog . the model is derived using reasonable simple assumptions that allow us to relate the present craters population with the craters population at each particular epoch . the model takes into consideration the reduction of the number of craters as a function of time caused by their erosion and obliteration , and this provides a simple and natural explanation for the presence of different slopes in the empirical log - log plot of number of craters ( n ) vs. diameter ( d ) . |
quantum finite automata ( qfa ) , as theoretical models for quantum computers with finite memory , have been explored by many researchers .so far , a variety of models of qfa have been introduced and explored to various degrees ( one can refer to a review article and references therein ) . among these qfa , there is a class of qfa that differ from others by consisting of two interactive components : a quantum component and a classical one .we call them _ semi - quantum automata _ in this paper. examples of semi - quantum automata are _ one - way qfa with control language _ ( cl-1qfa ) , _ one - way qfa together with classical states _( 1qfac ) , and _ one - way finite automata with quantum and classical states _( 1qcfa ) . here`` one - way '' means that the automaton s tape head is required to move right on scanning each tape cell .these semi - quantum automata have been proved to not only recognize all regular languages , but also show superiority over dfa with respect to descriptional power .for example , 1qcfa , cl-1qfa and 1qfac were all shown to be much smaller than dfa in accepting some languages ( resolving some promise problems ) .in addition , a lower bound on the size of 1qfac was given in , which stated that 1qfac can be at most exponentially more concise than dfa , and the bound was shown to be tight by giving some languages witnessing this exponential gap .size lower bounds were also reported for cl-1qfa in and for 1qcfa in ( no detailed proof was given in for the bound of 1qcfa ) , but they were not proved to be tight . by the way , we mention that the result obtained in that 1qfca recognize only regular languages follows directly from , although a relatively complex procedure was used in to deduce this result . specially , one can see that complex technical treatments were used in to derive the bound for cl-1qfa and one may find that some key steps in were confused such that the proof there may have some flaws , which will be explained more clearly in section 4 .it is also worth mentioning that the method used in is tailored for cl-1qfa and is not easy to adopt to other models .therefore , it is natural to ask : is there a uniform and simple method giving lower bounds on the size of the above three semi - quantum automata ? this is possible , as 1qcfa , cl-1qfa and 1qfac have the similar structure as shown in , where they were described in a uniform way : a semi - quantum automaton can be seen as a two - component communication systems comprising a quantum component and a classical one , and they differ from each other mainly in the specific communication pattern : classical - quantum , or quantum - classical , or two - way .it was also proved in that the three models can be simulated by the model of qfa with mixed states and trace - preserving quantum operations(referred as mo-1gqfa ) . in this paper , by using the above result, we present a uniform method that gives a lower bound on the size of 1qcfa , cl-1qfa and 1qfac , and this lower bound shows that they can be at most exponentially more concise than dfa .specifically , we first obtain a lower bound on the size of mo-1gqfa and then apply it to the three hybrid models by using the relationship between them and mo-1gqfa . 
compared with a recent work , our method is much more concise and universal , and it can be applied to the three existing main models of semi - quantum automata .in addition , our method may fix a potential mistake in that will be indicated later on .throughout this paper , for matrix ( operator ) , and denote the conjugate and conjugate - transpose of , respectively , and and denote the trace and rank of , respectively . according to von neumann s formalism of quantum mechanics , a quantum system is associated with a hilbert space which is called the state space of the system . in this paper , we only consider finite dimensional spaces .a ( mixed ) state of a quantum system is represented by a density operator on its state space . here a density operator on is a positive semi - definite linear operator such that .when , that is , for some , then is called a pure state .let and be the sets of linear operators and density operators on , respectively .a trace - preserving quantum operation on state space is a linear map from to itself that has an _ operator - sum representation _ as with the completeness condition , where are called operation elements of . a general measurement is described by a collection of measurement operators , where the index refers to the potential measurement outcome , satisfying the condition if this measurement is performed on a state , then the classical outcome is obtained with the probability , and the post - measurement state is for the case that is a pure state , that is , , we have and the state `` collapses '' into the state a special case of general measurements is the projective measurement where s are orthogonal projectors . has the singular value decomposition as follows : where , are called singular values of , and are two orthonormal sets . the trace norm of is defined as . by the singular value decomposition in ( [ svd ] ) , the trace norm can be characterized by singular values as note that if is positive semi - definite , then . for , the trace distance between them is the trace distance between two probability distributions and is recall results about the trace distance from as follows .let and be two density operators .then we have 1 . for any trace - preserving quantum operation .2 . where , and the maximization is over all povms .[ lm - distance ] a linear mapping which maps a matrix to a -dimensional column vector is defined as follows : in other words , is the vector obtained by taking the rows of , transposing them to form column vectors , and stacking those column vectors on top of one another to form a single vector .for example , we have if we let be an -dimensional column vector with the entry being 1 and else 0 s , then form a basis of .therefore , the mapping can also be defined as follows : for any , it is easy to verify in this paper , the norm of is defined by . 
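as a small numerical illustration ( ours , not from the paper ) of the vectorization just defined and of the two norms it connects , one can compute both quantities for a concrete matrix .

```python
import numpy as np

def vec(A):
    """Row-major vectorization: take the rows of A, transpose them into
    column vectors and stack them, as described in the text."""
    return np.asarray(A).reshape(-1, 1)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = vec(A)

trace_norm = np.sum(np.linalg.svd(A, compute_uv=False))  # sum of singular values
vec_norm = np.linalg.norm(v)                             # Euclidean norm of vec(A)
# the lemma below bounds one of these norms by a multiple of the other
print(trace_norm, vec_norm)
```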
for , we observe the following relation between the two norms and .let and .then we have [lm - norm ] _ proof ._ suppose has the singular value decomposition .then we have thus we have on the other hand , by the cauchy - schwarz inequality we have definitions of automata ----------------------- in the literature , there exist some hybrid models of qfa that differ from other qfa models by consisting of two interactive components : a quantum component and a classical one .we call them _ semi - quantum automata _ in this paper .as shown in , a semi - quantum automaton can be depicted in fig .[ model ] , where an automaton comprises a quantum component , a classical component , a classical communication channel , and a classical tape head ( that is , the tape head is regulated by the classical component ) . on scanning an input symbol , the quantum and classical components interact to evolve into new states , during which communication may occur between them . in this paper, we focus on automata with a one - way tape head , that is , after scanning an input symbol the model moves its tape head one cell right . as shown in ,there are three models of semi - quantum automata fitting into fig .[ model ] , with the essential difference being the specific communication pattern : * in cl-1qfa , only quantum - classical communication is allowed , that is , the quantum component sends its measurement result to the classical component , but no reverse communication is permitted . * in 1qfac , only classical - quantum communication is allowed , that is , the classical component sends its current state to the quantum component . * in 1qcfa , two - way communication is allowed : ( 1 ) first , the classical component sends its current state to the quantum component ; ( 2 ) second , the quantum component sends its measurement result to the classical component . in the following , we recall the detailed definitions of the existing models of semi - quantum automata . one of such models is called _ one - way qfa with control language _( cl-1qfa) , defined as follows . a cl-1qfa is a 7-tuple where is a finite set of quantum basis states , is a finite alphabet , is a finite set of symbols ( measurement outcomes ) , is the initial quantum state , is a unitary operator for each , is a projective measurement given by a collection of projectors , and is a regular language ( called a control language).[df : cl - qfa ] in cl-1qfa , on scanning a symbol , a unitary operator followed by the projective measurement is performed on its current state . 
thus , given an input string , the computation produces a sequence of measurement results with a certain probability that is given by where we define the ordered product .the input is said to be _ accepted _ if belongs to a fixed regular language .thus the probability of accepting is recently , qiu et al proposed a new model named _1qfa together with classical states _ ( 1qfac ) , defined as follows .a 1qfac is defined by a 8-tuple where and are finite sets of quantum basis states and classical states , respectively , is a finite input alphabet , and are initial quantum and classical states , respectively , is a unitary operator on for each and , is a classical transition function , and for each , is a projective measurement given by projectors where the two outcomes and denote acceptance and rejection , respectively .the machine starts with the initial states and .on scanning an input symbol , is first applied to the current quantum state , where is the current classical state ; afterwards , the classical state changes to . finally ,when the whole input string is finished , a measurement determined by the last classical state is performed on the last quantum state , and the input is accepted if the outcome is observed .therefore , the probability of 1qfac accepting is given by where for .ambainis and watrous proposed the model of _ two - way qfa with quantum and classical states _ ( 2qcfa ) .as proved in , 2qcfa can recognize non - regular language in polynomial time and the palindrome language in exponential time , which shows the superiority of 2qcfa over their classical counterparts . in the following we recall 1qcfa , a one - way variant of 2qcfa .note that in this paper the notion of 1qcfa is slightly more general than the one in .the reason for why we adopt the current definition is that it has a more succinct form which simplifies some notations ( for example , in our version we need give only the set of general measurements , instead of two sets : unitary operators and projective measurements ) .it is , however , worthwhile to emphasize that all results obtained in this paper hold surely for the model in .a 1qcfa is specified by a 9-tuple where and are finite sets of quantum and classical states , respectively , is a finite input alphabet , is a finite set of symbols ( measurement outcomes ) , and are initial quantum and classical states , respectively , for each and is a general measurement on with outcome set , specifies the classical state transition , and denotes a set of accepting states.[df-1qcfa ] on scanning a symbol , at first the general measurement , determined by the current classical state and the scanned symbol , is performed on the current quantum state , producing some outcome ; then the classical state changes to by reading and . after scanning all input symbols , checks whether its classical state is in .if yes , the input is accepted ; otherwise , rejected .therefore , the probability of 1qcfa accepting is given by where : * is defined by * are measurement operators of .* for . in fig.[model ] , let be the set of basis states of the quantum component and be the set of states of the classical component .let and .then we say that the semi - quantum automaton has quantum basisi states and classical states . 
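before giving the corresponding expression for 1qcfa , we note that the acceptance probability of the simplest of the three models , cl-1qfa , can be computed directly by enumerating measurement records ; the following python sketch is our own illustration with a toy one - qubit instance , not code from the paper , and the enumeration is exponential in the input length ( fine for a toy example only ) .

```python
import numpy as np
from itertools import product

def cl1qfa_accept_prob(word, psi0, U, P, control):
    """Acceptance probability of a CL-1QFA (our own sketch).

    U[s]      : unitary applied on reading symbol s
    P[c]      : projector associated with measurement outcome c
    control(y): True iff the outcome record y belongs to the control language
    The probability of a record y_1...y_n is the squared norm of
    P[y_n] U[s_n] ... P[y_1] U[s_1] |psi0>, and the input is accepted with
    the total probability of records inside the control language."""
    outcomes = list(P.keys())
    total = 0.0
    for record in product(outcomes, repeat=len(word)):
        v = psi0.copy()
        for s, y in zip(word, record):
            v = P[y] @ (U[s] @ v)
        if control(record):
            total += float(np.vdot(v, v).real)
    return total

# toy instance: one qubit, outcomes 'a'/'r', control language = records ending in 'a'
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
U = {'0': H}
P = {'a': np.diag([1.0, 0.0]), 'r': np.diag([0.0, 1.0])}
psi0 = np.array([1.0, 0.0])
print(cl1qfa_accept_prob("00", psi0, U, P, lambda y: y[-1] == 'a'))  # 0.5
```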
for cl-1qfa, denotes the set of states of the minimal dfa accepting the control language .recall the model of mo-1gqfa which has mixed states and trace - preserving quantum operation as follows .an mo-1gqfa is a five - tuple , where is a finite - dimensional hilbert space , is a finite input alphabet , , the initial state of , is a density operator on , corresponding to is a trace - preserving quantum operation acting on , is a projector on the subspace called accepting subspace of .denote , then form a projective measurement on .let .then we call is an -dimensional mo-1gqfa . on the input word , the above mo-1gqfa proceeds as follows : the quantum operations are performed on in succession , and then the projective measurement is performed on the final state , obtaining the accepting result with a certain probability .thus , mo-1gqfa defined above induces a function ] , if holds for all and holds for all .the cut - point is said to be _ isolated _ whenever there exists ] , where stands for .denote then we have on the other hand , for any , we have , the first equality holds because is positive semi - definite , and the second equality holds because the operations used are trace - preserving . in summary, we obtain the following two properties : * for any , lies in the unit sphere in . * for any two strings satisfying , we always have consists of equivalence classes , say ,[x_2],\cdots , [ x_d] ] , we have {n}n^2}= \left(\frac{2}{\delta}\right)^ { 2n^{\frac{9}{4}}},\end{aligned}\ ] ] where the second inequality holds because holds for any and ] , although it was claimed to be ^{\frac{4}{9}}$ ] ( the factor is from the fact the volume of a sphere of a radius in is , instead of , where depends only on ) .we have presented a uniform method for obtaining the lower bound on the size of cl-1qfa , 1qfac and 1qcfa , and this bound shows that these automata can be at most exponentially smaller than dfa .compared with a recent work , our method is much more concise and universal , and it is applicable to the three existing main models of semi - quantum automata . note that although our lower bound is universal , it is not necessarily optimal .for instance , a better lower bound was obtained for 1qfac in .thus , a natural open problem remains either to witness the optimality of our size lower bound for some specific model , or to improve it .the authors are thankful to dr .shenggen zheng for his useful comments . , _quantum computing : 1-way quantum automata _ , in proceedings of the 9th international conference on developments in language theory , lecture notes in comput .2710 , springer - verlag , berlin , 2003 , pp . 1 - 20 . | in the literature , there exist several interesting hybrid models of finite automata which have both quantum and classical states . we call them semi - quantum automata . in this paper , we compare the descriptional power of these models with that of dfa . specifically , we present a uniform method that gives a lower bound on the size of the three existing main models of semi - quantum automata , and this bound shows that semi - quantum automata can be at most exponentially more concise than dfa . compared with a recent work ( bianchi , mereghetti , palano , theoret . comput . sci . , 551(2014 ) , 102 - 115 ) , our method shows the following two advantages : ( i ) our method is much more concise ; and ( ii ) our method is universal , since it is applicable to the three existing main models of semi - quantum automata , instead of only a specific model . |
in this paper we present a reliable but compact approximation to the high temperature cooling coefficient of heavy elements in diffuse plasmas under non - equilibrium conditions . such an approximation is essential for use with multi - dimensional hydrodynamics codes , where the additional burden of following the details of the ionization evolution severely restricts the available spatial resolution . with only a few exceptions , large hydrodynamic models that incorporate radiative cooling characterize the cooling coefficient with a single parameter , the temperature , , where the total emissivity per unit volume is and and are the electron and hydrogen densities . these cooling functions are determined by assuming either that the ionization state at a given temperature is characterized by collisional equilibrium , or that all gas follows a particular pre - calculated ionization history ( shapiro & moore 1976 ; edgar & chevalier 1983 ; sutherland & dopita 1993 ) . because the cooling of a plasma depends on the ionization history of the constituent ions , there can actually be a large range in the value of at a given , depending upon the details of the ionization evolution . we demonstrate , however , that cooling due to trace elements ( those heavier than helium ) can be approximated by following the evolution of just a single additional parameter , the mean charge on the trace ions , . here , z is the element number , z is the ionic charge , is the ( linear ) abundance of element z relative to hydrogen , and is the concentration of a given ion ( the fraction of element with charge ) . because the abundance is dominated by oxygen , ranges from zero to about nine . our method generalizes from a cooling curve , , to a cooling plane . it consequently requires another function that allows us to update the ionization level , . with the mean charge as our ionization level indicator , is the difference between the mean ionization and recombination functions . thus , where and , and where and are the ionization and recombination rate coefficients , respectively , at for stage of element . the corresponding cooling coefficient is , where is the cooling coefficient per ion . to implement this approximation we search for a reasonable description of the nonequilibrium ionization concentrations , , representative of those found at , and assume that under actual nonequilibrium conditions the distribution over ion states will depend less on the details of the past history than on how far the present mean charge differs from the collisional equilibrium value at the current temperature . in this paper , we consider the ionization distributions that arise in isothermal relaxation to equilibrium for each temperature , starting with nearly fully ionized or fully neutral gas . the atomic data used to develop the manifolds come from raymond & smith ( 1977 ) , with updates described in raymond & cox ( 1985 ) , corrections to the oscillator strengths in the cooling transitions of li - like ions ( shull & slavin 1994 ) , and revised dielectronic recombination rates from romanik ( 1988 ) . the abundances are taken from anders & grevesse ( 1989 ) . the effects of charge exchange are not included ; errors introduced by neglecting this are smaller than the uncertainties in the atomic data and elemental abundances used to generate the cooling curve .
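as a schematic of how a hydrodynamics code might consume these quantities ( this sketch is our own , not the paper's implementation ; the functional forms standing in for the tables are made up , and the electron - density scaling of the mean - charge update is our assumption ) : the mean charge is advanced from the tabulated ionization and recombination functions , and the cooling is looked up on the ( temperature , mean charge ) plane .

```python
import numpy as np

def advance_mean_charge(zbar, T, n_e, dt, I_table, R_table):
    """One explicit step of the two-parameter scheme (schematic):
    d(zbar)/dt ~ n_e * [ I(T, zbar) - R(T, zbar) ], with I and R
    interpolated from pre-computed tables."""
    return zbar + n_e * (I_table(T, zbar) - R_table(T, zbar)) * dt

def cooling_rate(T, zbar, n_e, n_H, L_table):
    """Energy loss per unit volume, Lambda = n_e * n_H * L(T, zbar)."""
    return n_e * n_H * L_table(T, zbar)

# placeholder stand-ins for the tabulated I, R and L of the paper (made up)
I_table = lambda T, z: 1e-9 * np.exp(-z)
R_table = lambda T, z: 1e-11 * (1.0 + z)
L_table = lambda T, z: 1e-22 * np.exp(-abs(z - 5.0))

zbar = advance_mean_charge(zbar=3.0, T=1e6, n_e=1.0, dt=1e10,
                           I_table=I_table, R_table=R_table)
print(zbar, cooling_rate(1e6, zbar, n_e=1.0, n_H=1.0, L_table=L_table))
```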
[ secintro ]we have performed the manifold of non - equilibrium isothermal evolutions described below to form the basis for our approximation , producing tables of the cooling coefficient of trace elements and our ionization evolution functions and .gas is initialized with equilibrium ionization appropriate to some initial temperature .its temperature is then suddenly changed to and held fixed as the ionization evolves to equilibrium . by choosing to be low ( e.g. , k ) , and performing the calculation for a dense set of temperatures between and k , the runs sample the full range of conditions possible in under - ionized gases . by repeating the whole set once again with very high ( e.g. k ), conditions representing over - ionized gases are explored .together these cases sample the full range of . a particular moment in an evolutioncan be characterized by , , and the fluence ( ) .the ionization state is known , allowing straightforward evaluation of the mean charge and then using as the index of the time evolution .evolution at constant temperature makes it easy to acquire the tables of , , , and on a fixed grid of and .[ top left panel ] , the mean ionization rate coefficient , [ top right panel ] , the mean recombination rate coefficient , [ bottom right panel ] , and the absolute value of the ionization evolution function , [ bottom left panel ] , as a function of electron temperature and mean charge . the dotted line in each panel shows the mean charge in collisional equilibrium as a function of gas temperature , .the function passes through zero along this line . above the equlibrium curve ,the gas is recombining and d ( , ) is negative ; below , it is ionizing and d ( , ) is positive .these functions were calculated for a grid of isothermal evolutions .gas was either started with ionization fractions corresponding to k or k. it then ionized up or recombined down to at fixed temperature .the resultant rate coefficients were calculated versus for each for the range . ]the cooling function shown in figure 1 shows a huge peak at k and due to collisional excitation of low stages of ionization .it falls rapidly to the left due to boltzmann factors , and gradually to the right due to decreasing excitation cross - sections .it falls at higher as fewer bound electrons with low enough energy are available for collisional excitation . above the equilibrium linethe gas is over - ionized and collisional excitation is difficult , particularly at lower temperatures . at a given ,the mean ionization function , i , behaves much like the collisionally excited cooling , with vagaries at the transition from to 2 , and at where the mean ionization rate is sensitive to the relative proportions of helium- and lithium - like oxygen .the structure of the mean recombination rate , r , is more gradual . in the upper left cornerit is dominated by radiative recombination and the gradual increase with and decrease with temperature are as expected .deviations from that smooth pattern in the rest of the diagram are due to dielectronic recombination . in both i and r , there are distortions in the patterns just above the equilibrium line .these arise because , in relaxing to equilibrium , a recombining plasma goes through a considerable compaction of its ionization distribution .the principal feature of the rate function is that ionization toward equilibrium from below is much faster than recombination toward it from above . 
with increasing distance from equilibrium , the ionization rate increases dramatically , the recombination rate only gradually .to test the accuracy of our approximation for a wide range of situations , a representative set of cases were examined in which single parcels of gas in collisional equilibrium at k were suddenly shock heated and then subjected to varying degrees of expansion .the test situation was modeled on the sedov blast wave solution for an explosion of energy ergs into a homogeneous medium of particle density . by varying the preshock density and the assumed distance of the parcel from the explosion site, we adjusted the post shock temperature and the timescales for depressurization versus radiative cooling .these scenarios test the two behaviors in which drastic departures from equilibrium occur : the rapid ionization of gas passing through a shock front and the rapid adiabatic cooling of an over - ionized plasma .each parcel s evolution can be fully characterized by and .we have approximated , with being the time the parcel is shocked , and determining the rate of decompression .the temperature evolution is then where is the particle density of hydrogen ( ionized and neutral ) .we have carried out each simulation twice , first , solving the full ionization balance evolution exactly , then using our cooling approximation . in the latter ,we solve the ionization evolution for hydrogen , helium , and ( four rate equations ) .the cooling is the sum of that from hydrogen , helium , and our tabulated cooling function . in order to test the widest possible range of conditions , we chose three values of , with three different values of for each .the values of density were spaced to alter the initial ratio of depressurization to radiative cooling from approximately 0.4 at the high density end , to 6 and then 4000 at the lower densities . for the densest runs ,the structure is very similar to that of a steady state radiative shock . for the next lower density cases ,depressurization has strong effects but cooling is still sufficiently rapid that the density eventually rises rather than falling with time . at the lowest densities , the density and temperature fall adiabatically and the ionization level is soon frozen in .( dashed ) , ( dash - dot ) , and ( solid ) ; the central panel shows `` mixed '' evolution with ( dashed ) , ( dash - dot ) , and ( solid ) ; the right panel shows radiative cooling dominated evolution with ( dashed ) , ( dash - dot ) , and ( solid ) .the time evolution for each curve is from right to left .the cooling curve for collisional ionization equilibrium is also shown ( dotted ) .the middle set of panels shows the ratio of the approximation for trace element cooling to the exact value for each of the above cases . for `` mixed '' and `` radiative '' evolution ,the highest temperature cases are sufficiently similar that the curves overlie each other .the lowest set of panels show the ionization evolution for these cases in the t- plane .the solid curves show the exact evolution for the nine cases above , while the dashed lines show the slight departures that result by using the approximation we present .the dotted line in each panel shows the mean charge in collisional equilibrium as a function of gas temperature , , as in figure 1 . ]figure 2 shows the exact trace element cooling coefficient evolutions versus temperature , the ratio of our approximate results to the exact ones , and the evolutions of mean charge , both exact and approximate . 
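the expansion dominated behaviour described above , in which the ionization level freezes in , can be mimicked with the two - parameter scheme itself ; the sketch below is our own toy integration , with made - up stand - ins for the tabulated functions and an assumed decompression law , so the numbers are illustrative only .

```python
import numpy as np

# toy stand-ins for the tabulated functions of the paper (made-up forms)
L_tab = lambda T, z: 1e-22 * np.exp(-abs(z - 5.0))
I_tab = lambda T, z: 1e-9 * np.exp(-z)
R_tab = lambda T, z: 1e-11 * (1.0 + z)

def freeze_in_demo(T0=3e6, n0=1e-2, zbar0=8.0, t_dp=1e11, t_end=1e13,
                   steps=10000, gamma=5.0 / 3.0):
    """Adiabatically expanding parcel with an assumed decompression law
    n = n0 / (1 + t / t_dp): the mean charge is advanced with
    d(zbar)/dt ~ n * (I - R) and lags behind its equilibrium value as the
    density drops, which is the 'freeze-in' behaviour described in the text."""
    dt = t_end / steps
    T, zbar = T0, zbar0
    for k in range(steps):
        t = k * dt
        n = n0 / (1.0 + t / t_dp)
        n_next = n0 / (1.0 + (t + dt) / t_dp)
        T *= (n_next / n) ** (gamma - 1.0)                   # adiabatic cooling
        zbar += n * (I_tab(T, zbar) - R_tab(T, zbar)) * dt   # mean-charge update
    return T, n_next, zbar, L_tab(T, zbar)

print(freeze_in_demo())
```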
in each case, the cooling function begins very high just after the shock , when the gas is still briefly nearly neutral . at the two higher temperatures in the middle and right hand panels , the gas rapidly ionizes up andthe cooling function drops precipitously to very close to the equilibrium value .this ion flash " is so rapid that the integrated cooling during this period is negligible , as evidenced by the nearly vertical tracks .thereafter , these tracks follow the equilibrium curve down to just below , after which their recombination can not keep up with the cooling and the cooling rate falls below the equilibrium value . in the lowest temperature cases ,significant cooling takes place before the cooling function drops through equilibrium , after which it plunges toward the nonequilibrium curve of the initially higher temperature cases .the differences between the radiation dominated and mixed evolutions are almost negligible . in the expansion dominated cases , however , things are quite different .the temperature drops so rapidly that the gas is soon over - ionized ; recombination is slower than the temperature drop , and the ionization level declines only gradually or is frozen in .the middle row of panels shows the ratio of approximate to exact cooling coefficients . in the two right hand panels , the approximation is good to within about 10% except below where it is consistently high by about 40% , and in a peak at where it can be as much as 20 to 30% high . at this temperature ,the approximate evolutions of the two higher temperature cases in the two right lower panels of figure 2 diverge by charge states from the exact results .this is due in part to the excess cooling , but the recombination rate is almost certainly slightly too low as well . for the rapidly depressurizing cases , radiative cooling is inconsequential .nevertheless , the cooling coefficient which differs substantially from the other cases , is still reasonably well fit , except for the intermediate temperature case for which gets frozen at a value of 5.5 . in that case , the value of is well represented by the approximation , but the approximate cooling coefficient is too high .the freezing in of the ionization is extremely well matched , except very late in the highest initial temperature case .we have also compared the time evolution of density , temperature , mean charge , and the cooling coefficients for the exact calculation , our approximation , and the case where the trace elements are assumed to be in collisional ionization equilibrium .while the time evolution of temperature and density of the exact calculations and our approximation are in reasonably good agreement , the case with collisional ionization equilibrium shows significant differences in the time evolution of temperature . 
in addition, the ionization equilibrium assumption provides a poor estimate of the true ionization level of the gas , especially for the expansion dominated cases .we provide a means by which large hydrocodes can include an accurate approximation to the radiative cooling coefficient , one far more responsive to the vagaries of dynamical environments than any single function of temperature .the method also follows the mean ionization level of the gas .both the trace element cooling coefficient , and the rate of change of depend only on the identification of a manifold of representative ionization concentrations found at a particular temperature and mean charge .the ionization concentrations we used came from the isothermal relaxations of both highly over - ionized and highly under - ionized gas . in the test cases presented ,both the cooling coefficient and mean charge evolution are quite accurately approximated .for all cases in which radiative cooling is significant , the approximation never errs by more than 30% .the error is usually less than 10 to 15% , well within the accuracy of the true cooling coefficient , which is limited by uncertainties in atomic data and abundances .the error in the cooling coefficient can be somewhat larger in examples with extreme amounts of depressurization , but never by enough to make the negligible radiative cooling appear to be significant .our approximate cooling coefficients and charge evolution rates , however , showed patterns in their modest errors , patterns which we believe we understand and can eliminate with future work .should a potential user wait for these improvements before beginning to implement this method ?the current model provides an excellent approximation for the cooling coefficient . if one is interested only in dynamics , it is certainly sufficient .implementation of the next generation requires only swapping one set of tables for another .our future work will also examine the possibility that the ion distributions can be used to provide absorption and emission spectra in post processing .it is possible to do this at the two parameter level , but higher order corrections will also be examined .the latter are expected to be two to four times more complex , and will be pursued only if spectral accuracy requires it .an additional consideration for gas with and is the effect of photoionization on the ionization balance and therefore the cooling .our scheme can be easily modified to incorporate photoionization of hydrogen and helium .however , the effects of photoionization of trace elements would require a table of photoionization correction factors for and .this too will be considered in future refinements .we would like to thank john raymond for compiling and providing much of the atomic data that went into this work , and for a very careful reading of the manuscript .we would also like to thank nasa astrophysical theory grant nag5 - 8417 for financial support of this work . andfinally , we would like to acknowledge the valuable contributions of leo krzewina , angela klohs , andrew pawl , and tim freyer who all expended some effort on cracking this problem . | radiative cooling is an important ingredient in hydrodynamical models involving evolution of high temperature plasmas . unfortunately , calculating an accurate cooling coefficient generally requires the solution of over a hundred differential equations to follow the ionization . 
we discuss here a simple 2-parameter approximation for the cooling coefficient due to elements heavier than h and he , for the temperature range . tests of the method show that it successfully tracks the ionization level in severe dynamical environments , and accurately approximates the non - equilibrium cooling coefficient of the trace elements , usually to within 10% in all cases for which cooling is actually important . the error is large only when the temperature is dropping so rapidly due to expansion that radiative cooling is negligible , but even in this situation , the ionization level is followed sufficiently accurately . the current approximation is fully implemented in publicly available fortran code . a second paper will discuss general approaches to approximation methods of this type , other realizations which could be even more accurate , and the potential for extension to calculations of non - equilibrium spectra . |
quantum information processing as a growing exciting field has attracted researchers from different disciplines .it utilizes the laws of quantum mechanical operations to perform exponentially speedy computations . in an open system, one might wonder how to perform such computations in the presence of decoherence and noise that disturb quantum states storing quantum information .ultimately , the goals of quantum error - correcting codes are to protect quantum states and to allow recovery of quantum information processed in computational operations of a quantum computer .henceforth , one seeks to design good quantum codes that can be efficiently utilized for these goals . a well - known approach to derive quantum error - correcting codes from self - orthogonal ( or dual - containing ) classical codesis called stabilizer codes , which were introduced a decade ago .the stabilizer codes inherit some properties of clifford group theory , i.e. , they are stabilized by abelian finite groups . in the seminal paper by calderbank _ at . , various methods of stabilizer code constructions are given , along with their propagation rules and tables of upper bounds on their parameters . in a similar tactic , we also present subsystem code structures by establishing several methods to derive them easily from classical codes .subsystem codes inherit their name from the fact that the quantum codes are decomposed into two systems as explained in section [ sec : background ] .the classes of subsystem codes that we will derive are superior because they can be encoded and decoded using linear shirt - register operations .in addition , some of these classes turned out to be optimal and mds codes .subsystem codes as we prefer to call them were mentioned in the unpublished work by knill , in which he attempted to generalize the theory of quantum error - correcting codes into subsystem codes .such codes with their stabilizer formalism were reintroduced recently .an subsystem code is a -dimensional subspace of that is decomposed into a tensor product of a -dimensional vector space and an -dimensional vector space such that all errors of weight less than can be detected by .the vector spaces and are respectively called the subsystem and the co - subsystem . for some background on subsystem codes see the next section .this paper is structured as follows . in section[ sec : background ] , we present a brief background on subsystem code structures and present the euclidean and hermitian constructions . in section [ sec : cyclicsubsys ] , we derive cyclic subsystem codes and provide two generic methods of their constructions from classical cyclic codes . consequently in section [ sec : dimensions ] , we construct families of subsystem bch and rs codes from classical bch and rs over and defined using their defining sets . in sections [ sec :mdssubsys],[sec : extendshortensubsys],[sec : combinesubsys ] , we establish various methods of subsystem code constructions by extending and shortening the code lengths and combining pairs of known codes , in addition , tables of upper bounds on subsystem code parameters are given . finally , the paper is concluded with a discussion and future research directions in section [ sec : conclusion ]. 
_ notation ._ if is a set , then denotes the cardinality of the set .let be a power of a prime integer .we denote by the finite field with elements .we use the notation to denote the concatenation of two vectors and in .the symplectic weight of is defined as we define for any nonempty subset of . the trace - symplectic product of two vectors and in is defined as where denotes the dot product and denotes the trace from to the subfield .the trace - symplectic dual of a code is defined as we define the euclidean inner product and the euclidean dual of as we also define the hermitian inner product for vectors in as and the hermitian dual of as this section we give a quick overview of subsystem codes .we assume that the reader is familiar the theory of stabilizer codes over finite fields , see and the references therein .let denote a finite field with elements of characteristic .let be a fixed orthonormal basis of with respect to the standard hermitian inner product , called the computational basis . for , we define the unitary operators and on by where is a primitive root of unity and is the trace operation from to .the set forms an orthogonal basis of the operators acting on with respect to the trace inner product , called the error basis .the state space of quantum digits ( or qudits ) is given by .an error basis on is obtained by tensoring operators in ; more explicitly , where for and .the set is not closed under multiplication , whence it is not a group .the group generated by is given by and is called the error group of .the error group is an extraspecial -group .the weight of an error in is given by the number of nonidentity tensor components ; hence , the weight of is given by the symplectic weight .an subsystem code is a subspace of that is decomposed into a tensor product of two vector spaces and of dimension and such that all errors in of weight less than can be detected by .we call the subsystem and the co - subsystem . the information is exclusively encoded in the subsystem .this yields the attractive feature that errors affecting co - subsystem alone can be ignored . a particularly fruitful way to construct subsystem codes proceeds by choosing a normal subgroup of the error group , andthis choice determines the dimensions of subsystem and co - subsystem as well as the error detection and correction capabilities of the subsystem code , see .one can relate the normal subgroup to a classical code , namely modulo the intersection of with the center of yields the classical code .this generalizes the familiar case of stabilizer codes , where is an abelian normal subgroup .it is remarkable that in the case of subsystem codes _ any _ classical additive code can occur .it is most convenient that one can also start with any classical additive code and obtain a subsystem code , as is detailed in the following theorem from : [ th : oqecfq ] let be a classical additive subcode of such that and let denote its subcode . if and , then there exists a subsystem code such that , .the minimum distance of subsystem is given by if ; if .thus , the subsystem can detect all errors in of weight less than , and can correct all errors in of weight .see ( * ? ? ?* theorem 5 ) . 
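Before moving on, a small numerical illustration of the symplectic weight and the (trace-)symplectic form defined in the notation above may be helpful. The sketch restricts attention to a prime field, where the trace to the prime subfield is the identity, and uses one common sign convention inside the form; these simplifications are assumptions of the sketch, not statements from the paper.

```python
# Minimal numeric illustration of the symplectic weight and the symplectic
# form over a prime field F_p (where the trace map is trivial).  The sign
# convention in the form is one common choice and may differ from the paper's.
import numpy as np

def swt(a, b):
    """Symplectic weight of (a|b): number of positions with (a_i, b_i) != (0, 0)."""
    a, b = np.asarray(a), np.asarray(b)
    return int(np.count_nonzero((a != 0) | (b != 0)))

def symplectic_form(a, b, a2, b2, p):
    """<(a|b), (a2|b2)>_s = a.b2 - a2.b (mod p); over F_p the trace is the identity."""
    a, b, a2, b2 = map(np.asarray, (a, b, a2, b2))
    return int((a @ b2 - a2 @ b) % p)

# Example over F_3 with length n = 4:
u = ([1, 0, 2, 0], [0, 0, 1, 1])
v = ([0, 1, 0, 0], [2, 0, 0, 1])
print(swt(*u), swt(*v))              # both vectors have symplectic weight 3
print(symplectic_form(*u, *v, p=3))  # a value of 0 would mean u and v are orthogonal
```

A code's trace-symplectic dual is then the set of vectors whose form with every codeword vanishes, which is the orthogonality notion used throughout the constructions below.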
a subsystem code that is derived with the help of the previous theoremis called a clifford subsystem code .we will assume throughout this paper that all subsystem codes are clifford subsystem codes .in particular , this means that the existence of an subsystem code implies the existence of an additive code with subcode such that , , and . a subsystem code derived from an additive classical code called pure to if there is no element of symplectic weight less than in .a subsystem code is called pure if it is pure to the minimum distance .we require that an subsystem code must be pure .we also use the bracket notation ] subsystem code has gauge qudits , but this terminology is slightly confusing , as the co - subsystem typically does not correspond to a state space of qudits except perhaps in trivial cases .we will avoid this misleading terminology .an subsystem code is also an stabilizer code and vice versa .subsystem codes can be constructed from the classical codes over and .we recall the euclidean and hermitian constructions from , which are easy consequences of the previous theorem .[ lem : css - euclidean - subsys ] if is a -dimensional -linear code of length that has a -dimensional subcode and , then there exists an \ ] ] subsystem code .[ lem : css - hermitina - subsys ] if is a -dimensional -linear code of length that has a -dimensional subcode and , then there exists an \ ] ] subsystem code .in this section we shall derive subsystem codes from classical cyclic codes .we first recall some definitions before embarking on the construction of subsystem codes . for further details concerning cyclic codes see for instance and .let be a positive integer and a finite field with elements such that . recall that a linear code is called _ cyclic _ if and only if in implies that in . for in ]. let denote the vector space isomorphism / ( x^n-1) ] of the least degree such that is called the _ generator polynomial _ of .if is a cyclic code with generator polynomial , then since , there exists a primitive root of unity over ; that is , ] .therefore , the generator polynomial of a cyclic code can be uniquely specified in terms of a subset of such that the set is called the _ defining set _ of the cyclic code ( with respect to the primitive root of unity ) . since is a polynomial in ] .since is a self - orthogonal cyclic code , we have , whence by lemma [ lem : definingsets ] iii ) .observe that if is an element of the set , then is an element of as well .in particular , is a subset of . by definition ,the cyclic code has the defining set ; thus , the dual code has the defining set furthermore , we have therefore , by lemma [ lem : definingsets ] i ) . since and , we have and .thus , by lemma [ lem : css - euclidean - subsys ] there exists an -linear subsystem code with parameters ] subsystem code . since , their defining sets satisfy by lemma [ lem : definingsets ] iii ) . if is an element of , then one easily verifies that is an element of .let . since the cyclic code has the defining set , its dual code has the defining set we notice that thus , by lemma [ lem : definingsets ] i ) . 
since and , we have and .thus , by lemma [ lem : css - hermitina - subsys ] there exists an ] the general principle behind the previous example yields the following simple recipe for the construction of subsystem codes : choose a cyclic code ( such as a bch or reed - solomon code ) with known lower bound on the minimum distance that contains its ( hermitian ) dual code , and use proposition [ lem : cyclic - subsysi ] ( or proposition [ lem : cyclic - subsysii ] ) to derive subsystem codes .this approach allows one to control the minimum distance of the subsystem code , since is guaranteed .another advantage is that one can exploit the cyclic structure in encoding and decoding algorithms .for example , if we start with primitive , narrow - sense bch codes , then proposition [ lem : cyclic - subsysi ] yields the following family of subsystem codes : consider a primitive , narrow - sense bch code of length with over with designed distance in the range .\ ] ] if is a subset of that is a union of cyclotomic cosets and with , where , then there exists an \ ] ] subsystem code . by* theorem 2 ) , a primitive , narrow - sense bch code with designed distance in the range ( [ eq : ddistrange ] ) satisfies . by (* theorem 7 ) , the dimension of is given by , whence .let and respectively denote the defining sets of and .it follows from the definitions that and that is a subset of if denotes the defining set of a cyclic code , then . by proposition [ lem : cyclic - subsysi ], there exists an ] subsystem code with that is pure to , then there exists an -linear ] subsystem code exists , then there exists an -linear ] subsystem code with , then there exists a pure -linear ] subsystem code .however , there does not exist any ] stabilizer code can also be regarded as an ] stabilizer code that is pure to , then there exists for all in the range an ( -linear ) ] subsystem code exists , then a pure ( -linear ) ] .then there exists a subsystem bch code with parameters ] , then there exists a stabilizer code with parameters ] .we can also construct subsystem bch codes from stabilizer codes using the hermitian constructions .[ lem : bchexistfq2 ] if is a power of a prime , is a positive integer , and is an integer in the range }-1 -(q^2 - 2)[m \textup { even}] ] , then exists a classical bch code with parameters ] subsystem code derived from an -linear classical code satisfies the singleton bound , see ( * ? ? ?* theorem 3.6 ) .a subsystem code attaining the singleton bound with equality is called an mds subsystem code .an important consequence of the previous theorems is the following simple observation which yields an easy construction of subsystem codes that are optimal among the -linear clifford subsystem codes .[ th : puremds ] if there exists an -linear ] mds subsystem code for all in the range . an mds stabilizer code must be pure , see ( * ? ? ?* theorem 2 ) or ( * ? ? ?* corollary 60 ) . by corollary [ cor : generic ] ,a pure -linear ] subsystem code that is pure to for any in the range .since the stabilizer code is mds , we have . bythe singleton bound , the parameters of the resulting -linear ] mds subsystem code exists for all , , and such that , , and . b. an -linear pure ] mds subsystem code exists for all and such that and .d. an -linear pure ] mds subsystem code exists for all and in the range and . f. an -linear pure ] stabilizer codes for all and such that and .the claim follows from theorem [ th : puremds ] .+ by ( * ? ? 
?* theorem 5 ) , there exist a ] and ] and ] subsystem codes for all prime powers , ] stabilizer code . if the syndrome calculation is simpler , then such subsystem codes could be of practical value . the subsystem codes given in ii)-vi ) of the previous corollary are constructively established . the subsystem codes in ii )are derived from reed - muller codes , and in iii)-vi ) from reed - solomon codes .there exists an overlap between the parameters given in ii ) and in iv ) , but we list here both , since each code construction has its own merits . by theorem[ th : fqshrinkr ] , pure mds subsystem codes can always be derived from mds stabilizer codes , see table [ table : optimalmds ] .therefore , one can derive in fact all possible parameter sets of pure mds subsystem codes with the help of theorem [ th : puremds ] . in the case of stabilizer codes, all mds codes must be pure .for subsystem codes this is not true , as the ] mds subsystem codes with is a particularly interesting challenge . &+ ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + ] + + * punctured code + extended code recall that a pure subsystem code is called perfect if and only if it attains the hamming bound with equality .we conclude this section with the following consequence of theorem [ th : puremds ] : if there exists an -linear pure ] perfect subsystem code for all in the range .in section [ sec : dimensions ] , we showed how one can derive new subsystem codes from known ones by modifying the dimension of the subsystem and co - subsystem . in this section ,we derive new subsystem codes from known ones by extending and shortening the length of the code .[ lemma_n+1k ] if there exists an clifford subsystem code with , then there exists an subsystem code that is pure to 1 .we first note that for any additive subcode , we can define an additive code by we have . furthermore , if , then is contained in for all in , whence . by comparing cardinalitieswe find that equality must hold ; in other words , we have by theorem [ th : oqecfq ] , there are two additive codes and associated with an clifford subsystem code such that and we can derive from the code two new additive codes of length over , namely and . the codes and determine a clifford subsystem code .since we have .furthermore , we have .it follows from theorem [ th : oqecfq ] that , , .since contains a vector of weight , the resulting subsystem code is pure to 1 .if there exists an ] subsystem code that is pure to 1. we can also shorten the length of a subsystem code in a simple way as shown in the following theorem .[ lem : n-1k+1rule ] if a pure subsystem code exists , then there exists a pure subsystem code . by ( * ?* lemma 10 ) , the existence of a pure clifford subsystem code with parameters implies the existence of a pure stabilizer code .it follows from ( * ? ? ?* lemma 70 ) that there exist a pure stabilizer code , which can be regarded as a pure subsystem code .thus , there exists a pure subsystem code by theorem [ th : shrinkr ] , which proves the claim . 
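The constructions in this and the preceding sections are easy to explore computationally. As one small aside relating back to the cyclic and BCH constructions, the sketch below computes q-ary cyclotomic cosets modulo n and tests the standard defining-set criterion (the defining set Z is disjoint from -Z) under which a cyclic code contains its Euclidean dual; the Hermitian analogue over the quadratic extension field replaces -Z by -qZ. This mirrors, but does not reproduce, the defining-set lemma invoked in the proofs, and the chosen parameters are illustrative.

```python
# Illustrative check for the cyclic/BCH constructions: compute q-ary cyclotomic
# cosets mod n and test whether a narrow-sense BCH defining set Z satisfies
# Z ∩ (-Z) = ∅, the usual criterion for the cyclic code to contain its
# Euclidean dual (for the Hermitian construction over F_{q^2} one tests -qZ).
def cyclotomic_coset(s, q, n):
    coset, x = set(), s % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return coset

def bch_defining_set(delta, q, n):
    """Union of the cosets of 1, ..., delta-1 (narrow-sense, designed distance delta)."""
    Z = set()
    for s in range(1, delta):
        Z |= cyclotomic_coset(s, q, n)
    return Z

def contains_euclidean_dual(Z, n):
    return all((-z) % n not in Z for z in Z)

q, n = 2, 31                          # e.g. binary, primitive length 2^5 - 1
for delta in range(2, 9):
    Z = bch_defining_set(delta, q, n)
    print(delta, len(Z), contains_euclidean_dual(Z, n))
# The dimension of the BCH code is n - |Z|; designed distances for which the
# last column is True are the ones admissible in the Euclidean construction.
```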
in bracketnotation , the previous theorem states that the existence of a pure ] subsystem code .in this section , we show how one can obtain a new subsystem code by combining two given subsystem codes in various ways .[ thm : twocodes_n1k1r1d1n2k2r2d2 ] if there exists a pure ] subsystem code such that , then there exist subsystem codes with parameters \ ] ] for all in the range , where the minimum distance .since there exist pure ] subsystem codes with , it follows from theorem [ th : shrinkr ] that there exist stabilizer codes with the parameters ] such that .therefore , there exists an ] subsystem codes for all in the range .[ lem : twocodes_nk1r1s1k2r2d2 ] let and be two pure subsystem codes with parameters ] , respectively .if , then there exists pure subsystem codes with parameters \ ] ] for all in the range , where the minimum distance . by assumption, there exists a pure ] stabilizer code by theorem [ th : shrinkr ] , where . by (* lemma 74 ) , there exists a pure stabilizer code with parameters ] for all in the range , which proves the claim . further analysis of propagation rules of subsystem code constructions , tables of upper and lower bounds , and short subsystem codes are presented in .subsystem codes are among the most versatile tools in quantum error - correction , since they allow one to combine the passive error - correction found in decoherence free subspaces and noiseless subsystems with the active error - control methods of quantum error - correcting codes . in this paperwe demonstrate several methods of subsystem code constructions over binary and nonbinary fields .the subclass of clifford subsystem codes that was studied in this paper is of particular interest because of the close connection to classical error - correcting codes . as theorem[ th : oqecfq ] shows , one can derive from each additive code over an clifford subsystem code .this offers more flexibility than the slightly rigid framework of stabilizer codes .we showed that any -linear mds stabilizer code yields a series of pure -linear mds subsystem codes .these codes are known to be optimal among the -linear clifford subsystem codes .we conjecture that the singleton bound holds in general for subsystem codes .there is quite some evidence for this fact , as pure clifford subsystem codes and -linear clifford subsystem codes are known to obey this bound .we have established a number of subsystem code constructions .in particular , we have shown how one can derive subsystem codes from stabilizer codes . in combination with the propagation rulesthat we have derived , one can easily create tables with the best known subsystem codes .further propagation rules and examples of such tables are given in , and will appear in an expanded version of this paper .this research was supported by nsf grant ccf-0622201 and nsf career award ccf-0347310 .part of this paper is appeared in proceedings of 2008 ieee international symposium on information theory , isit08 , toronto , ca , july 2008 .s. a. aly , a. klappenecker , and p. k. sarvepalli .primitive quatnum bch codes over finite fields . in _ proc .2006 ieee international symposium on information theory , seattle , usa _ , pages 1114 1118 , july 2006 .let be a basis of .then is an element of ; hence , . let it follows from this definition that and that . 
furthermore , if and are elements of with in , then on the right hand side , all terms but the last are in ; hence we must have , which shows that , whence . expanding in the basis yields a code , and we must have equality by a dimension argument . since the basis expansion is isometric , it follows that the -linearity of is a direct consequence of the definition of . | subsystem codes protect quantum information by encoding it in a tensor factor of a subspace of the physical state space . subsystem codes generalize all major quantum error protection schemes , and therefore are especially versatile . this paper introduces numerous constructions of subsystem codes . it is shown how one can derive subsystem codes from classical cyclic codes . methods to trade the dimensions of subsystem and co - subsystem are introduced that maintain or improve the minimum distance . as a consequence , many optimal subsystem codes are obtained . furthermore , it is shown how given subsystem codes can be extended , shortened , or combined to yield new subsystem codes . these subsystem code constructions are used to derive tables of upper and lower bounds on the subsystem code parameters . |
spinal cord injury , brain computer interface , virtual reality environment , electroencephalogram , kinesthetic motor imagery , gait , ambulation , locomotion .spinal cord injury ( sci ) can leave the affected individuals with paraparesis or paraplegia , thus rendering them unable to ambulate .since there are currently no restorative treatments for this population , technological approaches have been sought to substitute for the lost motor functions .examples include robotic exoskeletons , functional electrical stimulation ( fes ) systems , and spinal cord stimulators .however , these systems lack the able - body - like supraspinal control , and so the ambulation function of these devices is controlled manually .in addition to being unintuitive , these systems may be costly and cumbersome to use , and therefore have not yet garnered popular appeal and adoption among potential sci users . due to these limitations ,wheelchairs remain the primary means of mobility after sci .unfortunately , the extended reliance on wheelchairs typically lead to a wide variety of comorbidities that constitute the bulk of chronic sci - related medical care costs .consequently , to address the above issues associated with the treatment of paraparesis and paraplegia after sci , novel brain - controlled prostheses are currently being pursued .recent results by our group suggest that an electroencephalogram ( eeg ) based brain - computer interface ( bci ) controlled lower extremity prosthesis may be feasible .more specifically , these studies demonstrated the successful implementation of a bci system that controls the ambulation of an avatar ( a stand - in for a lower extremity prosthesis ) within a virtual reality environment ( vre ) . by using a data - driven machine learning approach to decode the users kinesthetic motor imageries ( kmis ) , this bci - controlled walking simulator enabled a small group of subjects ( one with paraplegia due to sci ) to achieve intuitive and purposeful bci control after a short training session .while the single sci subject outperformed most able - bodied subjects in this study , the operability of this system has not yet been tested in a sci population .the successful implementation of the bci - controlled walking simulator in a population of subjects with sci will establish the feasibility of future bci - controlled lower extremity prostheses and will represent an important step toward developing novel gait rehabilitation strategies for sci . extending the application of the bci - controlled walking simulator to a sci populationis faced with several problems .first , cortical reorganization , which is common after sci , may cause the cortical representation of walking kmi to vary vastly from one sci subject to another .second , this representation may dramatically evolve over time when sci subjects are engaged in kmi training .finally , subjects with sci may interpret walking kmi either as motor imagery or as attempted walking , which in turn may result in multiple patterns of cortical activation across these individuals .therefore , intuitive bci operation under these conditions requires a system that can accommodate for the variations of brain physiology across sci individuals , time , and strategies . 
to address these problems, we used a data - driven machine learning method to decode walking kmis in a small population of sci individuals .this approach enabled 5 subjects to achieve intuitive and self - paced operation of the bci - controlled walking simulator after only minimal training .furthermore , they were able to maintain this level of control over the course of several weeks .the goal of this study was to determine if individuals with complete motor sci can use intuitive control strategies to purposefully operate a bci - controlled walking simulator . to achieve this goal ,5 subjects with sci underwent a short training procedure where they performed alternating epochs of idling and walking kmi while their eeg were recorded .these training eeg data were then analyzed to build decoding models for subsequent online bci operation . to ascertain purposeful bci control ,subjects then performed 5 sessions of an online bci goal - oriented virtual walking task .this entire procedure was performed 5 times over the course of several weeks to determine if subjects performances improved with additional practice .this study was approved by the university of california , irvine institutional review board .four subjects with paraplegia and one with tetraplegia due to sci were recruited via physician referral from the long beach veterans affairs spinal cord injury center and other sci outreach programs . the subjects ( see table 1 ) gave their informed consent to participate in the study .note that all subjects were bci nave and most of them performed the experimental procedures at a rate of once per week ..list of participants with demographic data and level of sci .asia = american spinal injury association impairment scale . [ cols="^,^,^,<",options="header " , ]the results of this study show that subjects with paraplegia or tetraplegia due to sci can operate a non - invasive bci - controlled avatar within a vre to accomplish a goal - oriented ambulation task .all subjects gained purposeful online bci control on the first day after undergoing a 10-min training session , with the exception of subject 2 , who did not attain control until day 2 .in addition , bci control was maintained and continued to improve over the course of the study .these findings suggest that a bci - controlled lower extremity prosthesis for either gait rehabilitation or restoration may be feasible .the offline classification accuracies varied across subjects and experimental days , but were significantly above the chance level performance ( 50% ) . similar to able - bodied subjects engaged in the same task , a short 10-min training session was sufficient for the data - driven machine learning algorithm to generate accurate subject - specific eeg decoding models for this population .the topographic maps of these models ( e.g. fig . 3 and fig .4 ) showed that the spatio - spectral features underlying the differences between walking and idling kmis varied across subjects and evolved over experimental days .the differences in the brain areas and eeg frequencies across subjects may be due to variations in cortical reorganization following sci , or due to differences in imageries employed by each subject ( e.g. the kmi of walking instructions may have been interpreted differently by each subject ) . 
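As a brief aside, the overall shape of such a data-driven decoding pipeline can be sketched in a few lines. The study's actual method uses FFT band-power features followed by classwise PCA (CPCA) and an information-theoretic discriminant (AIDA); the sketch below substitutes ordinary PCA and Fisher LDA from scikit-learn as simplified stand-ins, and the sampling rate, channel count, epoch length, and frequency band are illustrative assumptions rather than the study's settings.

```python
# Simplified stand-in for the data-driven idling-vs-walking-KMI decoder
# (the study uses CPCA + AIDA; here ordinary PCA + LDA are substituted).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

fs = 256                                       # assumed sampling rate, Hz

def band_power_features(epochs, lo=8.0, hi=30.0):
    """Log FFT band power per channel for each (n_channels, n_samples) epoch."""
    feats = []
    for ep in epochs:
        freqs = np.fft.rfftfreq(ep.shape[-1], d=1.0 / fs)
        psd = np.abs(np.fft.rfft(ep, axis=-1)) ** 2
        band = (freqs >= lo) & (freqs <= hi)
        feats.append(np.log(psd[:, band].mean(axis=-1) + 1e-12))
    return np.vstack(feats)

# epochs_idle, epochs_walk: EEG segments cut from the alternating idling /
# walking-KMI training blocks (random placeholders here).
rng = np.random.default_rng(0)
epochs_idle = [rng.standard_normal((32, 2 * fs)) for _ in range(60)]
epochs_walk = [rng.standard_normal((32, 2 * fs)) for _ in range(60)]

X = band_power_features(epochs_idle + epochs_walk)
y = np.r_[np.zeros(len(epochs_idle)), np.ones(len(epochs_walk))]

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=10).mean()  # with real EEG this mirrors the offline CV accuracy
print(f"10-fold CV accuracy: {acc:.2f}")
```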
nevertheless , all subjects showed activation of mid - frontal areas , which likely overlay the pre - motor and supplementary motor areas , as well as the pre - frontal cortex .their activation during walking kmi is consistent with functional imaging findings , such as those in .another common pattern across subjects was the presence of activity near bilateral , lateral central - parietal electrodes , which likely represents the arm sensorimotor areas .a similar pattern was observed in able - bodied individuals , and is hypothesized to originate from arm swing imagery .finally , the evolution of the feature extraction maps over the 5 experimental days may be indicative of a neuro - plasticity process associated with practice and learning .the spatio - temporal variations of walking kmi activation patterns demonstrate the necessity of a data - driven machine learning approach for rapid acquisition of intuitive bci control .first , our approach accommodates for the variations of these activity patterns across subjects , as well as their evolution over time .second , it facilitates rapid acquisition of online bci control , presumably by enabling subjects to utilize intuitive mental strategies .the user training time necessary to acquire purposeful bci control in this study is significantly shorter than those of other bci studies where users must learn a completely new cognitive skill to modulate pre - selected eeg features , such as the -rhythm over lateral central areas .finally , this approach carries a significant potential value in the future practical implementation of bci - prostheses , as it may drastically reduce the training time needed to attain purposeful and useful bci control from a timescale of weeks to months to one of minutes to days .this in turn may significantly reduce the cost of training users to operate future bci - prostheses .the results presented in table 4 show that once purposeful control was achieved , it was maintained in 96% of all online sessions . in addition , 3 out of 5 subjects achieved successful stop scores similar to those obtained using a manually controlled joystick .even though no subjects were able to complete the course as fast as manual control , it is encouraging that the average composite scores increased significantly over the course of the study .furthermore , the average composite scores ( table 5 ) improved over time , with the best scores approaching 100% for subjects 3 and 5 by the end of the study . therefore , not only was online control significantly different from random walk , but it was also meaningful . given this trend ,additional training and practice may help further improve performance , possibly to the point of approaching that of the manually controlled joystick . 
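The purposefulness comparison against chance-level control can likewise be illustrated with a toy Monte Carlo. The sketch below is not the study's simulation: the course geometry, dwell-time rule for a "successful stop," and the random controller's switching probability are placeholders, and only the shape of the test (an empirical p-value of the observed score under a random-walk controller) follows the text.

```python
# Toy Monte Carlo "random walk" baseline for the goal-oriented stopping task.
# All task parameters below are placeholders; only the empirical p-value
# construction mirrors the purposefulness test described in the text.
import numpy as np

def simulate_random_walk_run(n_stops=10, course_len=100.0, t_max=1200,
                             p_switch=0.05, speed=0.5, dwell=4, rng=None):
    rng = rng or np.random.default_rng()
    pos, state, stops_made = 0.0, 0, 0
    next_stop, dwell_count = course_len / n_stops, 0
    for _ in range(t_max):
        if rng.random() < p_switch:           # decoded state flips at random
            state = 1 - state
        pos += speed * state                  # avatar walks only in state 1
        if abs(pos - next_stop) < 1.0 and state == 0:
            dwell_count += 1
            if dwell_count >= dwell:          # idled long enough: successful stop
                stops_made += 1
                next_stop += course_len / n_stops
                dwell_count = 0
        else:
            dwell_count = 0
        if stops_made == n_stops:
            break
    return stops_made

rng = np.random.default_rng(1)
null_scores = np.array([simulate_random_walk_run(rng=rng) for _ in range(10000)])
observed = 9                                  # e.g. a subject's successful stops
p_value = (null_scores >= observed).mean()    # empirical P(chance score >= observed)
print(f"empirical p-value: {p_value:.4f}")
```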
in conclusion, the high level of online control achieved by sci subjects over the course of 5 experimental days suggests that it may be feasible to apply this bci system to control a lower extremity prosthesis for ambulation after sci .furthermore , the proposed bci - vre system may serve as a training platform for operation of bci lower extremity prostheses once they become widely available .this study shows that sci subjects can purposefully operate a self - paced bci - vre system in real time , and that the current bci design approach is able to overcome the potential problems associated with variations in neurophysiology due to cortical reorganization after sci , learning and plasticity processes , and differences in kmi strategies .furthermore , the system satisfies the requirements of an ideal bci - lower extremity prosthesis set forth in , namely : intuitiveness , robustness , and short training time .the operation of the system is intuitive as it enabled subjects to use walking kmi to control the ambulation of the avatar .the system is robust in that the data - driven decoding methodology was able to successfully accommodate for subject - to - subject and day - to - day variations in the neurophysiological underpinnings of idling and walking kmi behaviors .in addition , subjects were able to maintain purposeful online control over the course of several weeks , further underscoring the system s robustness over time .finally , the system required only a short training time , as bci control was generally attained after only a 10-min training data collection procedure followed by a 2-min calibration session on the 1 experimental day ( for 4 out of the 5 subjects ) .the successful outcome of this study indicates that bci - controlled lower extremity prostheses for gait rehabilitation or restoration after sci may be feasible in the future .sci , spinal cord injury ; bci , brain - computer interface ; vre , virtual reality environment ; kmi , kinesthetic motor imagery ; eeg , electroencephalogram ; fes , functional electrical stimulation ; emg , electromyogram ; fft , fast fourier transform ; cpca , classwise principal component analysis ; lda , fisher s linear discriminant analysis ; aida , approximate information discriminant analysis ; 1d , one - dimensional ; cv , cross - validation ; npc , non - player character ; mc , monte carlo ; pdf , probability density function ; asia , american spinal injury association .cek received salary from hrl laboratories , llc ( malibu , ca ) .the remaining authors declare that they have no competing interests .cek carried out the experiments , collected and analyzed the data , and wrote the article .ptw programmed the brain - computer interface software , assisted with carrying out the experiments , collecting the data , and analyzing the data .lac contributed to conception of the study .ahd conceived and designed the study , implemented the vre , recruited and consented subjects , supervised the experiments , and co - wrote and proofread the article .zn conceived and designed the study , designed the signal processing , pattern recognition , and classification algorithms , and co - wrote and proofead the article .all authors read and approved the final manuscript .this study was funded by the roman reed spinal cord injury research fund of california ( rr 08 - 258 and rr 10 - 281 ) , and was partially funded by the long beach va advanced fellowship research award . 
| [ [ background ] ] background : + + + + + + + + + + + spinal cord injury ( sci ) can leave the affected individuals with paraparesis or paraplegia , thus rendering them unable to ambulate . since there are currently no restorative treatments for this population , novel approaches such as brain - controlled prostheses have been sought . our recent studies show that a brain - computer interface ( bci ) can be used to control ambulation within a virtual reality environment ( vre ) , suggesting that a bci - controlled lower extremity prosthesis for ambulation may be feasible . however , the operability of our bci has not yet been tested in a sci population . [ [ methods ] ] methods : + + + + + + + + five subjects with paraplegia or tetraplegia due to sci underwent a 10-min training session in which they alternated between kinesthetic motor imagery ( kmi ) of idling and walking while their electroencephalogram ( eeg ) were recorded . subjects then performed a goal - oriented online task , where they utilized kmi to control the linear ambulation of an avatar while making 10 sequential stops at designated points within the vre . multiple online trials were performed in a single day , and this procedure was repeated across 5 experimental days . [ [ results ] ] results : + + + + + + + + classification accuracy of idling and walking was estimated offline and ranged from 60.5% ( p=0.0176 ) to 92.3% ( p=1.36 ) across subjects and days . offline analysis revealed that the activation of mid - frontal areas mostly in the and low bands was the most consistent feature for differentiating between idling and walking kmi . in the online task , subjects achieved an average performance of 7.4.3 successful stops in 273 sec . these performances were purposeful , i.e. significantly different from the random walk monte carlo simulations ( p.01 ) , and all but one subject achieved purposeful control within the first day of the experiments . finally , all subjects were able to maintain purposeful control throughout the study , and their online performances improved over time . [ [ conclusions ] ] conclusions : + + + + + + + + + + + + the results of this study demonstrate that sci subjects can purposefully operate a self - paced bci walking simulator to complete a goal - oriented ambulation task . the operation of the proposed bci system requires short training , is intuitive , and robust against subject - to - subject and day - to - day neurophysiological variations . these findings indicate that bci - controlled lower extremity prostheses for gait rehabilitation or restoration after sci may be feasible in the future . [ 1995/12/01 ] |
in typical phase i studies in the development of relatively benign drugs , the drug is initiated at low doses and subsequently escalated to show safety at a level where some positive response occurs , and healthy volunteers are used as study subjects .this paradigm does not work for diseases like cancer , for which a non - negligible probability of severe toxic reaction has to be accepted to give the patient some chance of a favorable response to the treatment .moreover , in many such situations , the benefits of a new therapy may not be known for a long time after enrollment , but toxicities manifest themselves in a relatively short time period .therefore , patients ( rather than healthy volunteers ) are used as study subjects , and given the hoped - for ( rather than observed ) benefit for them , one aims at an acceptable level of toxic response in determining the dose .current designs for phase i cancer trials , which are sequential in nature , are an ad hoc attempt to reconcile the objective of finding a _ maximum tolerated dose _ ( mtd ) with stringent ethical demands for protecting the study subjects from toxicities in excess of what they can tolerate .it treats groups of three patients sequentially , starting with the smallest of an ordered set of doses .escalation occurs if no toxicity is observed in all three patients ; otherwise an additional three patients are treated at the same dose level .if only one of the six patients has toxicity , escalation again continues ; otherwise the trial stops , with the lower dose declared as mtd . as pointed out by storer ( ) , these designs , commonly referred to as 3-plus-3 designs , are difficult to analyze , since even a strict quantitative definition of mtd is lacking , `` although it should be taken to mean some percentile of a tolerance distribution with respect to some objective definition of clinical toxicity , '' and the `` implicitly intended '' percentile seems to be the 33rd percentile ( related to 2 ) .storer ( ) also considered three other `` up - and - down '' sequential designs for quantile estimation in the bioassay literature and performed simulationstudies of their performance in estimating the 33rd percentile .subsequent simulation studies byoquigley et al .( ) showed the performance of these designs to be `` dismal , '' for which they provided the following explanation : `` not only do ( these designs ) not make efficient use of accumulated data , they make use of no such data at all , beyond say the previous three , or sometimes six , responses . ''they proposed an alternative design , called the _ continual reassessment method _ ( crm ) , which uses parametric modeling of the dose response relationship and a bayesian approach to estimate the mtd or , more generally , the dose level such that the probability of a toxic event is ( in the case of mtd ) . letting and assuming the usual logistic model for the probability of a toxic response at dose level , the problem of optimal choice of dose levels to estimate the mtd seems to be covered by the theory of nonlinear designs .a well - known difficulty in nonlinear design theory is that the optimal design for parameter estimation involves the unknown parameter vector . 
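As an aside, the 3-plus-3 rule described at the start of this section is simple to simulate directly, which is how its operating characteristics are typically studied. The sketch below implements exactly the escalation and stopping rule quoted above; the behavior when the top dose level is reached without stopping is not specified in the text, so declaring the top level the MTD in that case is an assumption of the sketch, as are the example toxicity probabilities.

```python
# Direct simulation of the 3-plus-3 rule as described in the text: escalate on
# 0/3 toxicities; otherwise expand the cohort to six and escalate only if
# exactly 1/6 show toxicity; else stop and declare the next lower level the MTD.
# Handling of the top level is unspecified in the text and assumed here.
import numpy as np

def three_plus_three(tox_probs, rng=None):
    """tox_probs[i] = true DLT probability at dose level i (lowest first)."""
    rng = rng or np.random.default_rng()
    level = 0
    while True:
        tox = rng.binomial(3, tox_probs[level])
        if tox > 0:
            tox += rng.binomial(3, tox_probs[level])    # treat three more at the same level
            if tox != 1:
                return level - 1                        # MTD = next lower level
        if level == len(tox_probs) - 1:
            return level                                # assumption: top level reached
        level += 1

# Example: five levels with true toxicity probabilities straddling p = 1/3.
tox_probs = [0.05, 0.10, 0.20, 0.35, 0.55]
rng = np.random.default_rng(0)
declared = [three_plus_three(tox_probs, rng) for _ in range(5000)]
counts = np.bincount(np.array(declared) + 1, minlength=len(tox_probs) + 1)
print(counts / len(declared))   # distribution of the declared MTD level
# (index 0 corresponds to stopping at the lowest level, i.e. no tolerable dose)
```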
to circumvent the difficulty, it has been proposed that the design be constructed sequentially , using observations made to date to estimate by maximum likelihood and choosing the next design point by using the mle to replace the unknown parameter value in the optimal design ; see fedorov ( ) .if is known , then a target probability of response is attained at the level that solves , that is , /\beta ] of possible dose values believed to contain the mtd , with believed to be a conservative starting value . rather than directly specifying the prior distribution for the unknown parameter of the working model to be used in the second stage , which may be hard for investigators to do in practice , an upper bound on the probability of toxicity at can be elicited from investigators ; uniform distributions over ] are then taken as the prior distributions for the mtd and , respectively .let denote the information set generated by the first doses and responses , that is , by .letting denote the mtd , it is convenient to transform from the unknown parameters in the two - parameter logistic model ( [ 1 ] ) to via the formulas \label{3}\beta&=&\frac{\log(1/\rho-1)-\log(1/p-1)}{\eta - x_{\min}},\vspace*{-2pt}\end{aligned}\ ] ] giving & & { } -(x - x_{\min } ) \log(1/p-1)\bigr)\nonumber \\[-8pt]\\[-8pt ] & & /(\eta - x_{\min})\nonumber \\[-2pt ] & = & \psi(x,\rho,\eta).\nonumber\vspace*{-2pt}\end{aligned}\ ] ] assuming that the joint prior distribution of has density with support on \times [ x_{\min},x_{\max}] ] , in which where and =\int_{x_{\min}}^{x_{\max } } h(x,\eta ) f(\eta|\mathcal{f}_k)\,d\eta.\ ] ] since the information about the dose toxicity relationship gained from and the response affects the ability to safely and effectively dose the other patients , one potential weakness of these myopic policies is that they may be inadequate in generating information on for treating the rest of the patients , as well as the post - experimental estimate of the mtd for subsequent phases . to incorporate these considerations in aphase i trial, should be chosen sequentially in such a way as to minimize the _ global risk _ ,\ ] ] in which the expectation is taken over the joint distribution of .note that ( [ 8 ] ) measures the effect of the dose on the patient through , its effect on future patients in the trial through , and its effect on the post - trial estimate through .it can therefore be used to address the dilemma between safe treatment of current patients in the study and efficient experimentation to gather information about for future patients . as noted in section [ sec1 ] , lai and robbins ( )have introduced a similar global risk function to address the dilemma between information and control in the choice of in the linear regression model so that the outputs , , are as close as possible to some target value .specifically , they consider ( [ 8 ] ) with and .dynamic programming is a standard approach to a stochastic optimization problem of the form ( [ 8 ] ) .define , \qquad 0\leq k < n-1 , \cr e[h(x,\eta ) \cr { } \quad + g(\hat{\eta}(x_1,\ldots , x_{n-1},x),\eta ) |\mathcal{f}_{n-1 } ] , \cr \qquad k = n-1 . }\hspace*{-20pt } % \ ] ] to minimize ( [ 8 ] ) , dynamic programming solves for the optimal design by backward induction that determines by minimizing \ ] ] after determining the future dose levels . 
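To fix ideas before discussing the backward induction further, the posterior over the reparametrized pair (rho, eta) and the resulting myopic EWOC dose can be computed on a grid. In the sketch below the dose-toxicity curve is written in its usual EWOC form, with the logit of F interpolating between logit(rho) at the minimum dose and logit(p) at eta, and the EWOC dose is taken as the alpha-quantile of the marginal posterior of the MTD eta, which is the minimizer of the asymmetric overdose-control loss; the dose interval matches the one used later in the simulation study, while the remaining numerical settings (alpha, p, q, grid sizes) are illustrative.

```python
# Grid-based posterior over (rho, eta) under a uniform prior, and the myopic
# EWOC dose as the alpha-quantile of the marginal posterior of the MTD eta.
# Numerical settings are illustrative.
import numpy as np

x_min, x_max = 140.0, 425.0          # dose interval, as in the simulation study
p, alpha, q = 1.0 / 3.0, 0.25, 0.2   # target DLT rate, feasibility bound, prior bound on rho

rho_grid = np.linspace(1e-4, q, 80)
eta_grid = np.linspace(x_min + 1.0, x_max, 200)
R, E = np.meshgrid(rho_grid, eta_grid, indexing="ij")

def logit(u):
    return np.log(u / (1.0 - u))

def tox_prob(x, rho, eta):
    """F(x; rho, eta): logistic in x passing through (x_min, rho) and (eta, p)."""
    lg = ((eta - x) * logit(rho) + (x - x_min) * logit(p)) / (eta - x_min)
    return 1.0 / (1.0 + np.exp(-lg))

def posterior(doses, tox):
    """Normalised posterior on the (rho, eta) grid under a uniform prior."""
    post = np.ones_like(R)
    for x, y in zip(doses, tox):
        F = tox_prob(x, R, E)
        post *= F if y else (1.0 - F)
    return post / post.sum()

def ewoc_dose(doses, tox):
    eta_marg = posterior(doses, tox).sum(axis=0)      # marginal posterior of eta
    cdf = np.cumsum(eta_marg)
    return eta_grid[np.searchsorted(cdf, alpha)]      # alpha-quantile of eta

# Example: first two patients dosed at 140 and 160 without toxicity.
print(ewoc_dose([140.0, 160.0], [0, 0]))
```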
note that ( [ 10 ] ) involves computing the conditional expectation of given the dose at stage and the information set , and that is determined by minimizing such conditional expectation over all . for , since is a complicated nonlinear function of the past observations and of that are not yet observed , evaluation of the aforementioned conditional expectation is a formidable task . to overcome this difficulty, we use recent advances in approximate dynamic programming , which we first review and then extend and modify for the problem of minimizing the global risk ( [ 8 ] ) . to begin with , consider the problem of minimizing ( [ 8 ] ) with and in the linear regression model with i.i.d .normal errors having mean . assuming a normal prior distribution of , the posterior distribution of given is also bivariate normal with parameters , in which denotes conditional expectation given .these conditional moments have explicit recursive formulas ; see section 4 of han , lai and spivakovsky ( ) .the myopic policy that chooses at stage to minimize ] , bayard ( ) showed that , regardless of the base policy , rolling out times yields the optimal design and that rolling out always improves the base design , that is , that \\[-8pt ] & \ge & r\bigl(\hat{\mathbf{x}}^{(n)}\bigr)=r(\mathbf{x}^*)\nonumber\end{aligned}\ ] ] for any policy , where denotes the optimal policy . for the global risk function ( [ 8 ] ) associated with phase i designs , with given by ( [ 7 ] ), one can use the myopic design ewoc or crm as the base design in the rollout procedure .in contrast with the explicit formula ( [ 11 ] ) for the case of a linear regression model with normal errors , the posterior distribution with density function ( [ 5 ] ) does not have finite - dimensional sufficient statistics and the myopic design involves ( a ) bivariate numerical integration to evaluate \ ] ] for , and ( b ) minimization of the conditional expectation over .the simulation studies in [ sec:2stage ] , in which the rollout is implemented with ewoc as the base design , show substantial improvements of the rollout over ewoc and crm .although ( [ 13 ] ) says that rolling out a base design can improve it and rolling out times yields the dynamic programming solution , in practice , it is difficult to use a rollout ( which is defined by a backward induction algorithm that involves monte carlo simulations followed by numerical optimization at every stage ) as the base policy for another rollout . to overcome this difficulty , we need a tractable representation of successive rollouts , which we develop by using other ideas from approximate dynamic programming ( adp ) . 
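A minimal version of the one-step rollout of EWOC can be sketched by reusing the grid-posterior helpers above: for each candidate dose for the current patient, the remainder of the trial is simulated under the base policy for parameter values drawn from the current posterior, and the dose minimizing the average accumulated risk is selected. The per-patient loss and the terminal estimation loss below are simplified stand-ins for the terms in the global risk (8), and the Monte Carlo sizes are kept deliberately small; this is an illustration of the rollout idea, not the authors' implementation.

```python
# One-step rollout of the EWOC base design (reuses posterior, ewoc_dose,
# tox_prob, R, E, eta_grid, alpha from the sketch above).  The losses h and g
# are simplified stand-ins for those in the global risk (8).
def h_loss(x, eta):                      # asymmetric over/under-dosing loss
    return (1 - alpha) * max(x - eta, 0.0) + alpha * max(eta - x, 0.0)

def post_mean_eta(doses, tox):           # terminal MTD estimate (posterior mean)
    marg = posterior(doses, tox).sum(axis=0)
    return float((marg * eta_grid).sum())

def rollout_dose(doses, tox, n_left, candidates, n_mc=50, rng=None):
    """n_left = patients still to be dosed, including the current one."""
    rng = rng or np.random.default_rng()
    flat = posterior(doses, tox).ravel()
    best_x, best_risk = None, np.inf
    for x0 in candidates:
        total = 0.0
        for _ in range(n_mc):
            idx = rng.choice(flat.size, p=flat)          # draw (rho, eta) from posterior
            rho, eta = R.ravel()[idx], E.ravel()[idx]
            d, t, x = list(doses), list(tox), x0
            cost = 0.0
            for _ in range(n_left):
                cost += h_loss(x, eta)
                d.append(x)
                t.append(int(rng.random() < tox_prob(x, rho, eta)))
                x = ewoc_dose(d, t)                      # base policy for later patients
            cost += (post_mean_eta(d, t) - eta) ** 2 / 100.0   # scaled estimation loss
            total += cost
        if total / n_mc < best_risk:
            best_risk, best_x = total / n_mc, x0
    return best_x

cands = np.linspace(x_min, x_max, 8)
print(rollout_dose([140.0, 160.0], [0, 0], n_left=4, candidates=cands, n_mc=20))
```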
the conditional expectation in ( [ 10 ] ) , as a function of , is called the _ cost - to - go function _ in dynamic programming .an adp method , which grew out of the machine learning ( or , more specifically , reinforcement learning ) literature , is based on two statistical concepts concerning the conditional expectation .first , for given and the past information , the conditional expectation is an expectation and therefore can be evaluated by monte carlo simulations , if one knows how are generated .the second concept is that , by ( [ 9 ] ) , is a conditional expectation given , which is a regression function ( or minimum - variance prediction ) of , with regressors ( or predictors ) generated from .based on a large sample ( generated by monte carlo ) , the regression function can be estimated by least squares using basis function approximations , as is typically done in nonparametric regression .combining least squares ( ls ) regression with monte carlo ( mc ) simulations yields the following ls - mc method for markov decision problems in reinforcement learning .let be a markov chain whose transition probabilities from state to depend on the action at time , and let denote the cost function at time , incurred when the state is and the is taken .consider the statistical decision problem of choosing at each stage to minimize the cost - to - go function assuming that have been determined .let these functions can be evaluated by the backward induction algorithm of dynamic programming: , and for , \\[-8pt ] & & { } \hspace*{19pt}+e[v_{k+1}(s_{k+1})|s_k = s , x_k = x]\},\nonumber\end{aligned}\ ] ] in which the minimizer yields .the ls - mc method uses basis functions , , to approximate by , and uses this approximation together with monte carlo simulations to approximate \ ] ] for every in a grid of representative values .this yields an approximation to and also to .moreover , using the sample generated by the control action , we can perform least squares regression of on to approximate by .further details of this approach can be found in chapter 6 of bertsekas ( ) .although the problem ( [ 10 ] ) can be viewed as a markov decision problem with the -posterior distribution being the state , the state space of the markov chain at hand is infinite - dimensional , consisting of all bivariate posterior distributions of the unknown parameter vector .if the state space were finite - dimensional , for example , , then one could approximate the value functions ( [ eq : argmin ] ) by commonly used basis functions in nonparametric regression , such as regression splines and their tensor products ; see hastie , tibshirani and friedman ( ) .however , in the infinite - dimensional case , there is no such simple choice of basis functions of posterior distributions , which are the states .as pointed out in section 6.7 of bertsekas ( ) , an alternative to approximating the value functions , called _ approximation in value space _ , is to approximate the optimal policy by a parametric family of policies so that the total cost can be optimized over the parameter vector .this approach is called _ approximation in policy space _ andmost of its literature has focused on finite - state markov decision problems and gradient - type optimization methods that approximate the derivatives of the costs , as functions of the parameter vector , by simulation .we now describe a new method for approximation in policy space , which uses iterated rollouts to optimize the parameters in a suitably chosen parametric family of 
policies. the choice of the family of policies should involve domain knowledge and reflect the kind of policies that one would like to use for the actual application .one would therefore start with a set of real - valued basis functions of the state of the markov chain with general , possibly infinitely - dimensional , state space , on which the family of chosen policies will be based .the control policies in this family can be represented by , which is the action taken at time [ after has been observed and the basis functions have been evaluated ] and in which is a parameter to be chosen iteratively by using successive rollouts , with being the base policy for the rollout . using the simulated sample in which denotes the simulated replicate of , least squares regression of on performed to estimate by ; nonlinear least squares is used if is nonlinear in . in view of ([ 13 ] ) , each iteration is expected to provide improvements over the preceding one .a concrete example of this method in a prototypical phase i setting is given in the next section , where linear regression splines are used in iterated rollouts . in this setting the state variable represents the complete treatment history up to time in the trial all prior distributions , doses and responses up to that time and the cost function will be replaced by given by ( [ 9 ] ) .in their use of rollouts to approximate the optimum for ( [ 8 ] ) for the normal model , han , lai and spivakovsky ( ) , section 3 , used the structure of their problem to come up with an ingenious `` perturbation of the myopic rule '' as a base policy to improve the performance of the rollout , without performing second- or higher - order rollouts . in this sectionwe explore this technique in the context of phase i designs , using such perturbations called here _ hybrid designs_both as base policies and as a way to represent highly complicated but efficient policies in a simple , clinically useful way . as pointed out in section [ sec : globrisk ] , the objective function of the dynamic programming problem ( [ 8 ] ) involves both experimentation ( for estimating the mtd ) and treatment ( for the patients in the study ) .consider the patient in a trial of length ( ) .if the patient were the last patient to be treated in the trial ( ) , the best dose to give him / her would be the myopic dose that minimizes , given by ( [ 9 ] ) .on the other hand , early on in the trial , especially if is relatively large , one expects the optimal dose to be perturbed from in the direction of a dose that provides more information about the dose response model , for the relatively large number of doses that will have to be set for the future patients .since the optimal design theory for learning the mtd under overdose constraints , developed by haines , perevozskaya and rosenberger ( ) , yields a - or -optimal design , we propose to use the following _ hybrid design _ representation of the optimal dose sequence : where is the chosen `` learning design . '' of course , any dosing policy admits the representation ( [ 14 ] ) with however , we will show that it is possible to use rollouts to choose of a simple form , not depending on , such that the resulting hybrid design given by the right - hand side of ( [ 14 ] ) is highly efficient .similar ideas have been used in `` -greedy policies '' in reinforcement learning ( sutton and barto , , page 122 ) . 
from our simulation studies that include the example in section [ sec : sim ] , we have found that the sequential -optimal design ( haines et al ., , section 5 ) with being the vector works well for learning design in ( [ 14 ] ) , which we now briefly explain . in general ,optimal designs such as - and -optimal can be characterized as optimizing some convex loss function of the information matrix associated with the parameter value and a measure on the space of design points ( see fedorov , ) . here is interpreted as the asymptotic variance of the mle of .the optimization problem can be generalized to the sequential bayes setting , with prior distribution on , by finding the that minimizes \pi(\theta|\mathcal { f}_{k-1})\,d\theta\ ] ] at the stage , where is the empirical measure of the previous design points . in the case , ( [ eq : infint ] ) is replaced by \pi(\theta)\,d\theta ] . taking the bayesian -optimal design with as the learningdesign in ( [ 14 ] ) gives , hence , this design is optimal , in some sense , for learning about or , equivalently , about the slope of the dose response curve ( [ 1 ] ) at the mtd , for which is or some other prespecified value .this has the following connections to the stochastic optimization problem of lai and robbins ( ) discussed in section [ sec1 ] and to the rollout procedure of han , lai and spivakovsky ( ) . for the normal model discussed in section [ sec1 ] and as an asymptotic limiting case of other models , sacks ( ) showed that the optimal value of the step size ( a user - supplied parameter in the lai robbins procedure affecting its convergence rate ) is proportional to .moreover , han , lai and spivakovsky ( ) , section 3 , found that in the normal model , perturbations of the myopic policy in the direction of this -optimal design provide a base design for a rollout that has comparable performance to that of an `` oracle policy . ''since the treatment versus experimentationdilemma discussed in section [ sec2 ] stems from the uncertainty in the current estimate of the mtd , it is natural to expect that the amount of perturbation from the myopic dose depends on the degree of such uncertainty , using little perturbation when the posterior distribution of is peaked , and much more perturbation when it is spread out .this suggests choosing as a function of the posterior variance , whose reciprocal is called the `` precision '' of in bayesian parlance . 
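In code, the hybrid dose then amounts to a convex combination of the myopic EWOC dose and a learning dose, with a mixing weight that grows with the posterior standard deviation of eta. The truncated-linear map and its constants in the sketch below are placeholders (in the paper such a function is fitted by iterated rollouts and least-squares regression, as described next), and the learning dose is left as a stub since the constrained c-optimal computation is not reproduced here.

```python
# Sketch of the hybrid dose: (1 - eps) * EWOC dose + eps * learning dose, with
# eps an increasing, truncated-linear function of the posterior sd of eta.
# Reuses posterior, ewoc_dose, eta_grid, x_max from the earlier sketches.
def posterior_sd_eta(doses, tox):
    marg = posterior(doses, tox).sum(axis=0)
    m = (marg * eta_grid).sum()
    return float(np.sqrt((marg * (eta_grid - m) ** 2).sum()))

def epsilon(sd, a0=-0.2, b0=0.01, s_star=10.0):
    """Placeholder truncated-linear map from posterior sd of eta to the mixing weight."""
    return float(np.clip(a0 + b0 * max(sd - s_star, 0.0), 0.0, 1.0))

def learning_dose(doses, tox):
    # Stub: in the paper this is the sequential Bayesian c-optimal dose for the
    # slope at the MTD, computed under an overdose constraint; not reproduced here.
    return float(min(ewoc_dose(doses, tox) * 1.1, x_max))

def hybrid_dose(doses, tox):
    e = epsilon(posterior_sd_eta(doses, tox))
    return (1.0 - e) * ewoc_dose(doses, tox) + e * learning_dose(doses, tox)

print(hybrid_dose([140.0, 160.0], [0, 0]))
```

Early in the trial, when the posterior of eta is diffuse, the weight on the learning dose is large; as data accumulate and the posterior sharpens, the hybrid dose collapses onto the myopic EWOC dose.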
following the approach described in section [ sec : lsmc ] , we use functions of as basic features of the posterior distribution of to approximate the in ( [ 14 ] ) .to begin , monte carlo simulations are performed to obtain the rollout of ewoc , yielding a simulated sample , } , where is the simulated replicate of which is essentially the same as ( [ 14 ] ) with replaced by .the basic idea in section [ sec : lsmc ] can be implemented via nonparametric regression of on , yielding the estimated regression function .letting , the hybrid design can then be used as the base policy to form the rollout , and this procedure can be repeated to obtain the iterated rollouts linear regression splines , and their tensor products for multivariate regressors , provide a convenient choice of basis functions ; see section 9.4 of hastie , tibshirani and friedman ( ) .for the present problem , it suffices to use a truncated linear function \\[-8pt ] \eqntext{\mbox{for } s_*\le s\le s^*,}\end{aligned}\ ] ] where and are the minimum and maximum of the sample values , , and to extend beyond the range ] is transformed to ] and ] .rolling out ewoc as the base design and using simulations , the preceding procedure gave . putting in the hybrid design we used as the base policy of a second rollout , for which the preceding procedure yielded .here we used the sequential -optimal design with ' ] .figure [ fig : rk ] plots the cumulative risk ] , hence , the joint uniform distribution on \times[x_{\min},x_{\max}] ] and ] . the second uses 20 uniformly - spaced dose levels and is denoted by . besides ewoc and its rollout roll , the bayesian designs include crm , the constrained -optimal design ( abbreviated by -opt ) of haines et al .( ) with constraint and the unconstrained sequential bayesian -optimal design ( abbreviated by -opt ) with being the vector .the prior density is assumed to be uniform : ^{-1}\nonumber \\[-8pt]\\[-8pt ] & & { } \cdot1\{(\rho,\eta ) \in[0,q]\times[x_{\min},x_{\max}]\}\nonumber\end{aligned}\ ] ] with , where denotes the indicator of a set .the values of were generated from the prior distribution ( [ eq : unifprior ] ) .the performance of these designs is first evaluated in terms of the global risk ( [ 8 ] ) , in which we use the squared error for the mtd estimate .we then evaluate performance exclusively in terms of the bias and root mean squared error ( rmse ) of without taking into consideration the risk to current patients , noting that the - and -optimal designs focus on errors of post - trial parameter estimates .finally , since safety of the patients in the trial is the primary concern of traditional 3-plus-3 designs , performance is also evaluated in terms of the dlt rate and the probability of overdose ( i.e. 
, dose level exceeding the mtd ) .each result in table [ table:2stage ] is based on 2000 simulations .the results in table [ table:2stage ] show that the effects of considering the `` future '' patients is large , with roll and hybrid 1 substantially reducing the global risk from the myopic designs : in the case of roll , about 30% from ewoc , 35% from crm , and more from the 3-plus-3 , - and -opt , sa and wu designs .although roll has somewhat smaller global risk than hybrid 1 , it is computationally much more expensive , as noted above .the results for the 3-plus-3 designs show that they are highly sensitive to the choice of in ( [ eq : levels ] ) .the design , using 10 uniformly - spaced levels in ] .that is , the data are generated with fixed at the 15th percentile of ] , with as in table [ table:2stage ] .the nominal prior for used by the bayesian procedures in table [ table : misspec ] is ( [ eq : unifprior ] ) , the same as in table [ table:2stage ] , as are the values of the other parameters .to see the effects of the first stage of more conservative dose escalation , the operating characteristics of roll are recomputed using a first stage of length ; the dose levels ( [ eq : levels ] ) used by the modified 3-plus-3 design are 10 uniformly - spaced levels in =[140,425] ] , extending the model considered abovewhere the support of is bounded above by .they note that priors in this class with a negative correlation structure between and result in an ewoc design with comparable accuracy for estimating the mtd but lower dlt and od rates , relative to its performance for priors supported on \times[x_{\min},x_{\max}]$ ] .as noted in section [ sec:2stage ] , a two - stage design can easily address the higher dlt and od rates caused by misspecifications of such priors . on the other hand , even without a cautious first stage , the above and other generalizations of the prior of can be seamlessly incorporated into our hybrid design .in fact , the model of tighiouart et al .( ) , which has been shown to perform well in their simulation studies , has a left - truncated , hierarchical normal prior distribution on , so the rejection sampling approach in the last paragraph of section [ sec : sim ] can be applied here by using , say , the exponential distribution as the instrumental distribution , since its tails are upper bounds of those of the normal distribution .we can therefore still use the monte carlo approach laid out at the end of section [ sec : sim ] .bartroff s work was supported by nsf grant dms-0907241 and lai s work was supported by nsf grant dms-0805879 . | optimal design of a phase i cancer trial can be formulated as a stochastic optimization problem . by making use of recent advances in approximate dynamic programming to tackle the problem , we develop an approximation of the bayesian optimal design . the resulting design is a convex combination of a `` treatment '' design , such as babb et al.s ( ) escalation with overdose control , and a `` learning '' design , such as haines et al.s ( ) -optimal design , thus directly addressing the treatment versus experimentation dilemma inherent in phase i trials and providing a simple and intuitive design for clinical use . computational details are given and the proposed design is compared to existing designs in a simulation study . 
the design can also be readily modified to include a first stage that cautiously escalates doses similarly to traditional nonparametric step - up / down schemes , while validating the bayesian parametric model for the efficient model - based design in the second stage . |
let denote the space of _ square _ matrices with real - valued coefficients , and the matrix vector space of _ symmetric _ matrices .a matrix is said _symmetric positive definite _ ( spd , denoted by ) iff . and only _ symmetric positive semi - definite_. ] ( spsd , denoted by ) when we relax the strict inequality ( ) .let denote the space of positive semi - definite matrices , and denote the space of positive definite matrices .a matrix is defined by real coefficients , and so is a spd or a spsd matrix .although is a _ vector space _ , the spsd matrix space does not have the vector space structure but is rather an abstract _ pointed convex cone _ with _apex _ the zero matrix since .symmetric matrices can be _ partially _ ordered using the _ lwner ordering _ : and . when , matrix is said to _dominate _ matrix , or equivalently that matrix is dominated by matrix .note that the difference of two spsd matrices may not be a spsd matrix . and then and . ] a non - spsd symmetric matrix can be dominated by a spsd matrix when .is dominated by ( by taking the absolute values of the eigenvalues of ) . ]the _ supremum _ operator is defined on symmetric matrices ( not necessarily spsds ) as follows : , x\succeq s_i \},\ ] ] where =\{1 , ... , n\} ] .in plain words , dominates a set of matrices iff .its associated dominance cone covers all the dominance cones for ] ( with ) : ^\top\in \bbr^(\frac{d(d+1)}{2}) ] , the _ trace operator _ is defined by , the sum of the diagonal elements of the matrix .the trace also amounts to the sum of the eigenvalues of matrix : . the basis of a dominance cone is .note that all the basis of the dominance cones lie in the _ subspace _ of symmetric matrices with zero trace .let denote the _ matrix inner product _ and the matrix _frbenius norm_. two matrices and are orthogonal ( or perpendicular ) iff .it can be checked that the identity matrix is perpendicular to any zero - trace matrix since .the center of the ball basis of the dominance cone is obtained as the _ orthogonal projection _ of onto the zero - trace subspace : .the dominance cone basis is a _ matrix ball _ since for any rank- matrix with ( an extreme point ) , we have the radius : that is non - negative since we assumed that .reciprocally , to a basis ball , we can associate the apex of its corresponding dominance cone : .figure [ fig : coneprojection ] illustrates the notations and the representation of a cone by its corresponding basis and apex .thus we associate to each dominance cone its corresponding ball basis on the subspace of zero trace matrices : , .we have the following containment relationships : and finally , we transform this minimum enclosing _ matrix _ ball problem into a minimum enclosing _ vector _ ball problem using a half - vectorization that preserves the notion of distances , _i.e. _ , using an isomorphism between the space of symmetric matrices and the space of half - vectorized matrices .the -norm of the vectorized matrix should match the matrix frbenius norm : .since , it follows that ^\top \in\bbr^{\frac{d(d+1)}{2}} ] where denotes the identity matrix .recall that . by construction ,the transformed input set satisfies ] for . in 3d , we use spherical coordinates ^\top$ ] for and . ] the extreme x points for .this yields an approximation term , requires more computation , and even worse the method does not scale in high - dimensions . 
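as a concrete illustration of this representation , the following minimal sketch ( illustrative only , assuming numpy ) implements the norm - preserving half - vectorization and a direct lwner dominance test based on the smallest eigenvalue of the difference matrix .

....
import numpy as np

def vech(S):
    """Half-vectorization of a symmetric d x d matrix, with the off-diagonal
    entries scaled by sqrt(2) so that the Euclidean norm of the vector equals
    the Frobenius norm of the matrix."""
    d = S.shape[0]
    iu = np.triu_indices(d, k=1)
    return np.concatenate([np.diag(S), np.sqrt(2.0) * S[iu]])

def dominates(A, B, tol=1e-10):
    """True if A dominates B in the Loewner order, i.e. A - B is PSD."""
    return np.linalg.eigvalsh(A - B).min() >= -tol

# consistency check of the isometry between the two norms
A = np.array([[2.0, 0.3], [0.3, 1.0]])
assert np.isclose(np.linalg.norm(vech(A)), np.linalg.norm(A, 'fro'))
....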
thus in order to handle high - dimensional matrices met in software formal verification or in computer vision ( structure tensor ) , we consider -approximation of the extremal lwner matrices .the notion of tightness of approximation of ( the epsilon ) is imported straightforwardly from the definition of the tightness of the geometric covering problems .a -approximation of is a matrix such that : .it follows from eq .[ eq : matrad ] that a -approximation satisfies .we present a fast guaranteed approximation algorithm for approximating the minimum enclosing ball of a set of balls ( or more generally , for sets of compact geometric objects ) .[ cols= " < , < " , ]our novel extremal matrix approximation method allows one to leverage further related results related to core - sets for dealing with high - dimensional extremal matrices . for example, we may consider clustering psd matrices with respect to lwner order and use the -center clustering technique with guaranteed approximation .a java code of our method is available for reproducible research .this work was carried out during the matrix information geometry ( mig ) workshop , organized at cole polytechnique , france in february 2011 ( https://www.sonycsl.co.jp/person/nielsen/infogeo/mig/ ) .frank nielsen dedicates this work to the memory of his late father gudmund liebach nielsen who passed away during the last day of the workshop .x. allamigeon , s. gaubert , e. goubault , s. putot , and n. stott , `` a scalable algebraic method to infer quadratic invariants of switched systems , '' in _ embedded software ( emsoft ) , 2015 international conference on _ , oct 2015 , pp . 7584 .j. a. calvin and r. l. dykstra , `` maximum likelihood estimation of a set of covariance matrices under lwner order restrictions with applications to balanced multivariate variance components models , '' _ the annals of statistics _ , pp . 850869 , 1991 .boissonnat , a. crzo , o. devillers , j. duquesne , and m. yvinec , `` an algorithm for constructing the convex hull of a set of spheres in dimension , '' _ computational geometry _ , vol . 6 , no . 2 , pp .123130 , 1996 .boissonnat and m. i. karavelas , `` on the combinatorial complexity of euclidean voronoi cells and convex hulls of -dimensional spheres , '' in _ proceedings of the fourteenth annual acm - siam symposium on discrete algorithms_.1em plus 0.5em minus 0.4emsociety for industrial and applied mathematics , 2003 , pp .305312 .s. jambawalikar and p. kumar , `` a note on approximate minimum volume enclosing ellipsoid of ellipsoids , '' in _ computational sciences and its applications , 2008 .international conference on_.1em plus 0.5em minus 0.4emieee , 2008 , pp .478487 .k. fischer and b. grtner , `` the smallest enclosing ball of balls : combinatorial structure and algorithms , '' _ international journal of computational geometry & applications _ , vol .04n05 , pp . 341378 , 2004 . , `` smaller core - sets for balls , '' in _ proceedings of the fourteenth annual acm - siam symposium on discrete algorithms _ , ser .soda 03.1em plus 0.5em minus 0.4emphiladelphia , pa , usa : society for industrial and applied mathematics , 2003 , pp . 801802 .[ online ] .available : http://dl.acm.org/citation.cfm?id=644108.644240 j. mihelic and b. robic , `` approximation algorithms for the -center problem : an experimental evaluation , '' in _ selected papers of the international conference on operations research ( sor 2002)_.1em plus 0.5em minus 0.4emspringer , 2003 , p. 371 . 
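to complement the description above , here is a minimal sketch of a core - set style iteration , in the spirit of badoiu and clarkson , for the approximate minimum enclosing ball of a set of balls : at each step the center walks toward the farthest point of the farthest ball . in the point case roughly 1/epsilon^2 iterations give a ( 1+epsilon)-approximation ; this sketch is illustrative and is not the exact algorithm of the paper .

....
import numpy as np

def approx_meb_of_balls(centers, radii, iterations=1000):
    """Approximate minimum enclosing ball of the balls (centers[i], radii[i]):
    repeatedly move the current center toward the farthest point of the
    farthest ball, with the classic 1/(t+1) step size."""
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    c = centers[0].copy()
    for t in range(1, iterations + 1):
        dists = np.linalg.norm(centers - c, axis=1) + radii  # farthest point of each ball
        i = int(np.argmax(dists))
        direction = centers[i] - c
        norm = np.linalg.norm(direction)
        far = centers[i] + (radii[i] * direction / norm if norm > 0 else 0.0)
        c = c + (far - c) / (t + 1.0)
    R = (np.linalg.norm(centers - c, axis=1) + radii).max()
    return c, R
....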
| matrix data sets are common nowadays like in biomedical imaging where the diffusion tensor magnetic resonance imaging ( dt - mri ) modality produces data sets of 3d symmetric positive definite matrices anchored at voxel positions capturing the anisotropic diffusion properties of water molecules in biological tissues . the space of symmetric matrices can be partially ordered using the lwner ordering , and computing extremal matrices dominating a given set of matrices is a basic primitive used in matrix - valued signal processing . in this letter , we design a fast and easy - to - implement iterative algorithm to approximate arbitrarily finely these extremal matrices . finally , we discuss on extensions to matrix clustering . * keywords * : positive semi - definite matrices , lwner ordering cone , extremal matrices , geometric covering problems , core - sets , clustering . |
the photometric analysis of large galaxies is a double edged sword . while increased resolution , compared to distant galaxies , provides avenues for more detailed analysis of galaxy properties ( to name a few : examination of star formation regions , examination of core parameters to search for massive blackholes , spiral arm analysis , isophotal irregularities that may signal rings , bars or other secular evolution processes ) , it is a data reduction fact that the larger number of pixels complicates the extraction of simple parameters , such as total magnitude , mean surface brightness and isophotal radius .for example , the fact that the galaxy is spread over a larger area of the sky means that the outer pixels have more sky luminosity than galaxy luminosity , increasing the error in any galaxy value .in addition , increased angular size has frequently prevented a fair comparison between distant and nearby galaxy samples simply because the techniques used to extract parameters from nearby galaxies differ from those used on small galaxies .in general , the analysis of large galaxies ( i.e. ones with many pixels ) requires a full surface photometry study of the isophotes and their shape . for most extragalactic systems ,the shape of choice is an ellipse .the astrophysics behind this assumption is that galaxy light traces stellar mass , and stellar mass follows elliptical orbits as given by kepler s 1st law .certainly , this is true the case of early - type galaxies ( elliptical and s0 s ) as demonstrated by studies that examined the residuals from elliptical isophotes ( jedrzejewski 1987 ) .this is also mostly true for disk galaxies , although the lumpiness of their luminosity distribution due to recent star formation increases the noise around each ellipse ( see the discussion in 2.4 ) . for dwarfirregular systems , any regular contours are poor describers of their shape , thus an ellipse is used because it is the simplest shape ( aside from a circle ) to describe an irregular isophote .therefore , the analysis of large galaxies begins with the reduction of their 2d images into 1d surface photometry as described by elliptical isophotes . in turn, the 1d profiles can be fit to various functions in order to extract characteristic luminosities ( stellar mass ) , scale lengths and standard surface brightnesses ( luminosity density ) . when combined with kinematic information, these three parameters form the fundamental plane for galaxies , a key relationship for understanding the formation and evolution of galaxies . aside from direct relevance to the fundamental plane , the need for better galaxy photometric tools has also increased with the influx of quality hst imaging . 
before hst ,distant galaxies were mere point sources , but now with wfpc2 , acs and nicmos data , there is the need to perform full surface photometric studies on a much larger volume of the universe .the sizes of our database on the photometric structure of galaxies has increased a thousandfold in the last 10 years , but most of the tools used to reduce this new data are up to 20 years out - of - date .thus , the analysis of high resolution space imaging data is far behind spectroscopic and high energy data , not due to lack of interest , but due to the inadequacy of our 2d analysis tools .the goal of this software project ( called the archangel project for obtuse historical reasons ) has been to produce a series of proto - nvo type tools , related to surface photometry , and develop a computing environment which will extend the capability of individual observers ( or entire data centers ) to perform virtual observatory science .there is no attempt herein to replace existing analysis packages ( i.e. pyiraf ) , but rather our goal is to supplement existing tools and provide new avenues for data reduction as it relates to galaxy photometry .we hope that the fundamental components in this package will provide the community with new methods to which they can add their own ideas and techniques as well as provide learning environment for new researchers .in addition , there is growing amount of data by non - optical astronomers as new space missions provide imaging data in wavelength regions previously unexplored .thus , there is a new and growing community of non - optical astronomers with 2d analysis needs that we hope to serve .the tools described herein are not intended to be a complete data reduction package per say , but rather a set of basic modules that allows the user to 1 ) learn the procedures of galaxy photometry , 2 ) tailor the tools to their particular needs , 3 ) begin an advanced learning curve of combining basic modules to produce new and more sophisticated tools . turning raw data ( level 1 or 2 data ) from the telescope ( space or ground ) into calibrated , flattened imagesis the job of other , more powerful packages such as pyraf .the tools presented herein bridge the intermediate step between calibrated data and astrophysically meaningful values . specifically , we are concerned with the analysis of 2d images into 1d values , such as a surface brightness profile , and further tabulation into final values , such as total luminosity or scale length . with respect to galaxy images , the numbers most often valued are luminosity , scale length and luminosity density .unfortunately , due to the extended nature of galaxies , the quality and accuracy of these values can varying depending on the type of value desired .for example , luminosities can be extracted in metric form ( luminosity within 16 kpc ) or isophotal ( luminosity inside the 26.5 mag arcsecs isophote or the total luminosity , an extrapolation to infinite radius .scale length can be expressed as the radius of half - light or a formula fit to the luminosity distribution ( e.g. 
seric function ) .luminosity density can be described through a detailed surface photometry profile , or integrated as a mean surface brightness within an isophote , or again a fitted curve such as an exponential disk .the tools provided by this project allow an inexperienced user the capability to explore their dataset and extract meaningful results while outlining the limitations to that data .for the experienced researcher , these tools enhance their previous background in data reduction and provide new , and hopefully , faster avenues of analysis . to this end , the tools provided by this packageprovide a user with most basic of descriptions of a galaxy s light , then allowing the option to select any meaningful parameter by toggling a switch . for most parameters , such as aperture magnitudes ,the switch is simple and automatic . for more complicated parameters , such as a profile fit or an asymptotic magnitude ,the switch is understandably more sophisticated and needing more explanation to the user for accurate use .this paper is divided into five sections describing the major components of the reduction package : 1 ) sky determination , 2 ) 2d profile fitting , 3 ) aperture photometry , 4 ) extraction of 1d parameters from surface brightness profiles and 5 ) extracting total magnitudes .each section contains examples of the reduction of galaxy images from the 2mass archive .the fastest way to introduce the techniques and tools used in our package is to walk through the analysis of a several different types of galaxy images .a more non - linear reader can refer to the appendix for a listing of the major tools .a script titled is included which outlines the usage of the routines described below . for a majority of galaxy images, this script will produce a usable surface brightness profile , and this script forms the core of the client / server version of this package ( see 5 ) .but , a sharper understanding of the data requires more interaction with the techniques , the user is encouraged to run through the examples given in the package . to illustrate our tools ,we have selected 2mass images of several galaxies found in the revised shapley - ames catalog with the characteristics of smooth elliptical shape ( ngc 3193 ) , disk shape ( ic 5271 ) , spiral features ( ngc 157 ) and low in surface brightness / low contrast to sky ( ngc 2082 ) .the analysis procedure for each galaxy is divided into five basic parts ; 1 ) sky determination , 2 ) cleaning , 3 ) ellipse fitting , 4 ) aperture photometry and 5 ) profile fitting . before starting itis assumed that the data frame has be initially processed for flatfielding , dark subtraction and masking of chip defects .small defects , such as cosmic rays , are cleaned by the analysis routines .but the errors are always reduced if known features are removed before analysis .the following routines work on poorly flattened data ( e.g. gradients or large - scale features ) , and will signal the poorness by the resulting errors , but the removal of large - scale flattening problems requires more interaction then acceptable for this package and remains the responsibility of the user . any galaxy photometry analysis process begins with an estimate of the image s sky value . 
while this is not critical for isophote fitting , it is key for actually finding targets , cleaning the frame of stars and smaller galaxies , plus determination of the photometry zeropoints .accurate sky determination will , in the end , serve as the final limit to the quality of surface photometry data since a majority of a galaxy s luminosity distribution is near the sky value .for this reason , sky determination has probably received as much attention in astronomical data literature as any other comparable reduction or analysis problem .the difficulty in sky determination ranges from too few photons to know the behavior of the instrumental response ( e.g. , high energy data ) to a high temporal varying flux of sky photons that overwhelms the galaxy signal ( e.g. , near - ir data ) .surface fitting , drift scans , sky flats and super flats are all procedures used to minimize the sky contribution to the noise levels of the final data .several clever , but not technically challenging , algorithms were included in the noao iraf system to handle time averaged flats and data , median co - adding and cosmic ray subtraction . in the end ,improved ccd quality lowered the demands of sky subtraction as the production of linear , good charge transfer and uniform sensitivity chips replaced the earlier generations and their wildly inaccurate backgrounds .frames for ngc 3193 , an elliptical selected from the rsa sample ( schombert 2007 ) . note the proper cleaning of contaminating stars , even a object near the galaxy core . ] for a cosmetically smooth image , an efficient , but crude , sky fit is one that simply examines the border of the frame and does an iterated average , clipping pixels more than 4 from the mean .a border sky fit is often sufficient to find the starting center of the galaxy ( for the ellipse fitting routines ) , clean the frame of stars / galaxies external to the object of interest ( the ellipse fitting routines will clean along the isophotes , see below ) and provide a preliminary error estimate to the photometry .this error estimate is preliminary in that the true limiting error in the surface ( and aperture ) photometry of large galaxies is not the rms of an isophote , but how well the sky value is know .once the number of pixels involved in a calculation ( be it an isophote or an aperture ) becomes large ( greater than 50 for typical readout noises ) , then the error is dominated by the precision of the sky value .the disadvantage to a border sky fit is the occasional inconvenient occurrence of stars or bright galaxies on the edge of the frame .an iterated mean calculation will remove small objects . andlarge objects will be signaled with large s in an iterative mean search . in an automated procedure ,more than likely , the task will have to halt and request human intervention to find a starting sky value .after years of experimentation , the method of choice for accurate sky determination for extended galaxies is to evaluate sky boxes .this is a procedure where boxes of a set sized are placed semi - randomly ( semi in the sense of avoiding stars and other galaxies ) in the frame .an algorithm calculates an iterative mean and for each box .these means ( and s ) are then super - summed to find the value of the sky as the mean of the means ( and likewise , the error on the sky value is the on this mean ) . 
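a minimal sketch of the two estimates just described , assuming a floating - point numpy image in which contaminated pixels have already been set to nan ; the box - placement and galaxy - exclusion logic of _ sky_box _ is not reproduced here .

....
import numpy as np

def clipped_mean(values, nsigma=3.0, iterations=5):
    """Iterated sigma-clipped mean of a pixel sample (NaN pixels ignored)."""
    v = values[np.isfinite(values)]
    for _ in range(iterations):
        m, s = v.mean(), v.std()
        keep = np.abs(v - m) < nsigma * s
        if s == 0 or keep.all():
            break
        v = v[keep]
    return v.mean(), v.std()

def border_sky(image, width=20):
    """Quick border estimate: clipped mean of a strip around the frame edge."""
    edge = np.concatenate([image[:width].ravel(), image[-width:].ravel(),
                           image[:, :width].ravel(), image[:, -width:].ravel()])
    return clipped_mean(edge)

def sky_boxes(image, boxes):
    """Sky from boxes given as (x0, x1, y0, y1); returns the mean of the box
    means and the scatter of those means, used as the error on the sky."""
    means = np.array([clipped_mean(image[y0:y1, x0:x1].ravel())[0]
                      for x0, x1, y0, y1 in boxes])
    return means.mean(), means.std() / np.sqrt(len(means))
....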
from an analysis point of view , there are several advantages to this technique .one is that each box exists as a measurement independent of the other boxes .thus , the statistical comparison of the boxes is a real measure of the quality of the sky determination for the frame in terms of its accuracy and any gross trends with the frame .another advantage is that contaminating objects are relatively easy to avoid ( visual choice of sky boxes ) or to sort by the higher per box .lastly , sky boxes are the easiest method of finding regions for sky determination outside the galaxy itself , particularly where an irregular object may fill a majority of the data frame .the most difficult decision in sky determination by boxes is , of course , where to place the boxes .when done visually , the user selects region ( usually with a cursor ) that are free of stars and sufficiently far away from the target galaxy to be clear of its envelope light . for an automated process, the procedure returns for a final sky estimate after the ellipse fitting process is completed and when all the stars / galaxies are cleaned ( set to values of not - a - number , nan ) .then , the outer edge of the large galaxy is determined and an iterative analysis of sky boxes outside this radius is used to determine the true sky and , most importantly , the variation on the mean of those boxes as a measure of how well the sky is known .this procedure is the role of _ sky_box _ , see the appendix for a more detailed description of its options .reduction of a 2d image into a 1d run of intensity versus radius in a galaxy assumes some shape to the isophote. very early work on galaxies used circles since the data was obtained through circular apertures in photoelectric photometers . for early type galaxies ,the ellipse is the shape that most closely follows the shape of the isophotes .this would confirm that the luminosity being traced by an isophote is due to stellar mass , which follow elliptical orbits ( kepler s 1st law ) .as one moves to along the hubble sequence to later type galaxies , the approximation of an ellipse to the isophotes begins to break down due to recent star formation producing enhancements in luminosity density at semi - random positions .however , no consistent shape describes the isophotes of irregular galaxies , so an ellipse is the best shape , to first order , and provides a common baseline for comparison to more regular galaxies .image of ic 5271 , a sb(rs ) galaxy selected from the rsa catalog .typical of the isophotes for a disk / bulge galaxy , there is a cross over point as one transitions from a more spherical bulge to a flattened disk .while this is flagged by the reduction software , it is astrophysically real and signals the lens morphology often seen in surface brightness profiles of disk galaxies . ]fitting a best ellipse to a set intensity values in a 2d image is a relatively straight forward technique that has been pioneered by cawson ( 1987 ) and refined by jedrzejewski ( 1987 ) ( see also an excellent review by jensen & jorgensen 1999 ) . the core routine from these techniques ( prof ) was eventually adopted by stsdas iraf ( i.e. 
ellipse ) .the primary fitting routine in this package follows the same techniques ( in fact , uses much of the identical fortran code from the original gasp package of cawson ) with some notable additions .these codes start with an estimated x - y center , position angle and eccentricity to sample the pixel data around the given ellipse .the variation in intensity values around the ellipse can be expressed as a fourier series with small second order terms .then , an iterative least - squares procedure adjusts the ellipse parameters searching for a best fit , i.e. minimized coefficients .there are several halting factors , such as maximum number of iterations or minimal change in the coefficients , which then moves the ellipse outward for another round of iterations .once a stopping condition is met ( edge of the frame or sufficiently small change in the isophote intensity ) , the routine ends . a side benefit to above procedureis that the cos(4 ) components to each isophote fit are easily extracted , which provides a direct measure of the geometry of the isophote ( i.e. boxy versus disk - like , jedrzejewski 1987 ) .one new addition , from the original routines , is the ability to clean ( i.e. mask ) pixels along an isophote .basically , this routine first allows a few iterations to determine a mean intensity and rms around the ellipse .any pixels above ( or below ) a multiple of the rms ( i.e. 3 ) are set to not - a - number ( nan ) and ignored by further processing .due to the fact that all objects , stars and galaxies , have faint wings , a growth factor is applied to the masked regions .while this process is efficient in early - type galaxies with well defined isophotes , it may be incorrect in late - type galaxies with bumpy spiral arms and hii regions .the fitting will be smoother , but the resulting photometry will be underestimated .this process can be controlled early in the analysis pipeline by the user with an initial guess of the galaxy s hubble type .also , the erased pixels are only temporary stored until an adequate fit is found . once a satisfactory ellipse is encountered , only then are the pixels masked for later ellipse fitting .the masked data is written to disk at the end of the routine as a record of the cleaning .the ellipse fitting is the function of _ efit _ as described in the appendix . for early - type galaxies , lacking any irregular features ,the cleaning process is highly efficient .the pipeline first identifies the galaxy and its approximate size by moment analysis .it then cleans off stars / galaxies outside the primary galaxy by moment identification and radius growth for masking .stars / galaxies inside the primary galaxy are removed by the ellipse fitting routine .the resulting ellipses are inspected for crossover ( isophotes that crossover are assumed to be due to errors or embedded stars / galaxies and removed by averaging nearby ellipse isophotes , this is not true for disk galaxies ) .the smoothed ellipses are used by a more robust cleaning algorithm and the whole ellipse fitting process is repeated on the cleaned frame .image of ngc 157 , a face - on sc(s ) galaxy selected from the rsa catalog .similar to ic 5271 , there are several crossover points in the fitted ellipses .the fitting program does a good job of following the spiral arms in the inner regions , then a large jump from bulge to disk region . 
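to illustrate the sampling step of the ellipse fitting technique described above , the following sketch ( illustrative , not the _ efit _ / prof code itself ) extracts intensities around a trial ellipse and solves for the low - order harmonic amplitudes by least squares ; the iterative rules that turn those amplitudes into corrections of the ellipse center , eccentricity and position angle are not reproduced .

....
import numpy as np

def sample_ellipse(image, x0, y0, a, eps, pa, nphi=256):
    """Nearest-pixel intensities around an ellipse of semi-major axis a,
    eccentricity eps = 1 - b/a and position angle pa (radians).  No bounds
    checking; the image is assumed float with masked pixels set to NaN."""
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    b = a * (1.0 - eps)
    x = x0 + a * np.cos(phi) * np.cos(pa) - b * np.sin(phi) * np.sin(pa)
    y = y0 + a * np.cos(phi) * np.sin(pa) + b * np.sin(phi) * np.cos(pa)
    return phi, image[np.round(y).astype(int), np.round(x).astype(int)]

def harmonic_amplitudes(phi, vals, orders=(1, 2, 4)):
    """Least-squares amplitudes of the low-order Fourier terms: large first
    and second order terms flag errors in center, eccentricity or position
    angle, while the cos(4*phi) term measures boxy versus disky isophotes."""
    good = np.isfinite(vals)
    cols = [np.ones(good.sum())]
    for n in orders:
        cols += [np.sin(n * phi[good]), np.cos(n * phi[good])]
    A = np.vstack(cols).T
    coef, *_ = np.linalg.lstsq(A, vals[good], rcond=None)
    return coef  # [mean, A1, B1, A2, B2, A4, B4]
....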
]an example of the analysis of an elliptical is found in figure 1 , a 2mass image of ngc 3193 .the top panel is the raw 2mass image , the bottom panel is the resulting cleaned image output at the end of the reduction process .the cleaning process efficiently removed all the stars on the frame , including the brighter object on the northern edge of the frame and its diffraction spikes .the star closest to the galaxy core is a problem in two arenas .the first is in the calculation of ellipse , as the inner star would drag the calculated moments off center .the isophote erasing routine has handled this as can be seen in figure 2 , where the fitting ellipses are shown and are not deflected by the erased star .second , is that calculated total magnitudes would either be over estimated ( if the star is not masked ) or under estimated ( if the star is masked and the galaxy light from those pixels is not replaced ) .this problem will be discussed in 2.6 .as one goes towards later type galaxies , there is an increase in the non - elliptical nature to their isophotes and an increase in luminosity density enhancements ( hii regions , stellar clusters , spiral features ) which are legitimate components to the galaxy s light distribution and should not be cleaned .the user can specify the galaxy type and the cleaning restrictions will be tightened ( only to stellar objects and at a higher cleaning threshold ) plus the restrictions on overlapping ellipses is loosened ( e.g. the transition from a round bulge to a flat disk ) .most importantly , while some galaxy features are cleaned for the sake of a harmonious ellipse fit , those pixels need to be filled for later aperture photometry .image of ngc 2082 , a lsb disk galaxy .note , that the ellipse fitting routine expanded the annulus size to increase the s / n . ] an example of this behavior can be found in figure 3 , the image of ic 5271 .the red ellipses indicate isophote fits that crossover .while flagged as an error , this is in fact the real behavior of the isophotes as one transitions from bulge to disk .the resulting intensities are probably overestimated due to the crossover effect , but this error will be minor compared to the errors that would result from an off - center or overly round ellipse .the quality of the fitting procedure can be judged by the behavior of the ellipse parameters such as eccentricity , position angle and center .if there are large jumps in any of the parameters that determine shape , then this may signal a feature in the galaxy that needs to be cleaned ( a buried star for example ) .slightly less abrupt changes may signal an astrophysically interesting features , such as a bar or lens morphology . under the assumption that the isophotes of a typical galaxy are a smooth function with radius , the ellipse fitting algorithm checks for ellipse parameters that indicate a crossing of the isophotal lines .these ellipses are smoothed and flagged ( the mean of the inner and outer ellipse parameters is used ) . 
in certain scenarios ,crossing isophotes are to be expected , for example the transition region from a bulge to a disk ( see figure 2 ) , and the smoothing criteria is relaxed .this is the function of as described in the appendix .an example of s corrections can be seen in figure 4 , the image of ngc 157 .several interior ellipses display erratic behavior , but took the mean average of nearby ellipses ( in green ) to produce a more rational fit .the resulting intensities were also more stable , although the rms is going to be highly than the typical isophotes found in an elliptical .an example of a lsb galaxy fit is found in figure 5 .the ellipse fitting routine , recognizing that the target is low in contrast with respect to sky , widened the annulus for collecting pixel values .this increases the s / n at some loss of spatial resolution .since resolution is usually not important in a galaxy s halo region , this is an acceptable trade off .lsb galaxies are susceptible to fitting instability , the fitting routines are tightened against rapid changes in eccentricity and centering to prevent this behavior .lsb galaxies also demonstrate a key point in determining errors from surface photometry .there are two sources of error per isophote , the rms around the ellipse and the error in the sky value .the rms value is a simple calculation using the difference between the mean and the individual pixel values .this rms then reflects into an observable error as the .however , as the isophote intensity approaches the sky value , the number of pixels increases and the error due to rms becomes an artificially low value .in fact , at low intensities , the knowledge of the sky value dominates and the error in the isophote is reflected by the sky error ( preferably as given by the on the means of a large number of sky boxes ) . with a file of isophotal intensities versus radius in hand , it is a simple step to producing a surface brightness profile for the galaxy .there are a few tools are in the package to examine the quality of the ellipse fitting ( e.g. , an interactive comparison of the image and the ellipses ) . at the very least ,a quick visual inspection of the ellipses seems required as a bad mismatch leads to strongly biased results ( see cautionary tale in 4 ) .a user can either step through a directory of data files ( e.g. using the tool ) or a user can automatically produce a group of gif images with a corresponding html page , then use a browser to skim through a large number of files .calibration from image data numbers ( dn ) to fluxes ( or magnitudes ) is usually obtained through standard stars with corrections for airmass and instrumental absorption .if these values are in the fits headers , then they are automatically added to the object s xml file .additional corrections for galactic absorption , k - corrections and surface brightness dimming are well documented in the literature and can be assigned automatically by grabbing xml data from ned .a chosen cosmology converts radius in pixel units into astrophysically meaningful values of kiloparsecs .a python command line script ( ) based on ned wright s cosmology calculator is included in the package all these values can be added to the xml file for automatic incorporation to the analysis programs . if they do nt exists , then instrumental mags will be used , which can easily be converted to real units later on. fits to disk and bulge .the solid line is the addition of the two curves . 
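one simple way to encode the error budget just described is to add the ( pixel - reduced ) isophote rms and the sky error in quadrature and propagate the result into magnitudes ; this is a plausible sketch of the bookkeeping , not necessarily the exact formula used by the package .

....
import numpy as np

def isophote_mag_error(intensity, rms, npix, sky_err):
    """Surface brightness error (mag arcsec^-2) for one isophote: the rms
    around the ellipse, reduced by the number of pixels, combined in
    quadrature with the sky error, which dominates at faint levels."""
    sigma_i = np.sqrt(rms**2 / npix + sky_err**2)
    return 2.5 / np.log(10.0) * sigma_i / intensity  # ~1.0857 * sigma_i / I
....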
]analysis of a 1d surface brightness profile ( the job for the tool ) depends on the scientific goals of the user .for example , early - type galaxies are typically fit to a de vaucouleurs r curve to extract a scale length ( effective radius ) and characteristic surface brightness ( effective surface brightness ) .irregular and dwarf galaxies are well fit by exponential profiles which provide a disk scale length and central surface brightness .disk galaxies can be fit with a combination of bulge and disk fits , to extract b / d ratios and disk scale lengths . due to this combination of and exponential curves for large bulge spirals , it is computationally impossible to correctly determine which function , or combination of functions , best fits a particular galaxy s profile . in the past, one would examine the 2d image of the galaxy and obvious disk - like galaxies would be fit to plus exponential .objects with elliptical appearance were fit to a strict shape .this produces a problem for large bulge s0 s which are difficult to detect visually unless nearly edge - on .the simplest solution to this problem , using only the 1d surface photometry , is to examine the profiles in a plot of mag arcsecs versus linear radius . with this plot ,exponential disks appear as straight lines , see figure 6 as an example of a pure disk in ngc 2403 .bulge plus disk components are also straight forward in this mag / linear radius space , see figure 7 a good example of a bulge plus disk fit in ngc 3983 . if a profile displays too much curvature , with no clear linear disk portion , then it is a good candidate for a pure fit ( see figure 8 , ngc 3193 ) .this option is easily checked by plotting the profile in mag arcsecs versus r space as shown in figure 8 .most r profiles only have a linear region in the middle of the surface brightness profile , typically with a flattened core and fall - off at large radii ( see schombert 1987 ) .fit is shown . ]the seric function is also popular for fitting surface brightness profiles ( graham & driver 2005 ) , although not currently supported by this package , any fitting function is easy to add to the reduction routines as the core search routine is a grid search minimization technique .however , there are issues with surface photometric data where the inner regions have the highest s / n but the outer regions better define a galaxy s structure ( schombert & bothun 1987 ) . with user guidance ,this grid search works well for any user defined function .also , since there are a sufficient number of packages for fitting 1d data in the community , this package only provides a simple graphic plotting function .more sophisticated analysis needs guidance by the user , but this package provides the framework for just such additions .often the scientific goal of a galaxy project is to extract a total luminosity for the system ( and colors for multiple filters ) . for small galaxies ,a metric aperture or isophotal magnitude is suitable for comparison to other samples ( certainly the dominate source of error will not be the aperture size ) .however , for galaxies with large angular size ( i.e. many pixels ) , their very size makes total luminosity determination problematic .natively , one would think that a glut of pixels would make the problem of determining a galaxies luminosity easier , not more difficult .however , the problem here arises with the question of where does the galaxy stop ? 
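as an illustration of the fits described above , the following sketch performs the two linear fits , an exponential disk in mag versus radius and a de vaucouleurs law in mag versus r**0.25 , over a user - chosen radial range ; a combined bulge plus disk decomposition requires a non - linear minimization and is not shown .

....
import numpy as np

def fit_exponential_disk(radius, mu, r_min, r_max):
    """Linear fit of mu = mu_0 + 1.0857*r/h over the disk-dominated range;
    returns the central surface brightness mu_0 and the scale length h."""
    sel = (radius >= r_min) & (radius <= r_max) & np.isfinite(mu)
    slope, mu0 = np.polyfit(radius[sel], mu[sel], 1)
    return mu0, 2.5 / np.log(10.0) / slope

def fit_de_vaucouleurs(radius, mu, r_min, r_max):
    """Linear fit of mu versus r**0.25; with mu(r) = mu_e + 8.3268*((r/r_e)**0.25 - 1)
    the slope gives the effective radius and the intercept gives mu_e."""
    sel = (radius >= r_min) & (radius <= r_max) & np.isfinite(mu)
    slope, intercept = np.polyfit(radius[sel] ** 0.25, mu[sel], 1)
    return intercept + 8.3268, (8.3268 / slope) ** 4
....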
or , even if you guess an outer radius , does your data contain all the galaxy s light ?the solution proposed by de vaucouleurs decades ago is to use a curve of growth ( de vaucouleurs 1977 ) .almost all galaxies follow a particular luminosity distribution such that the total light of a galaxy can be estimated by using a standard growth curve to estimate the amount of light outside your largest aperture . for a vast majority of galaxies , selecting either an exponential or r curve of growth is sufficient to adequately describe their total luminosities ( burstein 1987 ) .however , for modern large scale ccd imaging , the entire galaxy can easily fit onto a single frame and there is no need for a curve of growth as all the data exists in the frame . with adequate s / n, it would seem to be a simple task to place a large aperture around the galaxy and sum the total amount of light ( minus the sky contribution ) .however , in practice , a galaxy s luminosity distribution decreases as one goes to larger radii , when means the sky contribution ( and , thus , error ) increases . in most cases , larger and larger apertures simply introduce more sky noise ( plus faint stars and other galaxies ) . and , to further complicate matters , the breakover point in the optical and near - ir , where the galaxy light is stronger than the sky contribution will not contain a majority of the galaxy s light .so the choice of a safe , inner radius will underestimate the total light .the procedure selected in this package , after some numerical experimentation , is to plot the aperture luminosity as a function of radius and attempt to determine a solution to an asymptotic limit of the galaxy s light .this procedure begins by summing the pixel intensities inside the various ellipses determined by . for small radii, a partial pixel algorithm is used to determine aperture luminosity ( using the surveyors technique to determine each pixel s contribution to the aperture ) . at larger radii , a simply sum of the pixels , andthe number used , is output .in addition , the intensity of the annulus based on the ellipse isophote and one based on the fit to the surface photometric profile are also outputted at these radii ( see below ) .note that a correct aperture luminosity calculation requires that both a ellipse fit and a 1d fit to the resulting surface photometry has be made .the ellipse fit information is required as these ellipses will define the apertures , and masked pixels are filled with intensities given by the closest ellipse .a surface photometric fit allows the aperture routine to use a simple fit to the outer regions as a quick method to converge the curve of growth .once the aperture luminosities are calculated , there are two additional challenges to this procedure .the first is that an asymptotic fit is a difficult calculation to make as the smallest errors at large radii reflect into large errors for the fit .two possible solutions are used to solve this dilemma .the first solution is to fit a 2nd or 3rd order polynomial to the outer radii in a luminosity versus radius plot .most importantly for this fit , the error assigned the outer data points is the error on the knowledge of the sky , i.e. 
the rms of the mean of the sky boxes . the resulting values from the fit will be the total magnitude and total isophotal size , determined from the point where the fit has a slope of zero . a second solution is to use an obscure technique involving rational functions . a rational function is the ratio of two polynomial functions , r(x ) = p_m(x)/q_n(x ) , where m and n are the degrees of the numerator and denominator polynomials . rational functions have a wide range in shape and have better interpolating properties than polynomial functions , particularly suited for fits to data where an asymptotic behavior is expected . a disadvantage is that rational functions are non - linear and , when unconstrained , produce vertical asymptotes due to roots in the denominator polynomial . a small amount of experimentation found that the best rational function for aperture luminosities is the quadratic / quadratic form , meaning a degree of 2 in the numerator and denominator . this is the simplest rational function with the desired behavior and has the advantage that the asymptotic magnitude is simply the ratio of the two leading ( quadratic ) coefficients , although in practice it is best evaluated at some radius in the halo of the galaxy under study . usually the aperture luminosity values will not converge at the outer edges of a galaxy . this is the second challenge to aperture photometry , correct determination of the luminosity due to the faint galaxy halo . this is where the surface photometry profile comes in handy . contained in that data is the relationship between isophotal luminosity and radius , using all the pixels around the galaxy . this is often a more accurate number than attempting to determine the integrated luminosity in an annulus at the same radius . this information can be used to constrain the curve of growth in two ways . one , we can use the actual surface brightness intensities and convert them to a luminosity for each annulus at large radii . then , this value can be compared to the aperture value and a user ( or script ) can flag where the two begin to radically deviate . often even the isophotal intensities will vary at large radii and , thus , a second , more stable method is to make a linear fit of an exponential , de vaucouleurs r or combined function to the outer radii and interpolate / extrapolate that fit to correct the aperture numbers . figure 9 displays the results for all three techniques for the galaxy ngc 1003 . the black symbols are the raw intensities summed from the image file . the blue symbols are the intensities determined from the surface photometry . the orange symbols are the intensities determined from the fits to the surface photometric profile . this was one of the worst case scenarios due to the fact that the original image is very lsb ( in the near - ir band ) . due to noise in the image and surface photometry , the outer intensities grow out of proportion to the light visible in the greyscale figure . a fit to the raw data does not converge ( blue line ) . a 2nd order fit to the profile fit ( orange line ) also fails to capture the asymptotic nature . the rational function fit ( pink line ) does converge to an accurate value . if similar types of galaxies are being analyzed , it is a simple procedure to automate this process . in the past , when disk space was at a premium and i / o rates were slow , astronomical data was stored in machine specific formats .
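returning to the curve of growth , a minimal sketch of the rational - function fit described above is given below , assuming scipy is available ; the starting guess , the use of the sky error as the per - point uncertainty and the variable names are illustrative choices , and in practice bounds on the denominator coefficients may be needed to keep the fit away from the vertical asymptotes mentioned earlier .

....
import numpy as np
from scipy.optimize import curve_fit

def rational22(r, a0, a1, a2, b1, b2):
    """Quadratic / quadratic rational function used for the curve of growth."""
    return (a0 + a1 * r + a2 * r**2) / (1.0 + b1 * r + b2 * r**2)

def asymptotic_magnitude(radius, ap_mag, sky_err_mag):
    """Fit the aperture magnitudes (or luminosities) versus radius and return
    the asymptote a2/b2 of the (2,2) rational function, i.e. the total value."""
    sigma = np.full_like(ap_mag, sky_err_mag, dtype=float)
    p0 = [ap_mag[0], 0.0, ap_mag[-1], 0.1, 1.0]  # crude but serviceable start
    popt, _ = curve_fit(rational22, radius, ap_mag, p0=p0, sigma=sigma,
                        maxfev=20000)
    a0, a1, a2, b1, b2 = popt
    return a2 / b2
....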
however , today disk space is plentiful and file access times are similar to processing times on most desktop systems .thus , a majority of simple astronomical databases are stored in flat file format , also called plain text or ascii files ( note : this is an interesting throwback to the original data methods from the end of the 19th century , where information is stored as a system of data and delimiters , such as spaces or commas ) .the endproduct data files for most packages rarely exceed a few kilobytes or a few hundred lines .the simplest access to these types of files is an editor such as vi or emacs .sufficient documentation ( i.e. header files ) makes understanding the data , and writing applications for further analysis these data files , a relatively simple task .however , there is a strong driver to migrate output files into xml format for even the simplest data files .extensible markup language ( xml ) is a w3c - recommended general - purpose markup language that is designed store and process data .the core of xml is the use of tags , similar to html tags ( i.e. ` < tag > data</tag > ` ) , to delineate data values and assign attributes to those values ( for example , units of measure ) .xml is not terse and , therefore , somewhat human - legible ( see figure 1 for xml example of astronomical data ). scans .the red ellipses are fits from the 2mass galaxy pipeline. the blue ellipses are the resulting fits from this package .the 2mass fits clearly fail to follow the flatter isophotes in the outer regions .this results in an underestimate of these isophote s intensity values , as seen in the surface brightness profile in the bottom panel . ]xml has several key advantages over plain file formats .for one , xml format allows an endless amount of additional information to be stored in each file that would not have fit into the standard data plus delimiter style .for example , calibrating data , such as redshift or photometric zeropoint , can be stored in each file along with the raw data with very little increase in file size overhead as the tags handle the separation .there is no need to reserve space for these quantities nor is there any problem adding future parameters to the xml format .using xml format puts all the reduction data into a single file for compactness and , in addition , since xml files are plain text files , there is no problem with machine to machine transfer .the reading of xml files is not a complication for either compiled or interpreted languages .a disadvantage to xml format is that it s clumsy to read .however , there exist a number of excellent xml editors on the market ( for example , http://www.oxygenxml.com ) .these allow a gui interface with an efficient query system to interact with the xml files .while many users would prefer to interact with the raw data files in plain text form , in fact , even a simple editor is gui window into the bytes and bits of the actual data on machine hardware .a gui xml editor is simply a more sophisticated version of vi or emacs. scans .the red ellipses are fits from the 2mass galaxy pipeline. the blue ellipses are the resulting fits from this package .the 2mass galaxy pipeline failed to follow the isophote twists ( i.e. changes in position angle ) .this results in an underestimate of these isophote s intensity values , as seen in the surface brightness profile in the bottom panel . ]an additional reason to migrate to xml is a new power that xml data files bring to data analysis .many interpreted languages ( i.e. 
python and perl ) have an _ eval _ or _ exec _ function , a method to convert xml data into actual variables within the code at runtime ( i.e. dynamically typed ) . this is a powerful feature for analysis programs as one does not have to worry about formats or the type of data entries , this is handled in the code itself . dynamical typing introduces a high level of flexibility to code . in python , one can convert xml data ( using python s own xml modules to read the data ) into lists that contain the variable name and value , then transform these lists into actual code variables using an _ exec _ command . for example ,

....
for var, value in xml_vars:
    exec(var + '=' + value)
....

produces a set of new variables in the running code . and python s unique try / except processing traps missing variables without aborting the routine . for example , if the variable redshift may or may not exist in the xml data file , then

....
try:
    distance = redshift / h_o
except:
    print 'redshift undefined'
    distance = std_distance
....

this same try / except processing also traps overflows and other security flaws that might be used by a malicious user attempting to penetrate your server using the xml files . thus , xml brings a level of security as well as enhancing your code . lastly , another advantage to xml format is the fact that all of the reduction data ( ellipse fitting , aperture photometry , calibration information , surface photometry ) can be combined into a single file , e.g. galaxy_name.xml , which can be interrogated by any analysis routine that understands xml . a simple switch at the end of the reduction process integrates the data into an xml file for transport , or access by plotting packages , etc . if you have read this far , and are still awake , this section walks through the reduction of part of the revised shapley - ames sample ( schombert 2007 ) taken from the 2mass database that overlaps the 2mass large galaxy atlas ( jarrett 2003 ) . as a cautionary tale about the importance of doing large galaxy photometry with care , we also offer in this section a comparison of our technique with the results from an automated , but much cruder reduction pipeline from the 2mass project . allowing the ellipses to vary from isophote to isophote , not only in eccentricity , but also in position angle and ellipse center , is critical to obtaining an accurate description of a galaxy s luminosity profile . shown in figures 10 , 11 and 12 are examples of images extracted from the 2mass archives that were part of the large galaxy atlas ( jarrett 2003 ) . in each case , the 2mass pipeline calculates a luminosity profile based on the isophotes around an ellipse from the mean moments of the whole galaxy . thus , the fitted ellipses do not change in axial ratio or position angle and , for most spirals , this technique will result in an ellipse that is too flat in the core and , often , too round in the halo regions . if the galaxy has a bar , this technique will also underestimate the bar contribution , spreading its light into larger radii ellipses . the red ellipses are fits from the 2mass galaxy pipeline . the blue ellipses are the resulting fits from this package . the 2mass galaxy pipeline , for some strange reason , assumed a circular shape for what is clearly an elongated galaxy . this results in an underestimate of these isophote s intensity values , as seen in the surface brightness profile in the bottom panel .
]given that the light is averaged around the ellipse , this effect may be minor if the galaxy is fairly smooth and uniform .however , galaxies that are smooth and regular are a minority in the local universe . for the three examples , shown in figure 10 , 11 and 12 ,the 2mass fits consistently underestimate the amount of disk light per isophote , as seen in comparison to the luminosity profile determined from the raw data using the archangel routines .this , in turn , results in fitted central surface brightnesses that are too bright in central surface brightness , and fitted disk scale lengths that are too shallow .in fact , for 49 galaxies in common between the near - ir rsa sample ( schombert 2007 ) and the 2mass large galaxy atlas , figure 13 displays the difference between the fitted disk scale lengths ( ) and the difference between the fitted disk central surface brightness ( ) . given the typical s , the error in 2mass fits corresponds to a 50% error in a galaxy s size .likely , errors in the central surface brightness fits averages around 0.5 mags .thus , not using the proper reduction technique not only increases the noise in the measured parameters , but produces a biased result . ) from fits of an exponential law to 2mass surface photometry versus this package s surface brightness reduction .the bottom panel displays the difference in fitted center surface brightnesses between the 2mass profiles and the new rsa sample .typically , the 2mass pipeline overestimates the eccentricity ( too round ) which results in smaller scale lengths and brighter central surface brightnesses ., width=642 ]one of the more powerful modules to the python language is the , the module that allows python scripts to download any url address . if address is a web page , therealso exist several addition modules that parse html and convert html tables into arrays .this means a simple script can be written to pull down a web page , parse it html and extract a data into table format . and ,on top of this procedure , the information could be then be used into a standard get / post web form used by many data archives . as an example, the package contains _ dss_read _ , a script that takes the standard name for a galaxy , queries ned for its coordinates and then goes to the dss website and extracts the pss - ii image of the galaxy . while this sounds like a computationally intense task , in fact the script is composed of 49 lines .the downside to this network power is , of course , the possibility of abuse .unrestricted application of such scripts will overload websites and given network speeds , the typical user does nt need their own personal digital sky at their home installation .lastly , various archives , in order to slow massive downloads , have an id / password interface .to penetrate these sites requires the _ mechanize _ module which simulates the actions of a brower , following links , parsing i d s and passwords and handling cookies .while these avatars are simple to build , the wise usage of them remains a key challenge for the future .the fastest way to learn a data reduction process is to jump in and try it . 
to this end, the tarball contains all the images discussed in this document , and several test images with known output .this allows the user to practice on images where the final results are known .thus , we encourage the readers to download , compile and run !tarballs are found at http://abyss.uoregon.edu//archangel .another option , for the user who does nt wish to set - up the package on their own system ( or perhaps only has a handful of galaxies to reduce ) , is the client / server version of this package available at http://http://abyss.uoregon.edu/ / nexus ( see figure 14 ) .although more limited in its options , the web version has the advantage of speed ( it s run on a solaris sun blade ) and a fast learning curve . as to the future ,a number of tools need to be added to this package .for example , quantitative morphology uses the concentration and asymmetry indices to parameterize a galaxy s global structure .while these values are easy to extract from small angular size objects , they are a challenge for large systems . yet, a detailed comparison of these values to visual morphology is a key step in understanding quantitative morphology at higher redshifts .however , in order get the current tools out to the community , the package is frozen .additional tools will be added to the package website as , in order of priority , 1 ) needed by the pi to meet various science goals , 2 ) requested by outside users to obtain their science goals , and 3 ) requested by outside users as possible new computational areas to explore . as with all evolving software ,an interested user should contact the author to see where future directions lie ( js.uoregon.edu ) .this project was funded by joe bredekamp s incredible nasa s airs program .i am grateful to all the suggestions i have gotten from airs pi s at various workshops and panel reviews .the program is a mixed of technology plus science types and is one of nasa s true gems for innovative research ideas .burstein , d. , davies , r. l. , dressler , a. , faber , s. m. , stone , r. p. s. , lynden - bell , d. , terlevich , r. j. , & wegner , g. 1987 , , 64 , 601 cawson , m. g. m. , kibblewhite , e. j. , disney , m. j. , & phillipps , s. 1987 , , 224 , 557 graham , a. w. , & driver , s. p. 2005 , publications of the astronomical society of australia , 22 , 118 jarrett , t. h. , chester , t. , cutri , r. , schneider , s. e. , & huchra , j. p. 2003, , 125 , 525 jedrzejewski , r. i. 1987 , , 226 , 747 schombert , j. m. , & bothun , g. d. 1987 , , 93 , 60 de vaucouleurs , g. 
1977 , , 33 , 211this package is a combination of fortran and python routines .the choice of these languages was not arbitrary .python is well suited for high level command processing and decision making .it is a clear and expressive language for text processing .therefore , its style is well suited to handling file names and data structures .since it is a scripting language , it is extremely portable between os s .currently , every flavor of unix ( linux , mac os x and solaris ) comes packaged with python .in addition , there is a hook between the traditional astronomy plotting package ( pgplot ) and python ( called ppgplot ) , which allows for easy gui interfaces that do not need to be compiled .the use of fortran is driven by the fact that many of the original routines for this package were written in fortran .for processing large arrays of numbers , c++ provides a faster routine , but current processor speeds are such that even a 2048x2048 image can be analyzed with a fortran program on a dual processor architecture faster than the user can type the next command .stsci provides a hook to fits formats and arrays ( called pyfits and numarray ) , but python is a factor of 100 slower than fortran for array processing .currently there are three fortran compilers in the wild , g77 , gfortran and g95 .the routines in this package can use any of these compilers plus a version of python greater than 2.3 .cfitsio is required and avaliable for all os s from its gsfc website .the python libaries pyfits and numarray are found at stsci s pyraf website .for any graphics routines , the user will need a verson of pgplot and install ppgplot as a python library .the ppgplot source is avaliable at the same website as this package .the graphics routines are only needed for data inspection , the user should probably develop their own high - level graphics to match their specifics . in the directory/util one can find all the python subroutines to fit 1d data surface or aperture photometry .the examples in this manual will guide you in constructing your own interface .lastly , the output data files for this package are all set in xml format .this format is extremely cumbersome and difficult to read ( it is basically an extension of the html format that web browsers use ) .however , a simple command line routine is offered ( ) that will dump or add any parameter or array out of or into a xml file .to go from a raw data frame containing a galaxy image to a final stage containing ellipse fits , surface photometry , profile fits and aperture values requires three simple scripts , _ profile _ , _ bdd _ and _ el_. the scripts _ profile _ and _ el _ are automatic and can be run as batch jobs ._ bdd _ is an interactive routine to fit the surface photometry and is a good mid - point to study the results of the ellipse fitting . in a majority of cases , the user simply needs to run those three scripts with default options to achieve their science results ..... 
usage : sky_box option file_name box_sizeprf_file options : -h = this message -f = first guess of border -r = full search , needs box_size and prf_file -t = full search , needs box_size -c = find sky for inner region ( flats ) needs x1,x2,y1,y2 boundarys output : 1st mean , 1st sig , it mean , it sig npts , iterations ....ellipse fitting routine , needs a standard fits file , output in .prf file format ( xml_archange converts this format into xml ) options : -h = this mesage -v= output each iteration -q = quiet -xy = use new xc and yc -rx = max radius for fit -sg = deletion sigma ( 0=no dets ) -ms = min slope ( -0.5 ) -rs = stopping radius -st = starting radius when deleting , output fits file called file_name.jedsub .... cursor commands : / = abort q = move to next frame c = contrast r = reset zoom z = zoom t = toggle ellipse plot p = peek at values a,1 - 9 = delete circle b = delete box .... window # 2 cursor commands : x = erase point d = disk fit only m = erase all min pts f = do bulge+disk fit u = erase all max pts e = do r**1/4 fit only b = redo boundaries p = toggle 3fit/4fit q = abort r = reset graphics / = write .xml file and exit .... add or delete data into xml format -o = output element value or array -d = delete element or array -a = replace or add array , array header and data is cat'ed into routine -e = replace or add element -c = create xml file with root element -k = list elements , attributes , children ( no data ) .... | photometry of galaxies has typically focused on small , faint systems due to their interest for cosmological studies . large angular size galaxies , on the other hand , offer a more detailed view into the properties of galaxies , but bring a series of computational and technical difficulties that inhibit the general astronomer from extracting all the information found in a detailed galaxy image . to this end , a new galaxy photometry system has been developed ( mostly building on tools and techniques that have existed in the community for decades ) that combines ease of usage with a mixture of pre - built scripts . the audience for this system is a new user ( graduate student or non - optical astronomer ) with a fast , built - in learning curve to offer any astronomer , with imaging data , a suite of tools to quickly extract meaningful parameters from decent data . the tools are available either by a client / server web site or by tarball for personal installation . the tools also provide simple scripts to interface with various on - line datasets ( e.g. 2mass , sloan , dss ) for data mining capability of imaged data . as a proof of concept , we preform a re - analysis of the 2mass large galaxy atlas to demonstrate the differences in an automated pipeline , with its emphasis on speed , versus this package with an emphasis on accuracy . this comparison finds the structural parameters extracted from the 2mass pipeline is seriously flawed with scale lengths that are too small by 50% and central surface brightness that are , on average , 1 to 0.5 mags too bright . a cautionary tale on how to reduce information - rich data such as surface brightness profiles . this document and software can be found at http://abyss.uoregon.edu//archangel . |
current research in music - information - retrieval(mir ) is largely limited to western music cultures and it does not address the north - indian - music - system hereafter nims , cultures in general .nims raises a big challenge to current rhythm analysis techniques , with a significantly sophisticated rhythmic framework . we should consider a knowledge - based approach to create the computational model for nims rhythm .tools developed for rhythm analysis can be useful in a lot of applications such as intelligent music archival , enhanced navigation through music collections , content based music retrieval , for an enriched and informed appreciation of the subtleties of music and for pedagogy .most of these applications deal with music compositions of polyphonic kind in the context of blending of various signals arising from different sources .apart from the singing voice , different instruments are also included . as per rhythmrelates to the _ patterns of duration _ that are phenomenally present in the music .it should be noted that that these _ patterns of duration _ are not based on the actual duration of each musical event but on the inter onset interval(ioi ) between the attack points of successive events . as per [ ] ,an accent or a stimulus is marked for consciousness in some way .accents may be phenomenal , i.e. changes in intensity or changes in register , timbre , duration , or simultaneous note density or structural like arrival or departure of a cadence which causes a note to be perceived as accented .it may be metrical accent which is perceived as accented due to its metrical position [ .percussion instruments are normally used to create accents in the rhythmic composition . the percussion family which normally includes timpani ,snare drum , bass drum , cymbals , triangle , is believed to include the oldest musical instruments , following the human voice [ ] .the rhythm information in music is mainly and popularly provided by the percussion instruments .one simple way of analyzing rhythm of a composite or polyphonic music signal having some percussive component , may be to extract the percussive component from it using some source separation techniques based on frequency based filtering .various attempts have been made in western music to develop applications for re - synthesizing the drum track of a composite music signal , identification of type of drums played in the composite signal[ex .the works of etc ., described in section [ past ] in detail ] .human listeners are able to perceive individual sound events in complex compositions , even while listening to a polyphonic music recording , which might include unknown timbres or musical instruments .however designing an automated system for rhythm detection from a polyphonic music composition is very difficult . 
in the context of nims rhythmpopularly known as _ tla _ , _ tabl _ is the most popular percussive instrument .its right hand drum-_dayan _ and left hand drum-_bayan _ are played together and amplitude - peaks spaced at regular time intervals , are created by playing every stroke .one way of rhythm information retrieval from polyphonic composition having _ tabl _ as one of the percussive instruments , may be to extract the _ tabl _ signal from it and analyze it separately .the _ dayan _ has a frequency overlap with other instruments and mostly human - voice for polyphonic music , so if we extract the whole range of frequencies for both _ bayan _ and _ dayan _ components , by existing frequency based filtering methods , the resultant signal will be a noisy version of original song as it will still have part of other instruments , human voice components along with _tabl_. also conventional source separation methods lead to substantial loss of information or sometimes addition of unwanted noise .this is the an area of challenge in _ tla _ analysis for nims .although , nims _ tla _ functions in many ways like western meter , as a periodic , hierarchic framework for rhythmic design , it is composed of a sequence of unequal time intervals and has longer time cycles .moreover _ tla _ in nims is distinctively cyclical and much more complex compared to western meter [ ] .this complexity is another challenge for _ tla _ analysis . due to the above reasons defining a computational framework for automatic rhythm information retrieval for north indian polyphonic compositionsis a challenging task .very less work has been done for rhythmic information retrieval from a polyphonic composition in nims context . in western music , quite a few approaches are followed for this purpose , mostly in the areas of beat - tracking , tempo analysis , annotation of strokes / pulses from the separated percussive signal .we have described these systems in the section [ past ] . for nims ,very few works of rhythm analysis are done by adopting western drum - event retrieval system .these works result in finding out meter or speed which are not very significant in the context of nims .hence this is an unexplored area of research for nims . in this workwe have proposed a completely new approach , i.e.instead of extracting both _bayan _ and _ dayan _ signal , we have extracted the _ bayan _ signal from the polyphonic composition by using band - pass filter .this filter extracts lower frequency part which normally does not overlap with the frequency of human voice and other instruments in a polyphonic composition .most of the _ tla_-s start with a _ bol _ or stroke which has a _ bayan _ component(either played with _ bayan _ alone or both _bayan _ and _ dayan _ together ) and also the some consequent section(_vibhga _ in nims terminology ) boundary-_bol_-s have similar _ bayan _ component .hence these strokes would be captured in the extracted _ bayan _ signal . for a polyphonic composition, its _ tla _ is rendered with cyclically recurring patterns of fixed time - lengths .this is the cyclic property of nims , discussed in detail in section [ defn ] . 
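to make the filtering step concrete , the python sketch below band - pass filters a mono signal and then does a crude energy - based peak picking on the filtered output ; the 60 - 200 hz band , the filter order and the threshold are illustrative assumptions and not the settings used in this work .

....
# sketch of low-band extraction followed by crude stroke detection; the band
# edges, filter order and threshold are illustrative assumptions only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def extract_low_band(x, sr, f_lo=60.0, f_hi=200.0, order=4):
    """Return the low-frequency (bayan-like) component of a mono signal x."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, x)

def stroke_onsets(x_low, sr, hop=512, rel_thresh=0.3):
    """Pick local maxima of the short-time energy envelope as candidate strokes."""
    frames = len(x_low) // hop
    env = np.array([np.sqrt(np.mean(x_low[i * hop:(i + 1) * hop] ** 2)) for i in range(frames)])
    thr = rel_thresh * env.max()
    peaks = [i for i in range(1, frames - 1)
             if env[i] > thr and env[i] >= env[i - 1] and env[i] > env[i + 1]]
    return np.array(peaks) * hop / sr           # candidate onset times in seconds

if __name__ == "__main__":
    sr = 22050
    t = np.arange(2 * sr) / sr
    demo = np.sin(2 * np.pi * 100 * t) * (np.sin(2 * np.pi * 2 * t) > 0.99) + 0.05 * np.random.randn(len(t))
    print(stroke_onsets(extract_low_band(demo, sr), sr))
....

once candidate stroke positions are available , the cyclic structure described above can be searched for in the sequence of inter - onset intervals .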
so after extracting the starting _bol_-s and the section boundary strokes from the _ bayan _ signal , we can exploit the cyclic property of a _ tla _ and the pattern of strokes appearing in a single cycle and can detect important rhythm information from a polyphonic composition .this would be a positive step towards rhythm information retrieval from huge collection of music recordings for both film music and live performances of various genres of _ hindi _ music . here , we consider the _ tla _ detection of different single - channel , polyphonic clips of _ hindi _ vocal songs of devotional , semi - classical and movie soundtracks from nims , having variety of tempo and _ mtr_-s . the rest of the paper is organized as follows .a review of past work is presented in section [ past ] .some definitions are provided in section [ defn ] . in section [ meth ]the proposed methodology is elaborated .experimental results are placed in section [ exp ] and the paper ends with concluding remarks in section [ con ] .the basic identifying features of rhythm or _ tla _ in nims are described as follows . * * _ tla _ and its cyclicity : * north indian music is metrically organized and it is called_ nibaddh_(bound by rhythm ) music .this kind of music is set to a metric framework called _tla_. each _ tla _ is uniquely represented as cyclically recurring patterns of fixed time - lengths . * * _ vart _ : * this recurring cycle of time - lengths in a _tla _ is called _ vart_. _ vart _is used to specify the number of cycles played in a composition , while annotating the composition . * * _ mtra : _ * the overall time - span of each cycle or _vart _ is made up of a certain number of smaller time units called _ mtra_-s .the number of _mtra_-s for the nims _tla_-s , usually varies from to . * * _ vibhga : _ * the _ mtra_-s of a _ tla _ are grouped into sections , sometimes with unequal time - spans , called _ vibhga_-s .+ * * _ bol : _ * in the _ tla _ system of north indian music , the actual articulation of _ tla _ is done by certain syllables which are the mnemonic names of different strokes / pulses corresponding to each _ mtra_. these syllables are called _ bol_-s. there are four types of _ bol_-s as defined below . 1 . * _ sam : _ * the first_ mtr _ of an _ vart _ is referred as _sam _ which is mandatorily stressed [ ] ._ tl - bol _ : * _ tl - bol_-s are usually stressed , whereas _khali_-s are not ._ tl - bol_-s are gestured by the _ tabl _player with claps of the hands , hence are called _sasabda kriya_. the _ sam _ is almost always a _ tl - bol _ for most of the _tla_-s , with only exception of _ rupak _ _ tla _ which designates the _ sam _ with a moderately stressed _bol _ called _khali_(as explained below ) [ ] .+ highly stressed _ vibhga _ boundaries are indicated through the __ tl - bol__s[ ] ._ tl - sam _ is indicated with a ( ) in the rhythm notation of nims .consequent _ tl - vibhga_-boundaries are indicated with .3 . * _ khali - bol _ : * _ khali _ literally means empty and for nims it implies wave of the hand or _nisabda kriya_. 
moderately stressed _ vibhga _ boundaries are indicated through the _ _ khali - bol__sso we almost never find the _ khali _ applied to strongly stressed _bol_-s like _ sam _ [ ] .+ _ khali - sam _ is indicated with a ( ) in the rhythm notation of nims and consequent _ khali - vibhga_-boundaries are indicated also with .* absent-_bol _ : * sometimes while playing _tabl _ , certain _bol_-s are dropped maintaining the perception of rhythm intact .they are called rests and they have equal duration as a _bol_. we have termed them as absent strokes/_bol_-s .bol_-s are denoted by in the rhythm notation of a nims composition http://www.ancient-future.com/pronuind.html[ancient-future ] . in the figure [ absent ] ,the waveform of absent _ bol _ , denoted by , is shown just after another _ bol ta _ , played in a _ tabl_-solo .+ normally in a nims composition there may be many absent _ bol_-s in the _ thek _ played for the _ tla_. in these cases other percussive instruments(other than _ tabl _ ) and vocal emphasis might generate percussive peaks for the time positions of the absent strokes , depending on the composition , the lyrics being sung and thus the rhythm of the composition is maintained . * * _ thek _ : * for _ tabl _ , the basic characteristic pattern of _ bol_-s that repeats itself cyclically along the progression of the rendering of _ tla _ in a composition , is called _ thek_. in other words it s the most basic cyclic form of the _ tla _ [ ] . naturally _ thek _ corresponds to the basic pattern of _ bol_-s in an _ vart_. the strong starting _sam _ along with the _ tl_-_vibhga_-boundaries in a __ carries the main accent and creates the sensation of cadence and cyclicity .* description of the definitions with an example : * the details of these theories are shown in the structure of a _ tla _ , called _ jhaptal _ in the table [ jhap ] and figure [ fig_jhap ] .the hierarchy of the features and their interdependence are shown in the figure [ heir ] .the cyclic property of _ tla _ is evident here .[ cols="<,<,<,<,<,<,<,<,<,<,<",options="header " , ] overall _ tla _ and tempo detection performance is shown in table [ gross ] .it is clear that the proposed methodology performs satisfactorily and that too with wide variety of data .ll + _ mtr _ detection&tempo detection + 81.59&78.60 +1 . this paper presents the results of analysis of _ tabl _ signal of north indian polyphonic composition , with the help of new technique by extracting the _ bayan _ signal .the justification of using _ bayan _ signal as the guiding signal in case of north indian polyphonic music and detecting _ tla _ using the parameters of nims rhythm , has been clearly discussed .a large number of polyphonic music samples from _ hindi _ vocal songs from _ bhajan _ or devotional , semi - classical and filmy genres were analyzed for studying the effectiveness of the proposed new method .the experimental result of the present investigation clearly supports the pronounced effectiveness of the proposed technique .5 . we would extend this methodology for studying other features(both stationary and non - stationary ) of the all the relevant _ tla_-s of nims and designing an automated rhythm - wise categorization system for polyphonic compositions .this system may be used for content - based music retrieval in nims . also a potential tool in the area of music research and training is expected to come out of it. limitations of the method is that it can not distinguish between _ tla_-s of same _ matr_. 
for example _ deepchandi _ and _ dhamar _ _ tla_-s have number of _ matr_-s , textitbol - s and beats in a cycle .we plan to extend this elementary model of _ tla_-detection system for all the nims _tla_-s , by including other properties like timbral information and nonlinear properties of different kinds of _ tabl _ strokes/_bol_-s .we may also attempt to transcript the _tla_-_bol_-s in a polyphonic composition . this extended version of the model may address the nims _tla_-s which share same _ matr _ and also have variety of _ lay_-s .we thank the * department of higher education and rabindra bharati university , govt . of west bengal ,india * for logistics support of computational analysis .we also thank renowned musician http://www.subhranilsarkar.com[subhranil sarkar ] for helping us to annotate test data , validate test results and shiraz ray(http://www.dgfoundation.in[deepa ghosh research foundation ] ) for extending help in editing the manuscript to enhance its understandability .nobutaka ono , kenichi miyamoto , j.l.r . , kameoka , h. , sagayama , s. : separation of a monaural audio signal into harmonic / percussive components by complementary diffusion on spectrogram . in : proc . of the eusico european signal processing conf .( 2008 ) b.schuller , eyben , f. , rigoll , g. : fast and robust meter and tempo recognition for the automatic discrimination of ballroom dance styles . in : acoustics , speech and signal processing , 2007 .icassp 2007 .ieee international conference .( 2007 ) 217220 bhaduri , s. , saha , s.k . ,mazumdar , c. : _ matra _ and _ tempo _ detection for indic _ tala_-s . in : advanced computing and informatics proceedings of the second international conference on advanced computing , networking and informatics ( icacni-2014 ) .volume 1 .( 2014 ) 213 220 bhaduri , s. , saha , s.k . ,mazumdar , c. : a novel method for tempo detection of indic _ tala_-s . in : emerging applications of information technology ( eait ) ,2014 fourth international conference of ieee .( 2014 ) 222227 maity , a.k . ,pratihar , r. , mitra , a. , dey , s. , agrawal , v. , sanyal , s. , banerjeeb , a. , sengupta , r. , ghosh , d. : multifractal detrended fluctuation analysis of alpha and theta eeg rhythms with musical stimuli .chaos , solitons and fractals * 81(a ) * ( 2015 ) 5267 uhle , c. , dittmar , c. , sporer , t. : extraction of drum tracks from polyphonic music using independent subspace analysis . in : in proceedings of the 4th international symposium on independent component analysis and blind signal separation ( ica2003 ) .( 2003 ) 843848 yoshii , k. , goto , m. , okuno , h.g . : automatic drum sound description for real - world music using template adaptation and matching methods . in : in proceedings of the international conference on music information retrieval ( ismir ) .( 2004 ) 184191 | in north - indian - music - system(nims),_tabl _ is mostly used as percussive accompaniment for vocal - music in polyphonic - compositions . the human auditory system uses perceptual grouping of musical - elements and easily filters the _ tabl _ component , thereby decoding prominent rhythmic features like _ tla _ , tempo from a polyphonic - composition . for western music , lots of work have been reported for automated drum analysis of polyphonic - composition . however , attempts at computational analysis of _ tla _ by separating the _ tabl_-signal from mixed signal in nims have not been successful . _ tabl _ is played with two components - right and left . 
the right - hand component has a frequency overlap with the voice and other instruments , so _ tla _ analysis of a polyphonic composition , by accurately separating the _ tabl _ signal from the mixture , is a difficult task and therefore an area of challenge . in this work we propose a novel technique for detecting the _ tla _ using the left-_tabl _ signal , which produces meaningful results because the left-_tabl _ normally does not have a frequency overlap with the voice and other instruments . north indian rhythm follows a complex cyclic pattern , in contrast to the linear approach of western rhythm . we have exploited this cyclic property , along with the stressed and non - stressed methods of playing _ tabl _ strokes , to extract a characteristic pattern from the left-_tabl _ strokes which , after matching with the grammar of the _ tla _ system , determines the _ tla _ and tempo of the composition . a large number of polyphonic ( vocal + _ tabl _ + other instruments ) compositions have been analyzed with this methodology and the results clearly reveal the effectiveness of the proposed techniques . * * keywords:**left-_tabl _ drum , _ tla _ detection , tempo detection , polyphonic composition , cyclic pattern , north indian music system |
the development environment that we have used for the spm simulator program is similar that used to develop our real - time scanning program rtspm. the details can be found in ref ., so we shall only give an outline here .we emphasize that all the software used is open - source . in order to achieve real - time control ,we use a desktop computer running a linux kernel patched with the real time application interface ( rtai). communication with the data acquisition hardware is achieved through the open - source comedi drivers with appropriate real - time extensions that interface well with rtai . for this paper, we used a national instruments pci-6052e card for input and output , although any of the data cards supported by comedi can be used , so long as they meet the required data acquisition rates of the program .the pci-6052e card has analog input and output rates of 333 ks / sec , more than sufficient for our purposes , but we have also used other national instruments data acquisition cards .the real time components of the program are coded in a library as callable subroutines that are written in c using the integrated development environment code::blocks, which is also open - source . the graphical user interface ( gui )is written in free pascal using the open - source integrated development environment lazarus; the real - time parts of the program are called by this main program .figure [ fig1 ] shows a screenshot of the gui of the program , which lets the user choose the channels for the data acquisition inputs and outputs .there are three data acquisition inputs to the program , corresponding to the , and voltage drives of a standard piezotube scanner , which can be selected independently from the gui .these inputs are provided by a spm control program : in our case , this control program is the real - time control program rtspm that we developed earlier. the voltages on the and channels are assumed to correspond directly to the and displacements of the scanner : of course , one can easily implement an appopriate scale factor for each channel independently to model specific hardware if needed .for the voltage , the user can input the scale factor directly .for the data shown in this paper , we use a scale factor of 155 nm / v , corresponding to the scan tube in our physical instrument . for testing purposes , reading of the and channels can be disabled , and the and positions of the scanner can be manually entered from the gui . as we noted earlier ,the real - time part of the program is very simple in concept .once the simulator is started , a real - time loop runs continuously with a loop time of 50 . in the loop itself, the program first determines the and positions of the scanner .if external scanning is enabled , these positions are read directly from the and input channels , otherwise they are taken to be the manually entered values discussed above .depending on the and positions of the scanner , the program then determines what the corresponding topographical height should be . 
for this paper , we have taken the simplest structure , an array of squares of specified lateral dimension and specified height with a lattice constant of 1 m , although clearly more complicated topographic structures can easily be programmed .the program then reads the channel to determine the height of the scanner .since the program now knows the height of the scanner and the topographic height of any feature at that position directly below it , it can then calculate and output a voltage according to whatever model is being used to generate the tip - sample interaction .this output voltage , which corresponds to the feedback signal input to the spm controller program , is then read by the spm controller program to adjust the piezo voltage accordingly .the process is then repeated on the next 50 cycle .the voltage that the program generates depends on the model of the tip - sample interaction .we discuss below two models that we have implemented , although other tip - sample interactions can also be modeled .our own research is devoted to development of a low temperature scanning probe microscope , so it is natural for us to first try to implement a model for an atomic force microscope .our home - built scanning probe microscope is based on a tuning fork transducer, so the parameters in the model described below will refer to the measured parameters from this instrument , but the program allows these parameters to be modified to match any force transducer .we start by assuming that the interaction between the tip and sample is due to van der waals forces at larger distances with a strong repulsion at short distances. this interaction can be described by a lennard - jones type potential of the form where is the distance between the tip and the sample , and and are constants .the first term describes the strong repulsion between tip and sample at very short distances , and the second describes the relatively short range van der waals interaction .the two unknowns in the potential are the parameters and .we shall use the measured characteristics of the close - approach curve to determine these constants .the force between the tip and the sample corresponding to this potential is to eliminate one of these constants , we specify the position of the minimum of force as a function of as . by setting at ,this allows us to express in terms of and , a tuning fork based afm is operated in non - contact mode .the tuning fork is oscillated at its resonance frequency , which for our tuning forks is close to 32768 hz .the tip - sample interaction modifies the resonant frequency .the shift in resonant frequency can be thought of as arising from a shift in the effective spring constant of the tuning fork where is the spring constant of the tuning fork far from the surface .this is because the resonance frequency is proportional to the square root of the spring constant .we shall use =1800 n / m , corresponding to the spring constants of the tuning forks we normally use .now is given by the derivative of the tip - sample force .\label{eqn5}\ ] ] in non - contact mode , one can measure the amplitude or phase of the oscillation as a feedback signal .however , if the quality factor of the force transducer is large , as it is for the tuning forks , it may take a very long time for these signals to relax to their proper values . 
hence , it is common to use a phase - locked - loop ( pll ) to stay on the resonance and track the change in resonant frequency as a function of distance .thus , we will use the change in frequency as the feedback signal for our spm controller .the change in frequency is proportional to , which still has one unknown constant ( eqn .[ eqn5 ] ) . since onedirectly measures the frequency - distance curve during close approach , can be determined from this curve .let the total change in frequency between where the tip is far from the sample ( ) and the value of where the frequency shift has its minimum be .( is related to by . )then can be expressed as determining in a real experiment ( and hence the absolute values of or ) is difficult , since is the point at which the tip makes contact with the sample , and one prefers not to crash the tip into the sample . at distances far from the surface ( ) , is ideally 0 : in reality , due to experimental offsets , it may have a small finite value , which we denote .then the value of near the surface at which is by definition , and the minimum of as a function of occurs at .thus by measuring the value of at these two points , one can determine .consequently , by specifying the known or measured quantities , , and , one can determine all the parameters of the model .these quantities are entered in the main program by hand . the frequency shift , and hence the feedback signal that the program generates ,can then be calculated using eqns .( [ eqn4 ] ) , ( [ eqn5 ] ) and ( [ eqn6 ] ) .m etched tungsten wire acting as a tip .the spring constant of the tuning fork is 1800 n / m , and the curve was obtained with the rtspm program .red : approach curve obtained with the simulator program , using a lennard - jones type potential as described in the text ., width=321 ] the black trace in fig .[ fig2 ] shows an example of an experimental close - approach curve measured using our home - built tuning fork scanning probe microscope and the spm control program rtspm . from this curve , we determine hz and nm . the red trace in fig .[ fig2](b ) shows the corresponding curve generated using the spm simulator program , again taken with the spm control program rtspm .it can be seen that the experimental close - approach curve is broader than the simulation based on the lennard - jones potential .we do not know the origin of this discrepancy ; however , it should be noted that the experimental curve was taken with a conducting tip , and hence may be influenced by residual electrostatic interactions , which have a slower power - law dependence .m 12 m forward topographic scan with a resolution of 256 x 156 pixels .each square in the image is 0.5 0.5 m and has a height of 5 nm .( b ) line scan profiles corresponding to the line marked in ( a ) for the forward scan ( black trace ) and the reverse scan ( red trace ) . ,width=321 ] figure [ fig3](a ) shows an image generated by the rtspm program coupled to the spm simulator program .( this figure and other images in this paper are generated using the open source spm analysis program gwyddion. 
) as described earlier , the `` sample '' is a computer generated array of squares of side 0.5 m and height 5 nm , separated by a distance of 1 m .the rtspm has a real - time proportional - integral - differential ( pid ) controller that controls the extension of the scan tube based on the desired set point , which is a fixed frequency deviation from the limit .for the image in fig .[ fig3](a ) , the pid parameters used in the rtspm program were , ms , and ms with a set point of 2 hz , which puts us in the hard repulsive region of the approach curve . in this region , a small change in gives rise to a large change in frequency . nevertheless , as can be seen from the line profiles shown in fig .[ fig3](b ) , the topography is accurately mapped by the rtspm program in both the forward and reverse scan directions , with a difference corresponding to about 1 pixel . the scan resolution is 256x256 pixels , and the rtspm program averages for 10 ms on each pixel to reduce the noise , so the image of fig .[ fig3](a ) took approximately 12 minutes to acquire . , but with a non - realtime simulator coded in labview on a computer running windows xp .( b ) line scan profiles corresponding to the line marked in ( a ) for the forward scan ( black trace ) and the reverse scan ( red trace ) . ,width=321 ] in order to illustrate the importance of having a real - time spm simulator program , we have also encoded the same program on a desktop computer running windows xp in national instruments labview without any real - time extensions . in this case , even on a relatively fast computer , the minimum loop time is at best 10 ms .figure [ fig4](a ) shows the image obtained with the rtspm program using the same pid parameters and setpoint as in fig .[ fig3 ] , but coupled to the labview simulator .it can be seen that the image quality is much worse , and the line profiles in fig , [ fig4](b ) show that this is because the position of the scanner does not follow the topography . since all parameters in the rtspm program are the same for figs .[ fig3 ] and [ fig4 ] , the difference is clearly due to the lack of real - time response in the labview program .the van der waals interaction for afm in principle can be extended to other types of tip - sample interactions .for example , for scanning tunneling microscopy , one can model a tunneling current that depends exponentially on the distance between the tip and the sample , and also on bias . for a more sophisticated model, one can think of a conducting `` sample '' with a spatially inhomogenous density of electronic states .even relatively inexpensive modern multicore desktop computers should be able to calculate the feedback response of most tip - sample interaction models in a loop time of 50 . for long range tip - sample interactions ,the models are more complicated .for example , for magnetic force microscopy ( mfm ) on a ferromagnetic film , one must consider not only the magnetic interaction between the magnetic tip and the part of the sample just below the tip , but also the magnetic interaction with other parts of the ferromagnetic film that are much further away . in electrostatic force microscopy ( efm ) , the electrostatic interaction between the tip and the sample is also long range , and an accurate calculation would require numerical techniques .there have been a number of papers that have developed approximations for the tip sample interaction both in the context of efm and scanning capacitance microscopy. 
however , with some simplifiying assumptions , one can use a simple model that gives a realistic reproduction of an electrostatic image .the assumption that we shall make is that the conducting tip on the scan tube is very close to the conducting surface of the sample , so that the interaction between the tip and the sample can be approximated by the interaction between a small conducting sphere and an infinite conducting plane , where the radius of the small sphere is the same as the radius of the microscope tip .as can be expected , this assumption neglects fringe fields at the edges of a conducting region in the sample , but the result should be accurate beyond a few multiples of the tip - sample distance from any edge .consider then the force between a conducting sphere of radius and an infinite grounded conducting plane .this is a classic problem that can be solved by the method of images. the force between the sphere and conductor is given by where is the voltage difference between the sphere and plane ( which in the case of real metals will also include any contact potentials ) , and is the capacitance between them .since decreases with increasing distance , the force is always attractive .calculation of the force then reduces to calculation of the capacitance , which is given by the series where .the resulting expression for the force is since we would like to calculate the derivative of with respect to to calculate the shift in frequency along the lines of eqns .[ eqn4 ] and [ eqn5 ] , it would be nice to have a simpler equation to work with ._ et al.__ have shown that the following expression closely approximates eqn .[ eqn9 ] , with a maximum error of a few percent v^2 .\label{eqn10}\ ] ] we shall use this equation in our calculations of the frequency shift due to the electrostatic interaction .this frequency shift can be calculated from eqn .[ eqn4 ] and the relation .we obtain ^ 2}\right ] .\label{eqn11}\ ] ] unlike the equation for the van der waals interaction , we note that there are no free parameters in this equation .figure [ fig5 ] shows the close - approach curves generated with the spm simulator program both with and without the electrostatic force . here the radius of the tip is taken to be 20 nm , and the potential difference between the tip and sample is 5 v. as is well known and can be seen from the two curves , the van der waals interaction is dominant close to the surface , at a distance of less than a few nm , while the electrostatic interaction , being longer range , contributes to the attractive potential at larger distances .this forms the basis of two techniques to extract the electrostatic force . in the first technique , for each line of the scan , a forward and reverse scan is made under feedback at a setpoint corresponding to a distance from the surface at which the van der waals force is dominant .the resulting positions of the piezo scan tube reflect primarily the topography of the sample along the scan line .the spm controller then takes the scan tube out of feedback mode , raises the scan tube in the direction by a distance , the lift height , and retraces the forward and reverse traces while recording the frequency shift . 
since the system is no longer under feedback , should be larger than any topographic features in the sample .the idea behind this constant height technique is that for sufficiently large , the primary contribution to the resulting signal will be from electrostatic forces .one can also retrace the stored positions of the topographic trace at the added height while recording the frequency shift .the hope is that this `` lift mode '' technique will subtract the contribution from the signal due to topography , leaving only the signal due to the electrostatic force . of course, one can also measure other signals such as the phase or amplitude of the oscillation when the system is not in feedback , but we have chosen here to measure the frequency shift as it is conceptually easier to understand .the benefit of modeling the electrostatic interaction is that one can determine immediately what appropriate value of to use from the close - approach curves in fig .if we use a set point of 2 hz for the topographic image , it appears that a lift height of about 8 nm should give the largest electrostatic signal .m x 12 m , and the topographic height of each square is 5 nm .the colour bars are for the -scale ; for the topographic image the units are m , for the efm images the units are hz.,width=321 ] m x 12 m , and the topographic height of each square is 5 nm .the colour bars are for the -scale ; for the topographic image the units are m , for the efm images the units are hz.,width=321 ] in order to model an electrostatic sample , we use the same array of squares described earlier , but assume each alternate square in the array has a potential applied , with the tip and the other squares being grounded .this approximates the standard samples frequently used for efm calibration , which consist of interdigitated metallic fingers which have a potential applied only to every alternate finger , so that one can immediately distinguish between the topographic image and the electrostatic image .figure [ fig6 ] shows three dimensional representations of the resulting topographic and electrostatic images , using constant height mode in the dc efm module of rtspm . for these images, we used a setpoint for the topographic image of 2 hz ( referring to fig .[ fig5 ] ) , a voltage v , and a lift height of = 8 nm .it is clear from the figure that there is substantial leakage of the topographic image into the efm image .this is not surprising , since for a lift height of 8 nm , the height of the tip above each square is only 3 nm so the van der waals force would contribute considerably to the signal . and [ fig7 ] , but with a lift height of 20 nm .the lift mode and constant height mode scan were acquired separately .the colour bars are for the -scale ; for the topographic image the units are m , for the efm images the units are hz.,width=321 ] for comparison , fig . [ fig7 ] shows the corresponding image ( at the same lift height of 8 nm ) obtained using lift mode . while there is some bleeding of the topographic image into the efm image , it is quite small : the ratio of the signal between squares with and without a potential is about a factor of 20 . the maximum frequency shift in fig .[ fig7](b ) is about 300 mhz .clearly , lift mode is the preferable mode of operation when the sample has any appreciable topographical relief . 
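to get a rough feel for how the electrostatic signal falls off with tip - sample distance , the python sketch below evaluates the frequency shift from a sphere - plane force model at a few distances comparable to the lift heights used above . the force expression is the hudlet - style approximation as we understand it , and the parameters simply mirror those quoted earlier ( 20 nm tip radius , 5 v bias , resonance 32768 hz , spring constant 1800 n / m ) , so the output is an order - of - magnitude illustration rather than a reproduction of the figures .

....
# order-of-magnitude sketch of the electrostatic frequency shift versus
# tip-sample distance; the sphere-plane force is the Hudlet-style
# approximation as we read it, so treat the exact form as an assumption.
import numpy as np

EPS0 = 8.854e-12            # vacuum permittivity, F/m
F0, K0 = 32768.0, 1800.0    # tuning-fork resonance (Hz) and spring constant (N/m)
R, V = 20e-9, 5.0           # tip radius (m) and tip-sample voltage (V)

def force(z):
    """Attractive sphere-plane electrostatic force (N) at distance z (m)."""
    return -np.pi * EPS0 * V ** 2 * R ** 2 / (z * (z + R))

def delta_f(z):
    """Frequency shift (Hz) from the force gradient, delta_f = -(f0/2k0) dF/dz."""
    dz = 1e-12
    dfdz = (force(z + dz) - force(z - dz)) / (2 * dz)
    return -F0 / (2 * K0) * dfdz

for z_nm in (8, 20, 40):
    print(f"z = {z_nm:3d} nm : delta_f = {1e3 * delta_f(z_nm * 1e-9):8.1f} mHz")
....

even this crude estimate shows the slow , power - law fall - off of the electrostatic contribution compared with the van der waals term , which is why a modest increase in lift height suppresses the topographic leakage without losing most of the efm signal .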
to eliminate any topographic signal in the efm image, one can use a larger lift height .figure [ fig8 ] shows the lift mode and constant height mode efm images for a lift height of 20 nm .while the constant height mode image still shows some hint of the squares without any potential , the lift mode image shows no hint of the topography , but only the electrostatic profile .the overall lift mode signal is reduced in comparison to fig .[ fig7 ] , but not by much ( 80 mhz peak frequency shift ) , a reflection of the long range nature of the electrostatic force . in summary, we have developed a real - time software simulator that models the response of a scanning probe microscope .the simulator is useful for testing scanning probe control software as well as different models for tip sample interactions .text the software is availably freely as a git repository at git://github.com / chandranorth / tfsimulator.git .v. chandrasekhar and m.m .mehta , rev .instrum . *84 * , 013705 ( 2013 ) ._ rtai - the realtime application interface for linux from diapm _ , https://www.rtai.org ._ comedi : linux control and measurement device interface _ , http://www.comedi.org ._ code::blocks _ , http://www.codeblocks.org ._ free pascal : advanced open source pascal compiler for pascal and object pascal _ , http://www.freepascal.org ._ lazarus _ , http://www.lazarus.freepascal.org .y. seo , p. cadden - zimansky , and v. chandrasekhar , applied physics letters * 87 * , 103103 ( 2005 ) ._ gwyddion : free spm data analysis software _ , http://www.gwyddion.net. s. hudlet , m. saint jean , c. guthmann and j. berger , eur .j. b * 2 * , 5 ( 1998 ) .m. saint jean , s. hudlet , c. guthmann and j. berger , j. appl . phys . * 86 * , 5245 ( 1999 ) .s. gmez - moivas , j.j .senz , r. carminati and j.j .greffet , appl .. lett . * 76 * , 2955 ( 2000 ) .y. naitou , a. yasaka and n. ookubo , j. appl . phys . * 105 * , 044311 ( 2009 ) .see , for example , w.r .smythe , _ static and dynamic electricity _ , 2nd edition , page 121 [ mcgraw - hill , new york , 1950 ] . | we describe software that simulates the hardware of a scanning force microscope . the essential feature of the software is its real - time response , which is critical for mimicking the behavior of real scanning probe hardware . the simulator runs on an open - source real time linux kernel , and can be used to test scanning probe microscope control software as well as theoretical models of different types of scanning probe microscopes . we describe the implementation of a tuning - fork based atomic force microscope and a dc electrostatic force microscope , and present representative images obtained from these models . obtaining and interpreting images with a scanning probe microscope is a complicated task , even more so in the case of completely home - built microscopes with both custom electronics and custom software . this is because of the integral relationship between software and hardware in scanning probe microscopes , particularly in the case of more recent instruments , where many of the functions previously performed by analog electronics are now performed in software , either on the control computer itself or on a separate digital signal processor . during development , in the frequent case that something goes wrong , it is not easy to identify whether the problem lies with the software or the hardware . thus , it would be useful to have a diagnostic tool to test whether the control software is indeed performing as designed . 
the heart of such a software simulator is in principle very simple : one needs only to convert the position of the tip ( as output by the scanning probe control program , perhaps by means of a voltage ) to a voltage corresponding to a feedback signal that can be read back by the control program . this conversion occurs according to some model of the tip - sample interaction , which in turn depends on the type of scanning probe microscopy , be it atomic force microscopy ( afm ) , scanning tunneling microscopy ( stm ) , magnetic force microscopy ( mfm ) or electrostatic force microscopy ( efm ) , to name a few . the difficult part is to do this in a time - critical manner ; the response of the simulator must be as fast or faster than typical scanning probe hardware , which means the response must be in fractions of a millisecond , and the response must be deterministic , in that response loops must be executed when expected . this is usually beyond the capabilities of standard desktop computers running modern operating systems , as these operating systems have preemptive multitasking , making their response times at best on the order of 10 ms , and more critically , not deterministic . the software we describe here is based on an open - source , real - time version of linux running on a standard desktop computer that enables loop times as short as 25 ( more reliably around 50 ) , corresponding to an upper limit frequency response on the order of 20 khz , fast enough to model most commercial and home - built scanning probe electronics . another major advantage of such a software simulator is that one is able to easily test different models of tip - sample interactions as well as the influence of various model parameters on the resulting images . the remainder of the paper is organized as follows : section i gives an overview of the software and hardware development platform . section ii describes the model used to implement an atomic force microscope ( afm ) , and section iii describes the model used to implement an electrostatic force microscope . these are two modes that are important for our own research , and we emphasize that many other types of interactions can also be implemented relatively easily if appropriate theoretical models are available . |
a signal can be transformed from one domain into another in various ways . some signals that cover the whole considered interval in one domain ( where signals are dense in that domain )could be located within much smaller regions in another domain .we say that signals are sparse in a transformation domain if the number of nonzero coefficients is much smaller that the total number of signal samples .for example , a sum of discrete - time complex sinusoidal signals , with a number of components being much lower than the number of signal samples in the time domain , is a sparse signal in the discrete fourier transform ( dft ) domain .sparse signals could be reconstructed from much fewer samples than the sampling theorem requires .compressive sensing is a field dealing with the problem of signal recovery from a reduced set of samples - . as a study case , in this paper we will consider signals that are sparse in the fourier transform domain .signal sparsity in the discrete fourier domain imposes some restrictions on the signal . reducing the number of samples in the analysis manifests as a noise , whose propertiesare studied in and used in to define a reconstruction algorithm .the input noise influence is also an important topic in this analysis since the reduced number of available samples could increase the sensitivity of the recovery results to this noise .additive noise will remain in the resulting transform .however , if a reconstruction algorithm for a sparse signal is used in the reconstruction of nonsparse signal then the noise , due to missing samples , will remain and behave as an additive input noise . a relation for the mean square error of this erroris derived for the partial dft matrix case .it takes into account very important fact that if all samples are available then the error will be zero , for both sparse and nonsparse recovered signals .theory is illustrated and checked on statistical examples .the paper is organised as follows : after the introduction part in section 1 , the definition of sparsity is presented in section 2 . in section 3 ,the reconstruction algorithm is presented for both one step reconstruction and the iterative way . also in section 3, the analysis of the influence of additive noise will be expanded .the reconstruction of nonsparse signals with additive noise is shown in section 4 . in the appendixthe conditions in which the reconstruction of sparse signals is possible in general are presented .consider a signal and its transformation domain coefficients , or where is the transformation matrix with elements , is the signal vector column , and is the transformation coefficients vector column . for the dft .a signal is sparse in the transformation domain if the number of nonzero transform coefficients is much lower than the number of the original signal samples , fig .[ posterslika ] , i.e. , if for the number of nonzero samples is where and is the notation for the number of nonzero transformation coefficients in . counting the nonzero coefficients in a signal representation can be achieved by using the so called -norm denoted by .this form is referred to as the -norm ( norm - zero ) although it does not satisfy norm properties . 
by definition for and .a signal , whose transformation coefficients are , is sparse in this transformation domain if for linear signal transforms the signal can be written as a linear combination of the sparse domain coefficients a signal sample can be considered as a linear combination ( measurement ) of values .assume that samples of are available only at a reduced set of random positions here is the set of all samples of a signal and is its random subset with elements , .the available signal values are denoted by vector , fig.[posterslika ] , ^{t}.\ ] ] [ ptb ] posterslika.eps the available samples ( measurements of a linear combination of ) defined by ( [ measdft ] ) , for , can be written as a system of equations {c}x(n_{1})\\ x(n_{2})\\ ... \\ x(n_{m } ) \end{array } \right ] = \left [ \begin{array } [ c]{cccc}\psi_{0}(n_{1 } ) & \psi_{1}(n_{1 } ) & & \psi_{n-1}(n_{1})\\ \psi_{0}(n_{2 } ) & \psi_{1}(n_{2 } ) & & \psi_{n-1}(n_{2})\\ ... & ... & & ... \\ \psi_{0}(n_{m } ) & \psi_{1}(n_{m } ) & & \psi_{n-1}(n_{m } ) \end{array } \right ] \left [ \begin{array } [ c]{c}x(0)\\ x(0)\\ ... \\x(n-1 ) \end{array } \right]\ ] ] or where is the matrix of measurements / observations / available signal samples .the fact that the signal is sparse with for is not included in the measurement matrix since the positions of the nonzero values are unknown . if the knowledge that for were included then a reduced observation matrix would be obtained as {c}x(n_{1})\\ x(n_{2})\\ ... \\ x(n_{m } ) \end{array } \right ] = \left [ \begin{array } [ c]{cccc}\psi_{k_{1}}(n_{1 } ) & \psi_{k_{2}}(n_{1 } ) & & \psi_{k_{k}}(n_{1})\\ \psi_{k_{1}}(n_{2 } ) & \psi_{k_{2}}(n_{2 } ) & & \psi_{k_{k}}(n_{2})\\ ... & ... & & ... \\ \psi_{k_{1}}(n_{m } ) & \psi_{k_{2}}(n_{m } ) & & \psi_{k_{k}}(n_{m } ) \end{array } \right ] \left [ \begin{array } [ c]{c}x(k_{1})\\ x(k_{2})\\ ... \\ x(k_{k } ) \end{array } \right]\ ] ] or matrix would be formed if we knew the positions of nonzero samples .it would follow from the measurement matrix by omitting the columns corresponding to the zero - valued coefficients . assuming that there are nonzero coefficients , out of the total number of values , the total number of possible different matrices is equal to the number of combinations with out of .it is equal to .although the -norm can not be used in the direct minimization , the algorithms based on the assumption that some coefficients are equal to zero , and the minimization of the number of remaining nonzero coefficients that can reconstruct sparse signal , may efficiently be used .the reconstruction process can be formulated as finding the positions and the values of nonzero coefficients of a sparse signal ( or all signal values ) using a reduced set of signal values , such that where .consider a discrete - time signal .signal is sparse in a transformation domain defined by the basis functions set , .the number of nonzero transform coefficients is much lower than the number of the original signal samples ,i.e. , for .a signal of sparsity can be reconstructed from samples , where .in the case of signal which is sparse in the transformation domain there are nonzero unknown values , , ... , . other transform coefficients , for * * , ** are zero - valued . just forthe beginning assume that the transformation coefficient positions , , ... 
, are known .then the minimal number of equations to find the unknown coefficients ( and to calculate signal for any ) is .the equations are written for at least time instants , , where the signal is available / measured , in a matrix form this system of equations is where is the vector of unknown nonzero coefficients values ( at the known positions ) and is the vector of available signal samples , ^{t}\\ \mathbf{y}=[x(n_{1})~~x(n_{2})~~ ... ~~x(n_{m})]^{t}\nonumber\\ \mathbf{a}_{k}=\left [ \begin{array } [ c]{cccc}\psi_{k_{1}}(n_{1 } ) & \psi_{k_{2}}(n_{1 } ) & ... & \psi_{k_{k}}(n_{1})\\ \psi_{k_{1}}(n_{2 } ) & \psi_{k_{2}}(n_{2 } ) & ... & \psi_{k_{k}}(n_{2})\\ ... & ... & ... & .... \\ \psi_{k_{1}}(n_{k } ) & \psi_{k_{2}}(n_{k } ) & ... & \psi_{k_{k}}(n_{k } ) \end{array } \right ] .\label{martr_sampl}\ ] ] matrix is the measurements matrix with the columns corresponding to the zero - valued transform coefficients , , ... , being excluded . for a given set the coefficients reconstruction condition can be easily formulated as the condition that system ( [ susmat ] ) has a ( unique ) solution , i.e. , that there are independent equations , note that this condition does not guarantee that another set can also have a ( unique ) solution , for the same set of available samples .it requires for any submatrix of the measurements matrix system ( [ sist_rj ] ) is used with .its solution , in the mean squared sense , follows from the minimization of the difference of the available signal values and the values produced by inverse transform of the reconstructed coefficients , where or where exponent denotes the hermitian conjugate .the derivative of over a specific coefficient , , is \psi_{p}^{\ast}(n).\ ] ] the minimum of quadratic form error is reached for when in matrix form this system of equations reads its solution is it can be obtained by a symbolic vector derivation of ( [ e_vec_h ] ) as if we do not know the positions of the nonzero values for , , ... , then all possible combinations of , , ... , should be tested .there are of them .it is not a computationally feasible problem .thus we must try to find a method to estimate , , ... , in order to recover values of .solution of the minimization problem , assuming that the positions of the nonzero signal coefficients in the sparse domain are known , is presented in the previous subsection .the next step is to estimate the coefficient positions , using the available samples .a simple way is to try to estimate the positions based on signal samples that are available , ignoring unavailable samples .this kind of transform estimate is where for the dft and . since relation can be written as where is the measurement matrix . 
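the known-positions least-squares step above can be sketched as follows (an illustrative numpy snippet with assumed sizes and positions; the pseudo-inverse call stands in for the explicit product of the hermitian-transposed reduced matrix with the measurements, and the 1/N factor matches the inverse-dft basis functions assumed here):

```python
import numpy as np

N, M = 64, 24                                      # signal length, available samples (assumed)
rng = np.random.default_rng(0)
k_pos = np.array([3, 11, 27])                      # nonzero DFT positions, assumed known
X_true = np.zeros(N, complex)
X_true[k_pos] = [64.0, 44.8, 25.6]                 # arbitrary nonzero coefficient values
x = np.fft.ifft(X_true)                            # full-length time-domain signal
avail = np.sort(rng.choice(N, M, replace=False))   # instants where the signal is available
y = x[avail]                                       # measurements

# reduced measurement matrix A_K: rows = available instants, columns = known positions
A_K = np.exp(2j * np.pi * np.outer(avail, k_pos) / N) / N
X_K = np.linalg.pinv(A_K) @ y                      # least-squares solution
print(np.allclose(X_K, X_true[k_pos]))             # exact recovery at the known positions
```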
with coefficients , calculated with samples , are random variables .note that using ( [ ms_summ ] ) in calculation is the same as assuming that the values of unavailable samples , , is zero .this kind of calculation corresponds to the result that would be achieved for the signal transform if -norm is used in minimization .* algorithm * a simple and computationally efficient algorithm , for signal recovery , can now be implemented as follows : \(i ) calculate the initial transform estimate by using the available / remaining signal values \(ii ) set the transform values to zero at all positions except the highest ones .alternative : \(ii ) set the transform values to zero at all positions where this initial estimate is below a threshold , this criterion is not sensitive to as far as all nonzero positions of the original transform are detected ( is above the threshold ) and the total number of transform values in above the threshold is lower than the number of available samples , i.e. , . all transform values that are zero in the original signal will be found as zero - valued .\(iii ) the unknown nonzero ( including zero - valued ) transform coefficients could be then easily calculated by solving the set of equations for available instants , at the detected nonzero candidate positions , , this system of the form is now reduced to the problem with known positions of nonzero coefficients ( considered in the previous subsection ) .it is solved in the least square sense as ( [ rjesenje ] ) the reconstructed coefficients , , ( denoted by vector ) are exact , for all frequencies . if some transform coefficients , whose true value should be zero , are included ( when ) the resulting system will produce their correct ( zero ) values . *comments : * in general , a simple strategy can be used by assuming that and by setting to zero value only the smallest transform coefficients in . system ( [ sist_rj ] ) is then a system of linear equations with unknown transform values . if the algorithm fails to detect a component the procedure can be repeated after the detected components are reconstructed and removed .this simple strategy is very efficient if there is no input noise .large , close or equal to , will increase the probability that full signal recovery is achieved in one step . 
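a toy implementation of this one-step procedure is sketched below; the helper name `one_step_recovery` and all parameter values are illustrative assumptions, not the authors' code, and exact recovery is obtained only when the detected positions coincide with the true support:

```python
import numpy as np

def one_step_recovery(y, avail, N, K):
    """Toy one-step recovery: initial DFT estimate from the available samples,
    keep the K strongest positions, then least squares at those positions."""
    # initial estimate: equivalent to a DFT with the unavailable samples set to zero
    X0 = np.array([np.sum(y * np.exp(-2j * np.pi * k * avail / N)) for k in range(N)])
    k_hat = np.argsort(np.abs(X0))[-K:]                      # detected support candidates
    A = np.exp(2j * np.pi * np.outer(avail, k_hat) / N) / N  # reduced measurement matrix
    X = np.zeros(N, complex)
    X[k_hat] = np.linalg.pinv(A) @ y                         # least-squares coefficient values
    return X

# usage with x, avail, y and X_true from the previous sketch:
# X_rec = one_step_recovery(y, avail, N, K=3)
# np.allclose(X_rec, X_true)   # True whenever the three true positions are detected
```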
in this paper, it will be shown that in the case of an additive ( even small ) input noise in all signal samples , a reduction of the number as close to the true signal sparsity as possible will improve the signal to noise ratio ._ example : _ consider a discrete signal for , sparse in the dft domain since only three dft values are different than zero .assume now that its samples , , , and are not available .we will show that , in this case , the exact dft reconstruction may be achieved by : \(i ) calculating the initial dft estimate by setting unavailable sample values to zero where \(ii ) detecting , for example positions of maximal dft values , , , and , and ( 3 ) calculating the reconstructed dft values at , , and from system where are the instants where the signal is available .the discrete - time signal , with is shown in fig .[ recon_expl_cs ] .the signal is sparse in the dft domain since only three dft values are different than zero ( fig .[ recon_expl_cs ] ( second row ) ) .the cs signal , with missing samples , , , and being set to for the initial dft estimation , is shown in fig .[ recon_expl_cs ] ( third row ) .the dft of the signal , with missing values being set to is calculated and presented in fig .[ recon_expl_cs ] ( fourth row ) .there are three dft values , at , , and above the assumed threshold , for example , at level of .the rest of the dft values is set to .this is justified by using the assumption that the signal is sparse .now , we form a set of equations , for these frequencies , , and as where are the instants where the signal is available . since there are more equations than unknowns , the system is solved using .the obtained reconstructed values are exact , for all frequencies , as in fig .[ recon_expl_cs ] ( second row ) .they are shown in fig .[ recon_expl_cs ] ( fifth row ) .if the threshold was lower , for example at , then six dft values at positions are above the assumed threshold .the system with six unknowns where will produce the same values for , , and while the values will be obtained .if the threshold is high to include the strongest signal component only , then the solution is obtained through an iterative procedure described in the next subsection .[ ptb ] recon_expl_cs.eps if components with very different amplitudes exist and the number of available samples is not large , then the iterative procedure should be used .this procedure could be implemented as follows .the largest component is detected and estimated first .it is subtracted from the signal .the next one is detected and the signal is estimated using the frequency from this and the previous step(s ) .the estimated two components are subtracted from the original signal .the frequency of next components is detected , and the process of estimations and subtractions is continued until the energy of the remaining signal is negligible or bellow an expected additive noise level . *algorithm * \(i ) calculate the initial transform estimate by using the available / remaining signal values set the transform values to zero at all positions except the highest one at , . set the counter to form the matrix using the available samples in time and detected index with one nonzero component .calculate the estimate of the transformation coefficient at calculate the signal estimation ( as the inverse dft) and check if , for example , stop the calculation and use . 
if not then go to the next step .\(ii ) set the counter to .form a signal at the available sample positions and calculate the transform set the transform values to zero at all positions except the highest one at .form the set of indices , using union of the previous maxima positions and the detected position , as form matrix using the available samples in time and detected indices .calculate the estimate of transformation coefficients calculate the signal and check if , for example , stop the calculation and use else repeat step ( ii ) ._ example : _signal with is shown in fig.[iterativereport_knjiga ] .small number of samples is available with different signal amplitudes , making one - step recovery impossible .the available signal samples are shown in fig.[iterativereport_knjiga ] ( second row , left ) . the iterative procedure is used and , for the detected dft positions during the iterations , the recovered signal is calculated according to the presented algorithm .the recovered dft values in the iteration are denoted as and presented in fig.[iterativereport_knjiga ] .after first iteration the strongest component is detected and its amplitude is estimated . at this stage ,other components behave as noise and make amplitude value inaccurate .accuracy improves as the number of detected components increases in next iterations .after five steps the agreement between the reconstructed signal and the available signal samples was complete .then the algorithm is stopped .the dft of the recovered signal is presented as in the last subplot of fig.[iterativereport_knjiga ] .its agreement with the dft of the original signal , fig.[iterativereport_knjiga ] ( first row , right ) is complete .[ ptb ] iterativereport_knjiga.eps the initial dft calculation in reconstruction algorithms is done assuming zero - valued missing samples .the initial calculation quality has a crucial importance to the successful signal recovery . with a large number ofrandomly positioned missing samples , the missing samples manifest as a noise in this initial transform . once the reconstruction conditions are met for a sparse signal and the exact reconstruction is achieved , the noise due to missing samples does not influence the results in a direct way .it influences the possibility to recover a signal .the accuracy of the recovery results is related to the additive input noise only .the input noise is transformed by the recovery algorithm into a new noise depending on the signal sparsity and the number of available samples .a simple analysis of this form of noise will be presented in the second part of this section .consider a sparse signal in the dft domain with nonzero coefficients at the positions where are the signal component amplitudes .the initial dft is calculated using we can distinguish two cases : \(1 ) for then , with , the value of with random set , for , can be considered as a random variable .its mean over different realizations of available samples ( different realizations of sets ) is .the mean value of is \(2 ) for the mean value of ( [ ms_summm ] ) is the mean value of ( [ ms_summm ] ) for any is of the form its variance is \text{. 
} \label{sign}\ ] ] the ratio of the signal amplitude and the standard deviation for is crucial parameter ( welsh bound for coherence index of measurement matrix ) for correct signal detection .its value is note that the variance in a multicomponent signal with is a sum of the variances of individual components at all frequencies except at when the values are lower for since all component values are added up in phase , without random variations . according to the central limit theorem , for the real and imaginary parts of the dft value for noise only positions be described by gaussian distribution , with zero - mean and variance .real and imaginary parts of the dft value , at the signal component position , can be described by the gaussian distributions respectively , where , according to ( [ sign ] ) ._ example : _ for a discrete - time signal with , , , , the dft is calculated using a random set of samples . calculation is performed with random realizations with randomly positioned samples and random values of and .histogram of the dft values , at a noise only position and at the signal component position , is presented in fig.[hist_miss_samp ] ( left ) .histogram of the dft real part is shown , along with the corresponding gaussian functions and , shown by green dots . the same calculation is repeated with , fig.[hist_miss_samp ] ( right ) .we can see that the mean value of the gaussian variable can be used for the detection of the signal component position .also the variance is different for noise only and the signal component positions .it can also be used for the signal position detection . in the case with , the histograms are close to each other , meaning that there is a probability that a signal component is missdetected .histograms are well separated in the case when .it means that the signal component will be detected with an extremely high probability in this case .calculation of the detection probability is straightforward with the assumed probability density functions .[ ptb ] hist_miss_samp.eps the spark based relation can be obtained within the framework of the previous analysis if we assume that the noises ( [ noise_epsee ] ) due to missing samples coming from different components of the same ( unity ) amplitude are added up ( equal amplitudes are the worst case for this kind of analysis ) with the same phase to produce , at some frequency . random variable ( since is random ) should also assume its maximal possible value ( calculated over all possible and all possible positions , ) .the maximal possible value of this variable is related to the coherence index of the partial dft matrix defined as it means that maximal possible value of this variable is .it should also be assumed that remaining noise components ( due to missing samples ) at the component position assume the same maximal value and that all of them subtract in phase from the signal mean value at .condition for the correct detection of a component position at is then such that the minimal possible amplitude of the component is greater than the maximal possible noise at , i.e. , or where is the spark of the measurement matrix ( spark of matrix is defined as the smallest number of dependent columns or rows ) . 
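the gap between such worst-case (coherence or spark based) arguments and the typical behaviour of a randomly sampled partial dft matrix can be checked numerically; the sketch below, with assumed illustrative sizes, compares the coherence, i.e. the largest normalized off-peak column cross-correlation, with an average off-peak value:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 128, 32                                    # illustrative sizes
avail = np.sort(rng.choice(N, M, replace=False))

# columns of the partial DFT matrix: DFT basis functions restricted to the
# available instants; entry (k, l) of A^H A equals the initial-estimate value
# produced at bin k by a unit-amplitude component at bin l (before normalization)
A = np.exp(2j * np.pi * np.outer(avail, np.arange(N)) / N)
G = np.abs(A.conj().T @ A) / M                    # normalized column cross-correlations
np.fill_diagonal(G, 0.0)

print("coherence (worst off-peak value): %.3f" % G.max())
print("average off-peak value:           %.3f" % G.mean())
# the worst case used by coherence/spark arguments is far above a typical off-peak
# value, which is one way to see why such bounds are pessimistic for random sampling
```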
according to many very unlikely assumptions that has been made , we can state that this is a very pessimistic bound for .therefore , for a high degree of randomness , a probabilistic approach may be more suitable for the analysis than the spark based relation .assume an additive noise in the input signal .in a matrix form this system of linear equations with unknowns reads the solution follows for where and are the reconstructed signal and noise components respectively .assume that the reconstruction conditions are met and the positions of nonzero coefficients , , ... , can be determined through a single step or iterative procedure .the equations to find the unknown coefficients are written for time instants , where ^{t}]the matrix is the measurements matrix with the columns corresponding to the zero - valued transform coefficients , , ... , being excluded . for a givenset the coefficients reconstruction condition can be easily calculated as where and is the noise influence to the reconstructed signal coefficients .the input signal - to - noise ( snr ) ratio , if all signal samples were available , is assume the noise energy in available samples used in the reconstruction is correct amplitude in the signal transform at the frequency , in the case if all signal samples were used , would be . to compensate the resulting transform for the known bias in amplitude when only samples are used we should multiply the coefficient by .it means that is a full recovery , a signal transform coefficient should correspond to the coefficient of the original signal with all signal samples being used .the noise in the transform coefficients will also be multiplied by the same factor .therefore , its energy would be increased to .the signal - to - noise ratio in the recovered signal would be if the distribution of noise in the samples used for reconstruction is the same as in other signal samples then and therefore , a signal reconstruction that would be based on the initial estimate ( [ ms_sum ] ) would worsen snr , since .since only out of dft coefficients are used in the reconstruction the energy of the reconstruction error is reduced for the factor of as well .therefore , the energy of noise in the reconstructed signal is the output signal to noise ratio in the reconstructed signal is it is related to the input signal to noise ration as _ example : _ theory is illustrated on a four component noisy signal as well , where , , , and .the signal is reconstructed using iterative calculation to find nonzero coefficients , , ... , and ( [ rec_rel ] ) to find the signal .the results are presented in the table [ tab1 ] .the agreement of the numerical statistical results with this simple theory in analysis of noise influence to the reconstruction of sparse signals is high for all considered .[ c]||crrrr||snr in [ db ] & & & & + & 3.5360 & 3.5326 & 3.5788 & 3.5385 + & 18.5953 & 19.5644 & 20.3562 & 21.0257 + & 18.7203 & 19.5139 & 20.2869 & 21.7302 +according to the results in previous section , the missing samples can be represented by a noise influence .assume that we use a reconstruction algorithm for a signal of sparsity on a signal whose dft coefficients are not sparse ( or not sufficiently sparse ) .denote by the sparse signal with nonzero coefficients equal to the largest coefficients of .suppose that the number of components and the measurements matrix satisfy the reconstruction conditions so that a reconstruction algorithm can detect ( one by one or at once ) largest components ( , , ... 
) and perform signal reconstruction to get .the remaining components ( ,, ..., ) will be treated as noise in these largest components .variance of a signal component is after reconstruction the variance is the total energy of noise in the reconstructed largest components will be denoting the energy of remaining signal , when the largest are removed from the original signal , by we get if the signal is sparse , i.e. , , then the same result follows if that is , the error will be zero if a complete dft matrix is used in calculation . using schwartz inequality follows it means that if is minimized then the upper bound of the error is also minimized .based on the previous results we can easily get the following result .consider a signal , with transformation coefficients and unknown sparsity , including the case when the signal is not sparse .assume that time domain signal samples , corrupted with additive white noise with variance , are available .the signal is reconstructed assuming that its sparsity is .denote the reconstructed signal by , set of its nonzero positions by , and the corresponding original signal transform by where for and for .the total error in the reconstructed signal , with respect to the original signal at the same nonzero coefficient positions , is _ example : _ consider a nonsparse signal where are random frequency indices from to . using and the first components of signal are reconstructed .the remaining signal components are considered as disturbance .reconstruction of largest components is done in independent realizations with different frequencies and positions of available samples .the result for in the noise free case , obtained statistically and by using the theorem , is note that the calculation of is simple since we assumed that the amplitudes of disturbing components are coefficients of a geometric series .one realization with is presented in fig .[ input_noise_stat_snr_noise ] .the case when is presented in fig .[ input_noise_stat_snr_noise_k10 ] .red signal ( with dots ) represents the reconstructed signal with assumed sparsity and the signal with black crosses represents the original nonsparse signal .[ ptb ] input_noise_stat_snr_noise.eps [ ptb ] input_noise_stat_snr_noise_k10.eps in the case of additive complex - valued noise of variance the results are the decrease in the snr due to noise is the simulation is repeated with and the same noise .the snr values are and goal of compressive sensing is to reconstruct a sparse signal using a reduced set of available samples .it can be done by minimizing the sparsity measure and available samples .a simple algorithm for signal reconstruction is presented .one step reconstruction and an iterative procedure of the reconstruction algorithm are given .noisy environment is taken into account as well .the input noise can degrade the reconstruction limit .however , as far as the reconstruction is possible , the noise caused by missing samples manifests its influence to the results accuracy in simple and direct way through the number of missing samples and signal sparsity .the accuracy of the final result is related to the input noise intensity , number of available samples and the signal sparsity .a theorem presenting error in the case when the reconstruction algorithm defined for reconstruction of sparse signals are used in for nonsparse signals reconstruction is defined as well . 
the theory is checked and illustrated on numerical examples .consider an -dimensional vector whose sparsity is and its measurements , where the measurements matrix is an matrix , with .a reconstruction vector can be achieved from a reduced set of samples / measurements using the sparsity measures minimization .the -norm based solution of constrained sparsity measure minimization is the same as the -norm based solution of if the measurements matrix satisfies the restricted isometry property for a sparse vector with a sufficiently small .constant is the energy of columns of measurement matrix . for normalized energy , while for the measurement matrix obtained using rows of the standard dft matrix .if the signal is not sparse then the solution of minimization problem ( [ minl1 ] ) denoted by will satisfy where is sparse signal corresponding to largest values of .if the signal is of sparsity then and . in the case of noisy measurementswhen then where and are constants depending on .for example , with constants are and , .e. j. cands , j. romberg , and t. tao , robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , _ ieee transactions on information theory _ , vol .52 , no . 2 ,489509 , 2006 .r. e. carrillo , k. e. barner , and t. c. aysal , robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise , _ ieee journal of selected topics in signal processing _, 2010 , 4(2 ) , pp .392408 .m. a. figueiredo , r. d. nowak , and s. j. wright , gradient projection for sparse reconstruction : application to compressed sensing and other inverse problems,_ieee journal of selected topics in signal processing , _ , vol . 1 , no . 4 , pp .586597 , 2007 .b. turlach , on algorithms for solving least squares problems under an l1 penalty or an l1 constraint , _ proc . of the american statistical association ; statistical computing section _ , pp . 25722577 , alexandria , va , 2005 .y. d. zhang and m. g. amin , compressive sensing in nonstationary array processing using bilinear transforms , " in proc. _ ieee sensor array and multichannel signal processing workshop _ , hoboken , nj , june 2012 .l. stankovi , s. stankovi , and m. g. amin , missing samples analysis in signals for applications to l - estimation and compressive sensing , _ signal processing _ , elsevier , volume 94 , jan .2014 , pages 401408 .e. sejdi , a. cam , l. f. chaparro , c. m. steele , and t. chau , compressive sampling of swallowing accelerometry signals using tf dictionaries based on modulated discrete prolate spheroidal sequences , _eurasip journal on advances in signal processing _ , 2012:101 doi:10.1186/168761802012101 .i. daubechies , m. defrise , and c. de mol , an iterative thresholding algorithm for linear inverse problems with a sparsity constraint , _ communications on pure and applied mathematics _ , vol .57 , no . 11 , pp . 14131457 , 2004 .l. stankovi , m. dakovi , s. vujovi , adaptive variable step algorithm for missing samples recovery in sparse signals , _ iet signal processing _ ,vol.8 , no.3 , 2014 , pp.246256 , doi : 10.1049/iet - spr.2013.0385 .s. stankovi , i. orovi , and l. stankovi , `` an automated signal reconstruction method based on analysis of compressive sensed signals in noisy environment '' , _ signal processing _ , elsevier , volume 94 , in print . | signals sparse in a transformation domain can be recovered from a reduced set of randomly positioned samples by using compressive sensing algorithms . 
simple reconstruction algorithms are presented in the first part of the paper . the missing samples manifest themselves as noise in this reconstruction . once the reconstruction conditions for a sparse signal are met and the reconstruction is achieved , the noise due to missing samples does not influence the results in a direct way ; it only influences the possibility of recovering the signal . additive input noise will remain in the resulting reconstructed signal , and the accuracy of the recovery results is related to this additive input noise . a simple derivation of this relation is presented . if a reconstruction algorithm for a sparse signal is used in the reconstruction of a nonsparse signal , then the noise due to missing samples will remain and behave as an additive input noise . an exact relation for the mean square error of this reconstruction is derived for the partial dft matrix case in this paper and presented in the form of a theorem . it takes into account the important fact that if all samples are available then the error is zero , for both sparse and nonsparse recovered signals . the theory is illustrated and checked on statistical examples . |
energy efficiency is of paramount importance for future communication networks and is a main design target for all 5 g radio access solutions .it refers to an efficient utilization of the available energy and consequently extends the network lifetime and/or reduces the operation cost .specifically , conventional battery - powered communication systems suffer from short lifetime and require periodic replacement or recharging in order to maintain network connectivity .on the other hand , communication systems that are supported by a continuous power supply such as cellular networks require a power grid infrastructure and may result in large energy consumption that will further increase due to the increasing growth of data traffic .the investigation of energy - aware architectures as well as transmission techniques / protocols that prolong the lifetime of the networks or provide significant energy savings has been a hot research area over several years , often under the umbrella of the green radio / communications .due to the limited supply of non - renewable energy resources , recently , there is a lot of interest to integrate the energy harvesting ( eh ) technology to power communication networks .energy harvesting is a new paradigm and allows nodes to harvest energy from natural resources ( i.e. , solar energy , wind , mechanical vibrations etc . ) in order to maintain their operation .related literature concerns the optimization of different network utility functions under various assumptions on the knowledge of the energy profiles .the works in assume that the eh profile is perfectly known at the transmitters and investigate optimal resource allocation techniques for different objective functions and network configurations .on the other hand , the works in adopt a more networking point of view and maximize the performance in terms of stability region by assuming only statistical knowledge of the eh profile .although energy harvesting from natural resources is a promising technology towards fully autonomous and self - sustainable communication networks , it is mainly unstable ( i.e. , weather - dependent ) and thus less efficient for applications with critical quality - of - service ( qos ) requirements .an interesting solution that overcomes the above limitation is to harvest energy from man - made electromagnetic radiation . despite the pioneering work of tesla , who experimentally demonstrated wireless energy transfer ( wet ) in late 19th century , modern wireless communication systems mainly focus on the information content of the radio - frequency ( rf ) radiation , neglecting the energy transported by the signals .recently , there is a lot of interest to exploit rf radiation from energy harvesting perspective and use wireless energy transfer in order to power communication devices .the fundamental block for the implementation of this technology is the rectifying - antenna ( rectenna ) which is a diode - based circuit that converts the rf signals to dc voltage .several rectenna architectures and designs have been proposed in the literature for different systems and frequency bands .an interesting rectenna architecture is reported in , where the authors study a rectenna array in order to further boost the harvesting efficiency .although information theoretic studies ideally assume that a receiver is able to decode information and harvest energy independently from the same signal , this approach is not feasible due to practical limitations . 
in the seminal work in ,the authors introduce two practical rf energy harvesting mechanisms for `` simultaneous '' information and energy transfer : a ) time switching ( ts ) where dedicated time slots are used either for information transfer or energy harvesting , b ) power splitting ( ps ) where one part of the received signal is used for information decoding , while the other part is used for rf energy harvesting .the employment of the above two practical approaches in different fundamental network structures , is a hot research topic and several recent works appear in the literature . in ,the authors study the problem of beamforming design for a point - to - point multiple - input multiple - output ( mimo ) channel and characterize the rate - energy region for both ts and ps techniques .this work is extended in for the case of an imperfect channel information at the transmitter by using robust optimization tools .the work in investigates the optimal ps rule for a single - input single - output ( siso ) channel in order to achieve different trade - offs between ergodic capacity and average harvested energy .an interesting problem is discussed in , where the downlink of an access point broadcasts energy to several users , which then use the harvested energy for time - division multiple access ( tdma ) uplink transmissions . in ,the authors study a fundamental multi - user multiple - input single - output ( miso ) channel where the single - antenna receivers are characterized by both qos and ps - eh constraints . on the other hand ,cooperative / relay networks is a promising application area for rf energy harvesting , since relay nodes have mainly limited energy reserves and may require external energy assistance .the works in deal with the integration of both ts and ps techniques in various cooperative topologies with / without batteries for energy storage .the simultaneous information / energy transfer for a mimo relay channel with a separated energy harvesting receiver is discussed in .although several studies deal with the analysis of communication networks with rf energy harvesting capabilities , most of existing work refers to specific ( fixed ) single / multiple user network configurations . since harvesting efficiency is associated with the interference and thus the geometric distance between nodes , a fundamental question is to study rf energy harvesting for large - scale networks by taking into account random node locations .stochastic - geometry is a useful theoretical tool in order to model the geometric characteristics of a large - scale network and derive its statistical properties .several works in the literature adopt stochastic - geometry in order to analyze the outage probability performance or the transmission capacity for different conventional ( without harvesting capabilities ) networks e.g. , .large - scale networks with energy harvesting capabilities are studied in for different network topologies and performance metrics .these works model the energy harvesting operation as a stochastic process and mainly refer to energy harvesting from natural resources e.g. , solar , wind , etc .however , few studies analyze the behavior of a rf energy harvesting network from a stochastic - geometry standpoint . 
in ,the authors study the interaction between primary and cognitive radio networks , where cognitive radio nodes can harvest energy from the primary transmissions , by modeling node locations as poisson point processes ( ppps ) .a cooperative network with multiple transmitter - receiver pairs and a single energy harvesting relay is studied in by taking the spatial randomness of user locations into consideration .the analysis of large - scale rf energy harvesting networks with practical ts / ps techniques , is an open question in the literature . in this paper , we study the performance of a large - scale network with multiple transmitter - receiver pairs , where transmitters are connected to the power grid , while receivers employ the ps technique for rf energy harvesting . by using stochastic - geometry , we model the randomness of node locations and we analyze the fundamental trade - off between outage probability performance and average harvested energy . specifically , we study two main protocols : a ) a non - cooperative protocol , and b ) a cooperative protocol with orthogonal relay assistance . in the non - cooperative protocol ,each transmitter simultaneously transfers information and energy at the associated receiver via the direct link .the outage probability of the system as well as the average harvested energy are derived in closed form in function of the power splitting ratio .in addition , an optimization problem which minimizes the transmitted power under some well - defined outage probability and average harvesting constraints , is discussed and closed form solutions are provided .the cooperative protocol is introduced to show that relaying can significantly improve the performance of the system and achieve a better trade - off between outage probability performance and energy harvesting transfer .relaying cooperation is integrated in several systems and standards in order to provide different levels of assistance ( i.e. , cooperative diversity , energy savings , secrecy etc ) ; in this work , relays are used in order to facilitate the information / energy transfer . for the cooperative protocol, we introduce a set of potential dynamic - and - forward ( df ) relays , which are randomly distributed in the network according to a ppp ; these relays have similar characteristics with the transmitters and are also connected to the power grid . in this case , information and energy can be received at each destination via two independent paths ( in case of cooperation ) . 
for the relay selection ,we adopt a random selection policy based on a sectorized selection area with central angle at the direction of each receiver .the outage performance of the system for a selection combining ( sc ) scheme as well as the average harvested energy are analyzed in closed form and validate the cooperative diversity benefits .numerical results for different parameter set - up reveal some important observations about the impact of the central angle and relay density on the trade - off between information and energy transfer .it is the first time , to the best of the authors knowledge , that stochastic - geometry is used in order to analyze a ps energy harvesting network with / without relaying .the remainder of this paper is organized as follows .section [ system ] describes the system model and introduces the considered performance / harvesting metrics .section [ direct_link ] presents the non - cooperative protocol and analyzes its performance in terms of outage probability and average harvested energy .section [ system2 ] introduces the cooperative protocol and analyzes both performance metrics considered .simulation results are presented in section [ num ] , followed by our conclusions in section [ conc ] . denotes the -dimensional euclidean space , denotes the indicator function , is the lebesgue measure , denotes a two dimensional disk of radius centered at , denotes the euclidean norm of , denotes the probability of the event and represents the expectation operator .in addition , the number of points in is denoted by .we consider a 2-d large - scale wireless network consisting of a random number of transmitter - receiver pairs .the transmitters form an independent homogeneous ppp with of intensity on the plane , where denotes the coordinates of the node .each transmitter has a unique receiver ( not a part of ) at an euclidean distance in some random direction .all nodes are equipped with single antennas and have equivalent characteristics and computation capabilities .the time is considered to be slotted and in each time slot all the sources are active without any coordination or scheduling process . in the considered topology, we add a transmitter at the origin ] without loss of generality ; in this paper , we analyze the performance of this typical communication link but our results hold for any node in the process according to slivnyak s theorem . we assume a _ partial fading _ channel model , where desired direct links are subject to both small - scale fading and large - scale path loss , while interference links are dominated by the path - loss effects . according to the literature , this channel model is denoted as `` 1/0 fading '' and serves as a useful guideline for more practical configurations e.g. 
, all links are subject to fading .more specifically , the fading between and is rayleigh distributed so the power of the channel fading is an exponential random variable with unit variance .the path - loss model assumes that the received power is proportional to where is the euclidean distance between the transmitter and the receiver , denotes the path - loss exponent and we define .the euclidean distance between two nodes is defined as where the parameter refers to the minimum possible path - loss degradation and ensures the accuracy of our path - loss model for short distances .the instantaneous fading channels are known only at the receivers in order to perform coherent detection .in addition , all wireless links exhibit additive white gaussian noise ( awgn ) with variance .the transmitters are continuously connected to a power supply ( e.g. , battery or power grid ) and transmit with the same power . on the other hand, each receiver has rf energy harvesting capabilities and can harvest energy from the received electromagnetic radiation .the rf energy harvesting process is based on the ps technique and therefore each receiver splits its received signal in two parts a ) one part is converted to a baseband signal for further signal processing and data detection ( information decoding ) and b ) the other part is driven to the rectenna for conversion to dc voltage and energy storage .let denote the power splitting parameter for each receiver ; this means that of the received power is used for data detection while the remaining amount is the input to the rf - eh circuitry .we assume an ideal power splitter at each receiver without power loss or noise degradation , and that the receivers can perfectly synchronize their operations with the transmitters based on a given power splitting ratio . during the baseband conversion phase , additional circuit noise , ,is present due to phase - offsets and circuits non - linearities and which is modeled as awgn with zero mean and variance .based on the ps technique considered , the signal - to - interference - plus - noise ratio ( sinr ) at the typical receiver can be written as where denotes the total ( normalized ) interference at the typical receiver with and denotes the channel power gain for the link .a successful decoding requires that the received sinr is at least equal to a detection threshold . on the other hand ,rf energy harvesting is a long term operation and is expressed in terms of average harvested energy .since of the received energy is used for rectification , the average energy harvesting at the typical receiver is expressed as \bigg ) , \label{mean}\end{aligned}\ ] ] where ] , and . the asymptotic expression inis involved in the optimization problem and determines its feasibility .more specifically , if the outage probability floor in is higher than the outage probability constraint , there is not any transmitted power that can satisfy and therefore the optimization problem becomes infeasible .the constant power splitting case corresponds to a low implementation complexity and is appropriate for ( legacy ) systems where the rectenna s design is predefined and the power splitting parameter is not adaptable . for the general case , where both and are adjustable, it can be easily seen that the two main constraints are binding at the solution can further be reduced . 
by examining the cases where one constraint is binding and the other holds with inequality , we show that at the optimal solution the inequality constraint holds with equality . ] . in this case, the optimization problem is transformed to the solution of a standard quadratic equation and for the solution is given by we note that the optimization problem is infeasible for .the general optimization problem requires adaptive and dynamic rf power splitting and therefore refers to a higher implementation complexity .the implementation problem can be solved either by a central controller or in a distributed fashion . in the first case ,a central unit that controls the network , solves the problem and broadcasts the common solution ( transmitted power , power splitting ratio ) to all nodes ; the transmitters and the receivers adjust their transmitted power and the power splitting ratio , respectively . in the distributed implementation, each node can locally solve the optimization problem without requiring external signaling ( but with the cost of a higher computational complexity ) .the optimization problem involves only deterministic and average system parameters such as geometric distances , network density , path - loss exponent and channel statistics ; these parameters are estimated at the beginning of the communication and remain constant for a long operation time .is the radius of the selection sector . ]the cooperative scheme exploits the relaying / cooperative concept in order to combat fading and path - loss degradation effects .the network topology considered is modified by adding a group of single - antenna df relays , which have not their own traffic and are dedicated to assist the transmitters .[ model2 ] schematically presents the network topology for the cooperative protocol .the location of all relay nodes are modeled as a homogeneous ppp denoted by with density ; this assumption refers to mobile relays where their position as well as their `` availability '' changes with the time .the relay nodes are also continuously connected to a power supply ( e.g. , battery ) and have equivalent computation / energy capabilities .we adopt an orthogonal relaying protocol where cooperation is performed in two orthogonal time slots .it is worth noting that although several cooperative schemes have been proposed in the literature e.g. , , the orthogonal relaying protocol has a low complexity and is sufficient for the purposes of this work .the cooperative protocol operates as follows : 1 .the first phase of the protocol is similar to the non - cooperative scheme and thus all transmitters simultaneously broadcast their signals towards the associated receivers .each transmitter defines a 2-d relaying area around its location and each relay node located inside this area is dedicated to assist this transmitter ; this means that all relays consider the signal generated by as a useful information and all the other signals as interference . in accordance to the general system model, we assume that direct links suffer from both small - scale fading and path - loss , while interference links are dominated by the path - loss attenuation ( 1/0 partial fading ) . by focusing our study on the typical transmitter , we define as the power fading gain for the link with . in this case , the direct link is characterized by , , while the sinr at the relay is written as + + where denotes the total ( normalized ) interference received at . 
if the relay node can decode the transmitted signal , which means that , it becomes a member of the transmitter s potential relay .it is worth noting that the relay nodes use all the received signal for information decoding , since they have not energy harvesting requirements .2 . in the second phase of the protocol , one relay node ( if any ) that successfully decoded the transmitted signal accesses the channel and retransmits the source s signal .we assume a _random selection _process which selects a single relay out of all potential relays with equal probability .the random relay selection does not require any instantaneous channel feedback or any instantaneous knowledge of the geometry and is appropriate for low complexity implementations with strict energy constraints .more sophisticated relay selection policies , which take into account the instantaneous channel conditions , can also be considered in order to further improve the cooperative benefits .we define as the selected relay for the typical transmitter , is the channel power gain for the link and is the homogeneous ppp that contains all the selected relays for whole network . if the potential relay set is empty for a specific transmitter ( no relay was able to decode the source s transmitted signal ) , its message is not transmitted during the second phase of the protocol and therefore does not enjoy cooperative diversity benefits .the relaying link for the ( typical ) receiver is characterized by the following equations + \bigg ) , \label{mean2}\end{aligned}\ ] ] + where denotes the total ( normalized ) interference at the receiver , is the power splitting ratio used in the second phase of the protocol , is the transmitted power for each active relay and the binary variable is equal to one in case of a relaying transmission , while it takes the value zero when the relay set is empty .a perfect synchronization between the selected relay and the associated receiver is assumed for a given power splitting ratio . as for the decoding process at the receivers , we assume that the two copies of the transmitted signal are combined with a simple sc technique ; this means that information decoding is based on the best path between direct / relaying links .it is well known that sc only requires relative sinr measurements and thus it is simpler than maximum ratio combiner , which requires exact knowledge of the channel state information for each diversity branch . in addition , sc significantly reduces power consumption because continuous estimates of the channel state information are not necessary ; this is beneficial for the considered rf energy harvesting system , where energy saving is a critical requirement e.g. , . regarding the potential use of the non - selected branch for rf energy harvesting , here , we assume a simple / conventional implementation and the received energy allocated to the sc can not be used for rf energy harvesting purposes .on the other hand , the energy harvesting process exploits both transmission phases and the total average energy harvested becomes equal to the considered random relay selection process does not require any instantaneous channel feedback and is appropriate for scenarios with critical energy / computation constraints. however , the definition of the selection area has a significant impact on the system performance . in this work , we assume that is a circular sector and , circular discs around the different transmitters do not overlap at most of the time . 
in our case, we have circular sectors and therefore the probability of overlapping becomes much lower .] with center , radius and central angle with orientation at the direction of the receiver . by appropriately adjusting the central angle of the sector, we can ensure that the relaying paths are shorter than the direct distance ; this parameterization avoids scenarios where the selected relay experiences more serious path - loss effects than the direct link .it is proven in appendix [ angle_sec ] that a selection area ,\ ; \theta \in [ -\theta_0\;\theta_0]\big\} ] denotes the integration area , follows from the probability generating functional of a ppp ( * ? ? ?4.6 ) , by using the transformation , by using the transformation , and from integration by parts ; is the lower incomplete gamma function .the outage probability for the typical transmitter - receiver link can be written as where follows from the cumulative distribution function of an exponential random variable with unit variance and the laplace transform in is given by .it is worth noting that although the above analytical method is similar to several stochastic geometry works e.g. , , our analysis / result concerns a different problem and is based on different system assumptions .let be a ppp with density and let ; by using campbell s theorem for the expectation of a sum over a point process , we have : where .we define as the angle which is formed by the relay node , the transmitter and the receiver , and , as depicted in fig .[ model2 ] . by using the cosine rule ,the requirement that the relay - receiver distance should be shorter than gives : .\end{aligned}\ ] ] in the case where the selection area is a sector with a constant central angle , by applying the above condition to the border of the sector ( i.e. , for a distance ) , we have .\end{aligned}\ ] ] it is worth noting that the above condition gives the maximum range of the angle ; any angle defined in this range , it also supports the distance requirement .let be the distance between transmitter and relay .the relay nodes that are able to successfully decode the source s signal form the point process , which is generated by the homogeneous ppp process by applying a thinning procedure ; therefore is a ppp with intensity where for the above expression we have used the expression in proposition [ prop1 ] for a direct distance equal to and . if we focus on the typical transmitter , the mean of inside the area is equal to \lambda_r \exp\left(-\frac{\sigma^2 \omega r_0^{\alpha}}{p_t } \right)\xi(\lambda , r_0,r_0 ) \nonumber \\ & = \int_{-\theta_0}^{+\theta_0}\!\!\int_{r_0}^{\eta } \lambda_r \exp\left(-\frac{\sigma^2 \omega r^{\alpha}}{p_t } \right)\xi(\lambda , r , r_0)r dr d\theta \nonumber \\ & \;\;\;\;\;+\lambda_r \theta_0 r_0 ^ 2 \exp\left(-\frac{\sigma^2 \omega r_0^{\alpha}}{p_t } \right)\xi(\lambda , r_0,r_0)\end{aligned}\ ] ] by using fundamental properties of a ppp process , the probability to have an empty relaying set is equal to using the cosine rule , the distance relay - receiver can be expressed as , where denotes the distance transmitter - relay ( see also fig . [ model2 ] ) . in the case of a relaying transmission ,the interference at each receiver is generated by all selected relays which form a ppp with density ( i.e. 
, one relay is selected for each transmitter with probability ) .for the outage probability of the relaying link , we can apply the derived expressions for the direct link as follows where gives the area of .we note that the above expression takes into account that the smallest distance between a communication pair is ; the points of with are considered to have a distance according to the considered radio propagation model in .let the distance relay - receiver and the distance transmitter - relay ; by using the cosine rule ( see also fig . [ model2 ] ) , we have .for a selection area ,\;\theta\in [ -\theta_0,\;\theta_0 ] \}$ ] , the average relay - receiver attenuation can be expressed as o. ozel , k. tutuncuoglu , j. yang , s. ulukus and a. yener , `` transmission with energy harvesting nodes in fading wireless channels : optimal policies , '' _ ieee j. select .areas commun .17321743 , sept .o. orhan , d. gunduz , and e. erkip , `` throughput maximization for an energy harvesting communication system with processing cost , '' in _ proc .. theory work ._ , lausane , switzerland , sept .2012 , pp . 8488 .j. jeon and a. ephremides , `` the stability region of random multiple access under stochastic energy harvesting , '' in _ proc .inf . theory _ , saint petersburg , russia , july 2011 , pp .17961800 .a. a. nasir , x. zhou , s. durrani , and r. a. kennedy , `` relaying protocols for wireless energy harvesting and information processing , '' _ ieee trans .wireless commun .36223636 , july 2013 .z. ding , s. m. perlaza , i. esnaola , and h. v. poor , `` power allocation strategies in energy harvesting wireless cooperative networks , '' _ ieee trans .wireless commun ._ , accepted for publication , july 2013 .[ available online:]http://arxiv.org / pdf/1307.1630v2.pdf d. s. michalopoulos , h. a. suraweera , and r. schober , `` relay selection for simultaneous information and wireless energy transfer : a tradeoff perspective , '' _ ieee trans . wireless commun ._ , submitted for publication , march 2013 .[ online available ] : http://arxiv.org/pdf/1303.1647.pdf b. k. chalise , w -k .ma , y. d. zhang , h. a. suraweera , and m. g. amin , `` optimum performance boundaries of ostbc based af - mimo relay system with energy harvesting receiver , '' _ ieee trans .61 , pp . 41994213 , sept .2013 .m. haenggi , j. g. andrews , f. baccelli , o. dousse , and m. franceschetti , `` stochastic geometry and random graphs for the analysis and design of wireless networks , '' _ ieee j. selec .areas commun .10291046 , sept .2009 .z. shens , d. l. goeckel . k. k. leung , and z. ding , `` a stochastic geometry approach to transmission capacity in wireless cooperative networks , '' in _ proc .ind . mob ._ , tokyo , japan , sept .2009 , pp .622626 .r. vaze , `` transmission capacity of wireless ad hoc networks with energy harvesting nodes , '' _ ieee trans .inf . theory _ , submitted for publication , may 2013 .[ available online : ] http://arxiv.org/pdf/1205.5649v1.pdf h. s. dhillon ,li , p. nuggehalli , z. pi , and j. g. andrews , `` fundamental of heterogeneous cellular networks with energy harvesting , '' _ ieee trans .wireless commun ._ , submitted for publication , july 2013 .[ available online : ] http://arxiv.org/pdf/1307.1524v1.pdf k. song , j. lee , s. park , and d. hong , `` spectrum - efficient operating policy for energy - harvesting clustered wireless networks , '' in _ proc .ieee pers .ind . mob .radio commun ._ , london , uk , sept .2013 , pp . 23932397 .h. wang , s. ma , t. -s .ng , and h. v. 
poor , `` a general analytical approach for opportunistic cooperative systems with spatially random relays '' , _ ieee trans .wireless commun .41224129 , dec . 2011 .y. a. chau and k. y. huang , `` channel statistics and performance of cooperative selection diversity with dual - hop amplify - and - forward relay over rayleigh fading channels , '' _ ieee trans .wireless commun ._ , vol . 7 , pp . 17791785 , may 2008 .ioannis krikidis ( s03-m07-sm12 ) received the diploma in computer engineering from the computer engineering and informatics department ( ceid ) of the university of patras , greece , in 2000 , and the m.sc and ph.d degrees from ecole nationale suprieure des tlcommunications ( enst ) , paris , france , in 2001 and 2005 , respectively , all in electrical engineering . from 2006 to 2007he worked , as a post - doctoral researcher , with enst , paris , france , and from 2007 to 2010 he was a research fellow in the school of engineering and electronics at the university of edinburgh , edinburgh , uk .he has held also research positions at the department of electrical engineering , university of notre dame ; the department of electrical and computer engineering , university of maryland ; the interdisciplinary centre for security , reliability and trust , university of luxembourg ; and the department of electrical and electronic engineering , niigata university , japan .he is currently an assistant professor at the department of electrical and computer engineering , university of cyprus , nicosia , cyprus .his current research interests include information theory , wireless communications , cooperative communications , cognitive radio and secrecy communications .krikidis serves as an associate editor for the ieee wireless communications letters , ieee transactions on vehicular technology and elsevier transactions on emerging telecommunications technologies .he was the technical program co - chair for the ieee international symposium on signal processing and information technology 2013 .he received an ieee communications letters and an ieee wireless communications letters exemplary reviewer certificate in 2012 .he was the recipient of the _ research award young researcher _ from the research promotion foundation , cyprus , in 2013 . | energy harvesting ( eh ) from ambient radio - frequency ( rf ) electromagnetic waves is an efficient solution for fully autonomous and sustainable communication networks . most of the related works presented in the literature are based on specific ( and small - scale ) network structures , which although give useful insights on the potential benefits of the rf - eh technology , can not characterize the performance of general networks . in this paper , we adopt a large - scale approach of the rf - eh technology and we characterize the performance of a network with random number of transmitter - receiver pairs by using stochastic - geometry tools . specifically , we analyze the outage probability performance and the average harvested energy , when receivers employ power splitting ( ps ) technique for `` simultaneous '' information and energy transfer . a non - cooperative scheme , where information / energy are conveyed only via direct links , is firstly considered and the outage performance of the system as well as the average harvested energy are derived in closed form in function of the power splitting . 
for this protocol, an interesting optimization problem, which minimizes the transmitted power under outage-probability and harvesting constraints, is formulated and solved in closed form. in addition, we study a cooperative protocol where the sources' transmissions are supported by a random number of potential relays that are randomly distributed in the network. in this case, information/energy can be received at each destination via two independent and orthogonal paths (in case of relaying). we characterize both performance metrics when a selection combining scheme is applied at the receivers and a single relay is randomly selected for cooperative diversity. rf energy harvesting, stochastic geometry, poisson point process, relay channel, power consumption, outage probability. |
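as a small illustration of the relay-selection geometry described before the reference list of the article above, the following python sketch estimates the average relay-to-receiver attenuation over a selection sector by monte carlo, using the cosine rule. it assumes the sector (radius rho, half-angle theta0) is centred at the transmitter and oriented towards its receiver at distance l0, that relays are uniform over the sector area, and that distances below d_min are clamped to d_min; all parameter names and values are illustrative, since the paper's closed-form expression is garbled in this dump.

```python
# monte-carlo sketch of the average relay-to-receiver attenuation over a
# selection sector, using the cosine rule quoted in the article above.
# sector geometry, parameter names and values are illustrative assumptions.
import numpy as np

def avg_relay_rx_attenuation(l0, rho, theta0, alpha, d_min, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # uniform points over the sector: radial density proportional to r
    r = rho * np.sqrt(rng.random(n))
    theta = rng.uniform(-theta0, theta0, n)
    # cosine rule: relay-receiver distance given transmitter-relay distance r
    d = np.sqrt(r**2 + l0**2 - 2.0 * r * l0 * np.cos(theta))
    d = np.maximum(d, d_min)            # clamp distances below the minimum
    return np.mean(d ** (-alpha))       # average path-loss factor

# example: sector of radius 10 m, half-angle pi/3, receiver 20 m away
print(avg_relay_rx_attenuation(l0=20.0, rho=10.0, theta0=np.pi/3, alpha=3.0, d_min=1.0))
```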
today s widespread incidences of cyber - attacks ( e.g. , most recently of the us democratic party and at companies such as sony , verizon , yahoo , target , jp morgan , office of personnel management , ashley madison ) , each more audacious than earlier ones , makes them perhaps the primary threat faced by individuals , organizations , and nations alike . consequences and implications of cyber - attacks include monetary losses , threats to critical infrastructure and national security , disruptions to daily life , a potential to cause loss of life and physical property , and data leaks exposing sensitive personal information about users and their activities .the largely quasi - static and unadaptable nature of existing cyber - defenses makes them vulnerable to rapidly evolving attack mechanisms , and thus engender not just a ` warning deficit ' but also a ` detection deficit ' , i.e. , an increase in the mean time between the time of an attack and its discovery .it has been well argued that , because news about an organization s compromise sometimes originates _ outside _ the organization , one could use open source indicators ( e.g. , news and social media ) as indicators of a cyber - attack . social media , in particular , turns users into social sensors empowering them to participate in an online ecosystem of event detection for happenings such as disease outbreaks , civil unrest , and earthquakes . while the use of social media can not fully supplant the need for internal telemetry for certain types of attacks ( e.g. , use of network flow data to detect malicious network behavior ) , analysis of such online media can provide insight into a broader range of cyber - attacks such as data breaches , account hijacking and newer ones as they emerge . at the same time it is non - trivial to harness social media to identify cyber - attacks . *our objective is to detect a range of different cyber - attacks as early as possible , determine their characteristics ( e.g. , the target , the type of attack ) , in an unsupervised manner . *prior work ( e.g. , ) relies on weak supervision techniques which will be unable to capture the dynamically evolving nature of cyber - attacks over time and are also unable to encode characteristics of detected events , as we aim to do here .our main contributions are : * * a framework for cybersecurity event detection based on online social media . *our dynamic event trigger expansion ( dete ) approach uses a limited , fixed , set of general seed event triggers and learns to map them to specific event - related expansions and thus provide situational awareness into cyber - events in an unsupervised manner . * * a novel query expansion strategy based on dependency tree patterns . * to model typical reporting structure in how cyber - attacks are described in social media ,we propose a dynamic event trigger expansion method based on convolution kernels and dependency parses .the proposed approach also employs a word embedding strategy to capture similarities between event triggers and candidate event reports . * * extensive empirical evaluation for three kinds of cyber - attacks*. 
we manually catalog ground truth for three event classes distributed denial of service ( ddos ) attacks , data breaches , and account hijacking and demonstrate that our approach consistently identifies and encodes events outperforming existing methods .the input to our methodology is a collection of time - ordered tweets organized along time slots .let denote the tweet space corresponding to a subcollection , let denote the target tweet subspace ( in our case , comprising cyber - attack events ) , and let denote the rest of the tweets in the considered tweet space . * typed dependency query * : a * _ typed dependency query _ * is a linguistic structure that characterizes a semantically coherent event related topic .different from n - grams , terms contained in a * _ typed dependency query _ * share both syntactic and semantic relationships .mathematically , a * _ typed dependency query _ * is formulated as a tree structure , where node can be either a unigram , user mention , or a hashtag and represents a syntactic relation between two nodes . * seed query * : a * _ seed query _ * is a manually selected typed dependency query targeted for a certain type of event .for instance , `` hacked account '' can be defined as a potential * _ seed query _ * for an account hijacking event .* expanded query * : an * _ expanded query _ * is a typed dependency query which is automatically generated by the dete algorithm based on a set of seed queries and a given tweet collection . * _ expanded query _ * and its seed query can be two different descriptions of the same subject . more commonly , an * _ expanded query _ * can be more specific than its seed query .for instance , `` prime minister dmitry medvedev twitter account hack '' , an expanded query from `` hacked account '' , denotes the message of an account hijacking event related with dmitry medvedev .* event representation * : an event is defined as , where is the set of event - related expanded queries , denotes when the event happens , and refers to the category of the cyber - attack event ( i.e. , ddos , account hijacking , or data breach ) .here is a defined as a set because , in general , a cyber - attack event can be presented and retrieved by multiple query templates .for instance , among online discussion and report about event `` fashola s account , website hacked '' , the query template most used are `` fashola twitter account hack '' , `` fashola n78 m website twitter account hack '' and `` hack account '' . given the above definitions , the major tasks underlying the cyber - attack event detection problem are defined as follows : * task 1 : * * target domain generation : * given a tweet subcollection , * _ target domain generation _ * is the task of identifying the set of target related tweets . contains critical target related information based on which the expanded query can be mined .* task 2 : * * expanded query extraction : * given target domain , the task of * _ expanded query extraction _ * is to generate a set of expanded queries which represents the generic concept delivered by . 
thus set can be used to retrieve event related information from other collection sets .* task 3 : * * dynamic typed query expansion : * given a small set of seed queries and a twitter collection , the task of * _ dynamic typed query expansion _ * is to iteratively expand and until all the target related messages are included .in traditional information extraction ( ie ) , a large corpus of text must first be annotated to train extractors for event triggers , defined as main keywords indicating an event occurrence . however , in our scenario using online social media , a manually annotated label set is impractical due to the huge volume of online media and the generally noisy characteristics of the text . in this section ,we propose a novel method to automatically mine query templates over which the event tracking is performed . in this subsection, we propose the method of target domain generation , which serves as the source of social indicators for the detection of ongoing cyber - attack events . given a query and a collection of tweets , the typical way to retrieve query - related documentation is based on a bag of words model which comes with its attendant disadvantages . consider the following two tweets : `` has riseups servers been compromised or * data leaked * ? '' and `` you completely screwed me over ! my phones back on , still * leaking data * and you are so unhelpful # cancellingcontract # bye '' . though the important indicator `` leak data '' for data breach attack is involved in both tweets , the second tweet is complaining about a phone carrier and is unexpected noise in our case . to address this problem ,syntactically bound information and semantic similarity constraints are jointly considered in our proposed method . more specifically , each tweet in is first converted into its dependency tree form .thus for a given seed query , the target domain can be generated by collecting all tweets which are both syntactically and semantically similar to the seed query .mathematically , given two dependency trees and , a convolution tree kernel is adopted to measure the similarity using shared longest common paths : where and are two nodes from two trees and respectively , represents set of positive real numbers , is the indicator function and counts the number of common paths between the two trees which peak at and , which can be calculated by an efficient algorithm proposed by kate et al . , as described in algorithm [ alg : cpp ] .set set set in algorithm [ alg : cpp ] , is the number of common paths between the two trees which originate from and , and can be recursively defined as : where denotes the set of children node . in both algorithm [ alg : cpp ] and equation [ equ : cdp ] , we use the semantic similarity operator , introduced to consider the semantic similarity of tree structre .this semantic similarity is computed by considering cosine similarity of word embeddings vector generated from the word2vec algorithm .this model considers the common paths which are linguistically meaningful , which reduces the noise introduced by coincidentally matched word chains .in addition , long - range dependencies between words , which decreases the performance , are avoided because functionally related words are always directly linked in a dependency tree . in this subsection, we propose a way to dynamically mine an expanded query given a small collect of seed query , as shown in table [ tab : seedquery ] . by providing a small set of seed queries ( unigrams ) , zhao et al . 
proposed a dynamic query expansion ( dqe ) method which is able to iteratively expand the seed query set from currently selected target tweet subspace until convergence . looking beyond the simple unigrams based expansion , by introducing dependency - based tree structure extraction, we build a dynamic expanded query generation model for the cyber - attack detection task . .seedqueries for cyber - attack events . [ cols="^,<",options="header " , ] 0.40 0.58 we comprehensively show in fig .[ fig : dqestreamgraph2014 ] and fig .[ fig : dqestreamgraph2016 ] , the wide range of events that our system is able to detect .notice , the clear burst in twitter activity that our query expansion algorithm is able to pick . through the following case studies we highlight some of the interesting cases for each of three cyber attack types , that our system detected .* targeted ddos attacks on sony and dyn * : in late , november 2014 , a hacker group calling itself `` the guardians of peace '' hacked their way into sony pictures , leaving the sony network crippled for days , allegedly perpertrated by north korea .we capture 12 separate events of ddos attacks including four in last week of august 2014 , starting with the first on august 24th . further in 2015 , more ensuing attacks are captured one highlighted by the data breach of their movie production house , on december 12th and then a massively crippling targeted , ddos attack on their playstation network in late december , 2015 .another noteworthy case of ddos attacks in 2016 , is the multiple distributed denial - of - service attack on dns provider `` dyn '' from october 21st through 31st in 2016 , that almost caused an worldwide internet outage .our system detects generates several query expansions , shown in fig .[ fig : dyndns ] which clearly characterizes the nature of these ddos attacks where the hackers turned a large number of internet - connected devices around the world in to botnets executing a distributed attack . *ashley madison website data breach : * in july 2015 , a group calling itself `` the impact team '' stole the user data of ashley madison , an adult dating website billed as enabling extramarital affairs .the hackers stole the website all customer data and threatened to release the personally identifying information if the site was not immediately shut down . 
on 18 and 20august , the group leaked more than 25 gigabytes of company data , including user details .we are able to detect this data breach on july 20 , 2015 .the word clouds in fig .[ fig : ashleymadison ] clearly show how our method iteratively expands from the seed queries to the expanded queries in the last , iteration 3 capturing a very rich semantic aspect of the breach .after the initial burst as seen in the figure , we also see a second corresponding burst a month later , on august 20 when the user data is released anot now the top query expansion captured characterized by the mentions of user data leak of the same website .* twitter account hijackings * : we were also able to detect with very high date accuracy , several high profile cases of account hijackings of social media accounts of known personalities and government institutions including the twitter account for u.s .central command which was hacked by isis sympathizers on january 12 , 2015 .we show in fig .[ fig : uscentcom ] that our method not only identifies the victim ( `` central command twitter account hack '' ) but also the actor who perpetrated the hacking ( `` isis hack twitter account '' ) .[ sub : case_studies ]* cyberattack detection and characterization . * detecting and characterizing cyber attacks is highly challenging due to the constant - involving nature of cyber criminals .recent proposals cover a large range of different methods , and table [ tab : related_work ] lists representative works in this space .earlier work primarily focuses on mining network traffic data for intrusion detection .specific techniques range from classifying malicious network flows to anomaly detection in graphs to detect malicious servers and connections .more recently , researchers seek to move ahead to predict cyber attacks before they happened for early notifications . for example , liu et al . leverage various network data associated to an organization to look for indicators of attacks . by extracting signals from mis - configured dns and bgp networks as well as spam and phishing activities ,they build classifiers to predict if an organization is ( or will be ) under attack . similarly , soska et al .apply supervised classifiers to network traffic data to detect vulnerable websites , and predict their chances of turning malicious in the future . in recent years, online media such as blogs and social networks become another promising data source of security intelligence .most existing work focuses on technology blogs and tweets from _ security professionals _ to extract useful information .for example , liao et al . builds text mining tools to extract key attack identifiers ( ip , md5 hashes ) from security tech blogs .sabottke et al .leverage twitter data to estimate the level of interest in existing cve vulnerabilities , and predict their chance of being exploited in practice .our work differs from existing literature since we focus on crowdsourced data from the much broader user populations who are likely the _ victims _ of security attacks .the most related work to ours is which uses weakly supervised learning to detect security related tweets .however , this technique is unable to capture the dynamically evolving nature of attacks and is unable to encode characteristics of detected events .* event extraction and forecasting on twitter . 
* another body of related work focuses on twitter to extract various events such as trending news , natural disasters , criminal incidents and population migrations .common event extraction methods include simple keyword matching and clustering , and topic modeling with temporal and geolocation constrains .event forecasting , on the other hand , aims to predict future evens based on early signals extracted from tweets .example applications include detecting activity planning and forecasting future events such as civil unrest and upcoming threats to national airports . in our work ,we follow a similar intuition to detect signals for major security attacks .the key novelty in our approach , different from these works , is the need for a typed query expansion strategy that provides both focused results and aids in extracting key indicators underlying the cyber - attack .cp1.2cmp1.9cmp1.70cmp1.70cmcccp1.85 cm & & & [ 0.65em]goal & [ 0.65em]data + & unsupervised & keyword expansion & information extraction & characterize event & type & detection & & + & & & & & & & cyberattacks & network data + & & & & & & & cyberattacks & twitter + & & & & & & & cyberattacks & wine + & & & & & & & malware & papers + & & & & & & & malware & wine + & & & & & & & vulnerability & twitter + & & & & & & & intrusion & network data + & & & & & & & intrusion & network data + & & & & & & & intrusion & network data + & & & & & & & insider & access log + & & & & & & & ioc & tech blogs + ours & & & & & & & cyberattacks & twitter +we have demonstrated an unsupervised approach to extract and encode cyber - attacks reported and discussed in social media .we have motivated the need for a careful template - driven query expansion strategy , and how the use of dependency parse trees and word embeddings supports event extraction . giventhe widespread prevalence of cyber - attacks , tools such as presented here are crucial to providing situational awareness on an ongoing basis .future work is aimed at broadening the class of attacks that the system is geared to as well as at modeling sequential dependencies in cyber - attacks .this will aid in capturing characteristics such as increased prevalence of attacks on specific institutions or countries during particular time periods .r. j. kate .a dependency - based word subsequence kernel . in _ proceedings of the conference on empirical methods in natural language processing _ , pages 400409 .association for computational linguistics , 2008 .q. mei and c. zhai .discovering evolutionary theme patterns from text : an exploration of temporal text mining . in _ proceedings of the eleventh acm sigkdd international conference on knowledge discovery in data mining _ ,pages 198207 .acm , 2005 .m. ovelgonne , t. dumitras , b. a. prakash , v. s. subrahmanian , and b. wang . understanding the relationship between human behavior and susceptibility to cyber - attacks : a data - driven approach . in _ proc .tist16 _ , 2016 .t. sakaki , m. okazaki , and y. matsuo .earthquake shakes twitter users : real - time event detection by social sensors . in _ proceedings of the 19th international conference on world wide web _ ,pages 851860 .acm , 2010 . | social media is often viewed as a sensor into various societal events such as disease outbreaks , protests , and elections . we describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber - attacks . our approach detects a broad range of cyber - attacks ( e.g. 
, distributed denial of service ( ddos ) attacks , data breaches , and account hijacking ) in an unsupervised manner using just a limited fixed set of seed event triggers . a new query expansion strategy based on convolutional kernels and dependency parses helps model reporting structure and aids in identifying key event characteristics . through a large - scale analysis over twitter , we demonstrate that our approach consistently identifies and encodes events , outperforming existing methods . |
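as a concrete illustration of the query-expansion machinery in the article above, the following python sketch implements a simplified dependency-tree convolution kernel with word-embedding node matching. it only counts softly matching downward word chains; the kernel actually used in the paper (kate's common-path formulation) additionally combines two such chains through a shared peak node, which is omitted here for brevity. the node class, the 0.6 similarity threshold, and the assumption that embeddings come from a word2vec-style model are illustrative and not taken from the paper.

```python
# simplified sketch of a dependency-tree convolution kernel with
# word-embedding node matching; downward chains only (see lead-in).
import numpy as np

class Node:
    def __init__(self, word, vec, children=None):
        self.word, self.vec = word, np.asarray(vec, dtype=float)
        self.children = children or []

def soft_match(n1, n2, tau=0.6):
    # cosine similarity of the two node embeddings, thresholded at tau
    c = float(np.dot(n1.vec, n2.vec) /
              (np.linalg.norm(n1.vec) * np.linalg.norm(n2.vec) + 1e-12))
    return c if c >= tau else 0.0

def common_chains(n1, n2):
    """(soft) number of common downward word chains starting at the pair (n1, n2)."""
    s = soft_match(n1, n2)
    if s == 0.0:
        return 0.0
    total = 1.0                          # the single-node chain
    for c1 in n1.children:
        for c2 in n2.children:
            total += common_chains(c1, c2)
    return s * total

def nodes(t):
    yield t
    for c in t.children:
        yield from nodes(c)

def tree_kernel(t1, t2):
    # sum over all node pairs, as in a convolution kernel
    return sum(common_chains(a, b) for a in nodes(t1) for b in nodes(t2))
```

in practice the trees would come from a dependency parser run over tweets and the vectors from a pretrained embedding model; the kernel value can then be used directly as the syntactic-plus-semantic similarity between a seed query and a candidate tweet.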
since complete genomes of many organisms are available from web - based databases , a full and systematic search of genome structures , functions and dynamics becomes an essential part of the study for both biologists and physicists .for the large amount of genomes , developing quantitative methods to extract meaningful information is a major challenge with respect to applications of statistical mechanics and nonlinear dynamics to biological systems . to understand the complete genomes ,some statistical and geometrical methods were developed .the studies of the complete genomes of many organisms came up with the determinations of the nontrivial statistical characteristics , such as the long - range correlations , the short - range correlations and the fractal features or genomic signatures .in particular , it was found that the transposable elements , as the mobile dna sequences , have the ability to move from one place to another and make many replicas within the genome via the transposition .their origin , evolution , and tremendous effects on the genome structure and the gene function are issues of fundamental importance in biology . in general , the symbolic dynamics and the recurrence plots are basic methods of nonlinear dynamics for analyzing complex systems .although the conventional methods have made great strides in understanding genetic patterns , they are required to analyze the so - called junk dna with complex functions governing mutations .recently , a one - to - one metric representation of a genome borrowed from the symbolic dynamics was proposed to form a fractal pattern in a plane . by using the metric representation method , the recurrence plot technique of the genomewas established to analyze the correlation structures of nucleotide strings .the transference of nucleotide strings appears at many positions of a complete genome and makes a regular and irregular correlation structures , but the periodic correlation structures in the complete genome are the most interesting in view of the dynamics . in this paper , using the metric representation and the recurrence plot method , we identify periodic correlation structures in bacterial and archaeal complete genomes and analyze the mechanism of the periodic correlation structures .since the nucleotide strings include transposable elements , the mechanism is conducible to understanding the genome structures in terms of nucleotide strings transferring in the genomes and exploring relations between transference of nucleotide strings and the transposable elements .in what follows , we give a brief presentation of the metric representation and the recurrence plot method , which are detailed in . for a given symbolic sequence ( ) ,a metric representation for its subsequences ( ) is defined as where is 0 if or 1 if and is 0 if or 1 if .it maps the one - dimensional symbolic sequence to the two - dimensional plane ( ) .the subsequences with the same ending -nucleotide string are labeled with .they correspond to points in the zone encoded by the -nucleotide string . with two subsequences and ( ) , we calculate where and is the heaviside function [ , if ; , if . when , i.e. , , a point is plotted on a plane . repeating the above process for ] , we obtain a recurrence plot of the symbolic sequence . to present the correlation structure in the recurrence plot plane, we define a correlation intensity at a given correlation distance as which displays the transference of -nucleotide strings in the symbolic sequence . 
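since the defining formulas of the metric representation and of the correlation intensity are garbled in this dump, the following python sketch illustrates the construction under an assumed bit encoding of the four nucleotides (a,c mapped to 0 and g,t to 1 for the first coordinate; a,g to 0 and c,t to 1 for the second). any encoding that separates the four letters preserves the property used below: prefixes ending with the same m-nucleotide string receive points in the same dyadic cell of side 2^-m, which is exactly the recurrence criterion.

```python
# minimal sketch of the metric representation, the recurrence criterion and
# the correlation intensity; the bit encoding of the letters is an assumption.
import numpy as np

X_BIT = {'a': 0, 'c': 0, 'g': 1, 't': 1}   # assumed encoding
Y_BIT = {'a': 0, 'g': 0, 'c': 1, 't': 1}

def metric_representation(seq):
    """map each prefix s_1..s_i to a point (alpha_i, beta_i) in the unit square;
    the most recent letter contributes the most significant binary digit, so
    prefixes sharing their last m letters share the first m digits of both
    coordinates and hence lie in the same 2^-m x 2^-m cell."""
    alpha, beta = np.zeros(len(seq)), np.zeros(len(seq))
    a = b = 0.0
    for i, s in enumerate(seq):
        a = 0.5 * a + 0.5 * X_BIT[s]
        b = 0.5 * b + 0.5 * Y_BIT[s]
        alpha[i], beta[i] = a, b
    return alpha, beta

def correlation_intensity(seq, m, d):
    """number of position pairs (i, i+d) whose length-m ending strings coincide,
    i.e. whose metric-representation points recur at resolution m."""
    return sum(seq[i - m + 1:i + 1] == seq[i - m + 1 + d:i + 1 + d]
               for i in range(m - 1, len(seq) - d))

def transferred_length(seq, i, d, m):
    """length of the transferred string detected at positions i and i+d:
    extend the length-m match backwards as far as it holds."""
    L = m
    while i - L >= 0 and seq[i - L] == seq[i - L + d]:
        L += 1
    return L

seq = "acgtacgtgg"
alpha, beta = metric_representation(seq)
print(correlation_intensity(seq, m=3, d=4))   # -> 2 for this toy sequence
```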
on the recurrent plot plane , since and , the transferring element has a length at least .we calculate the maximal value of to satisfy i.e. , and .the transferring element has a length and is placed at the positions and , which implies the correction distance . to understand the transferring characteristics of a complex genome, we investigate the correlation structures of simple periodic and random sequences . by randomly combining the four letters a , c , g and t , we firstly generate two random nucleotide sequences :one has the length of 67 and another has the length of 5000 .then , a periodic nucleotide sequence with the total length of 5000 is formed by repeating the short nucleotide string . using the metric representation and the recurrence plot method, we may determine the correlation intensities at different correlation distances with for the periodic and random sequences in fig .it is evident that there exist equidistant parallel lines with a basic correlation distance in fig .1(a ) , to form the periodic correlation structure for the periodic sequence .the basic correlation distance hereinafter called the basic periodic length is determined as .the correlation intensity decreases linearly with the increase of the correlation distance ( , , ) . however , in fig .1(b ) , the correlation intensity is very small , so there are almost no correlation structures for the random sequence .therefore , the periodic and random sequences exhibit two very different transferring characteristics : with the periodic correlation structure with a linearly decreasing intensity and without a clear correlation structure .at the end of 1999 , complete genomes including more of 20 bacteria were in the genbank . by using the string composition and the metric representation method , the suppressions of all short strings in 23 bacterial and archaeal complete genomes were determined . in this section , using the metric representation and the recurrence plot method , we determine all long periodic nucleotide strings ( bases ) in the 23 bacterial and archaeal genomes . for the 23 genomes , only 13 have long periodic nucleotide strings .all basic strings and their lengths of the long periodic nucleotide strings in the 13 bacterial and archaeal genomes are presented in table i in the order of decreasing suppressions of nucleotide strings .several periods and different basic strings can be seen depending on the genomes , but not necessarily on the lengths of genomes .the genomes of helicobacter pylori 26695 ( ) , helicobacter pylori j99 ( ) , haemophilus influenzae rd kw20 ( ) , mycobacterium tuberculosis h37rv ( ) , synechocystis sp .pcc6803 ( ) have more periods ( ) and basic strings ( ) than others , which have only fewer periods ( ) and basic strings ( ) . in each period, the number of the basic strings generally depends on the length of the period . the longer / shorter period the basic strings have ,the smaller / greater their number will be . in the next section , we will investigate the periodic transference of nucleotide strings in the bacterial and archaeal complete genoms and analyze the effects of periodic nucleotide strings on the correlation structures .the periodic correlation structures of a complete genome contain several basic periodic and/or quasi - periodic lengths , which are determined by using the metric representation and the recurrence plot method as follows . 
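a self-contained sketch of the test experiment just described, a 67-periodic sequence obtained by repeating a random 67-mer up to length 5000 together with a random sequence of the same length, might look as follows; the resolution m used in the paper is not recoverable from this dump, so m = 8 is an arbitrary illustrative choice.

```python
# sketch reproducing the periodic / random test experiment described above.
import random

random.seed(0)
ALPHABET = "acgt"

basic = "".join(random.choice(ALPHABET) for _ in range(67))   # random 67-mer
periodic = (basic * (5000 // 67 + 1))[:5000]                  # repeat to length 5000
rand_seq = "".join(random.choice(ALPHABET) for _ in range(5000))

def intensity(seq, m, d):
    # positions whose length-m strings recur at lag d
    return sum(seq[i:i + m] == seq[i + d:i + d + m]
               for i in range(len(seq) - d - m + 1))

m = 8
for d in (67, 134, 201, 50):
    print(d, intensity(periodic, m, d), intensity(rand_seq, m, d))
# for the periodic sequence the intensity at d = 67k is about 5000 - 67k - m + 1,
# i.e. it decreases linearly with k, while the random sequence shows essentially
# no recurrences at any lag, in line with the two behaviours described above.
```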
from the relationship between the correlation intensity and the correlation distance obtained by using eq .( 3 ) , the basic periodic lengths and their integer multiples with strong correlation intensities can be calculated . moreover , in the transference of nucleotide strings obtained by using eq .( 4 ) , the correlation distance with basic periodic lengths and their integer multiples can also be found . by using both methods , the basic periodic lengths of the periodic correlation structuresare determined , as shown in table ii , where the 23 complete genomes with official genbank accession numbers are arranged in the order of decreasing suppressions of nucleotide strings .when the periodic correlation structures have only a few peaks of the correlation intensity within the correlation distance , the basic periodic lengths are put in parentheses . to see the characteristics of the periodic correlation structures, we also present all basic string lengths in long periodic nucleotide strings ( bases ) in table ii . when a periodic correlation structure is identified based on a long periodic nucleotide string , the transference of nucleotide strings composed of the basic strings appears at some positions where the correlation distance is integer multiples of the period and monotonically increases . at the same time , the lengths of transferred nucleotide strings monotonically decrease .there exists a `` cascade '' arrangement of nucleotide strings related to the basic periodic length .however , when a periodic correlation structure is identified based on non - periodic nucleotide strings , the transference of nucleotide strings appears at several positions where the correlation distance is almost integer multiples of the basic periodic length .there are no `` cascade '' arrangements of nucleotide strings related to the basic periodic length .according to the characteristics of the periodic correlation structures , the results can be summarized as follows : ( 1)the correlation distance contains a single increasing period .the most of the complete genomes with a single increasing period have a basic periodic length of 67 .they include methanococcus jannaschii dsm 2661 ( _ mjan _ ) , methanobacterium thermoautotrophicum str .delta h ( _ mthe _ ) , pyrococcus horikoshii ot3 ( _ pyro _ ) , archaeoglobus fulgidus dsm 4304 ( _ aful _ ) , pyrococcus abyssi ( _ pabyssi _ ) and thermotoga maritima msb8 ( _ tmar _ ) genomes . consider the _genome as an example .2 displays the correlation intensity at different correlation distances with for the _ mjan _ genome .it is evident that there exist some equidistant parallel lines with a basic periodic length , to form a periodic correlation structure .the basic periodic length is determined as .generally , if the genome has a periodic nucleotide string with the basic string length , it would tend to form a periodic correlation structure . in tableii , the _ mjan _ genome has the correspondent basic string length for periodic nucleotide strings .for example , the nucleotide string ( 237122 - 237620 ) with is formed by repeating the basic string with the length , where . in other words ,the basic string is duplicated to the positions with the correlation distances , , , , , and . 
despite possible contribution from such periodic nucleotide strings , the periodic correlation structure is mainly formed by the transference of non - periodic nucleotide strings , which has approximately the same increasing period .for example , the nucleotide string ( 447 - 476 ) with is transferred to the places ( 514 - 543 ) , ( 581 - 610 ) , ( 651 - 680 ) , ( 718 - 747 ) , ( 785 - 814 ) , ( 855 - 884 ) , ( 922 - 951 ) , ( 994 - 1023 ) , ( 1064 - 1093 ) , ( 1132 - 1161 ) and with the correlation distances , , , , , , , , , and , respectively . since the nucleotide string is neither periodic nor a part of a periodic nucleotide string , its periodic transference is not a repetition of basic periodic nucleotide strings. moreover , fig .2 shows that there also exists a cluster of basic periodic lengths close to .their integer multiples are distributed near the periodic correlation structure .table ii shows that there also exists another basic string length for periodic nucleotide strings , which is conducible to form the cluster distribution near the periodic correlation structure .so both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structure with approximately the same increasing period .besides the _ mjan _ genome , the other genomes ( _ mthe _ , _ pyro _ , _ aful _ , _ pabyssi _ and _ tmar _ ) have no periodic nucleotide strings with the basic string length to make contributions to the periodic correlation structure .so the periodic correlation structure is formed by the transference of non - periodic nucleotide strings .furthermore , the genomes of mycoplasma genitalium g37 ( _ mgen _ ) , _ hinf _ , mycoplasma pneumoniae m129 ( _ mpneu _ ) , treponema pallidum subsp .pallidum str .nichols ( _ tpal _ ) , aeropyrum pernix k1 ( _ aero _ ) , rickettsia prowazekii str .madrid e ( _ rpxx _ ) and borrelia burgdorferi b31 ( _ bbur _ ) have basic periodic lengths , 4 , 12 , 24 , 65 , 84 and 162 , respectively . in tableii , they correspond to periodic nucleotide strings with the basic length except the _ aero _ genome .so both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structure with approximately the same increasing period .\(2 ) the correlation distance contains several increasing periods . 
the escherichia coli k-12 mg1655 ( _ ecoli _ ) genome has two basic periodic lengths 100 and 113 .the _ hpyl99 _ genome has three basic periodic lengths 8 , 15 and 21 .the _ mtub _genome has three basic periodic lengths 9 , 15 and 57 .consider the _hpyl99 _ genome as an example .3 displays the correlation intensity at different correlation distances with for the _ hpyl99 _ genome .it is evident that there exist some equidistant parallel lines with basic periodic lengths , to form periodic correlation structures .three basic periodic lengths are determined as , and .although there are some peaks of the correlation intensity in the correlation distance as shown in fig .3 , they do not form any periodic correlation structures and are not accounted .table ii also shows some periodic nucleotide strings with basic string lengths , , and their integer multiples , which contribute to the periodic correlation structures .for example , the nucleotide string ( 1061079 - 1061153 ) with is formed by repeating the basic string with the length , where .the nucleotide string ( 5153 - 5280 ) with is formed by repeating the basic string with the length , where . the nucleotide string ( 659300 - 659450 ) with is formed by repeating the basic string with the length , where .although the transference of non - periodic nucleotide strings might also contribute to the periodic correlation structures , they are mainly formed by repeating the basic periodic nucleotide strings .for example , the non - periodic nucleotide string ( 59514 - 59537 ) with is transferred to the places ( 59640 - 59663 ) and ( 59724 - 59747 ) with the correlation distances and , respectively .so both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structures with approximately the same increasing periods .\(3 ) the correlation distance has an increasing quasi - period .the bacillus subtilis subsp .subtilis str .168 ( _ bsub _ ) genome has a basic quasi - periodic length of 5000 .4 shows the correlation intensity at different correlation distances with for the _ bsub _ genome .it is evident that there exist some approximately equidistant parallel lines at the positions , 10605 , 15427 and 20468 , to form a quasi - periodic correlation structure with a basic quasi - periodic length .although a stronger correlation intensity appears at the position , it is far away from the quasi - periodic correlation structure and is not accounted . in table ii, there are no periodic nucleotide strings with the length to make a contribution to the quasi - periodic correlation structure . for example , the non - periodic nucleotide string ( 167978 - 169382 ) with is transferred to the place ( 172974 - 174378 ) with the correlation distance .the non - periodic nucleotide string ( 161449 - 161666 ) with is transferred to the places ( 167057 - 167274 ) , ( 172056 - 172273 ) and ( 946761 - 946798 ) with the correlation distances , and , respectively .so the transference of non - periodic nucleotide strings would form the quasi - periodic correlation structure .\(4 ) the correlation distance contains a combination of several increasing periods and an increasing quasi - period .firstly , the _ hpyl _ genome has two basic periodic lengths 7 , 8 and a basic quasi - periodic length of 114 . 
fig .5(a ) shows the correlation intensity at different correlation distances with for the _ hpyl _ genome , with a local region magnified .it is evident that there exist some equidistant parallel lines with basic periodic lengths , to form periodic correlation structures in a short range of the correlation distance .the two basic periodic lengths are determined as and .moreover , in fig .5(a ) , there also exist some approximately equidistant parallel lines at the positions , 207 , 324 , 438 , 552 , 666 and 780 , to form a quasi - periodic correlation structure in a long range of the correlation distance .the quasi - periodic correlation distance is described as , where the basic quasi - periodic length is 114 and . in table ii , there exist some periodic nucleotide strings with basic string lengths , and their integer multiples , but no periodic nucleotide strings with the basic string length .for example , the nucleotide string ( 1 - 181 ) with is formed by repeating the basic string with the length , where .the nucleotide string ( 444403 - 444490 ) with is formed by repeating the basic string with the length , where .although the transference of non - periodic nucleotide strings might also contribute to the periodic correlation structures , they are mainly formed by repeating the basic periodic nucleotide strings .for example , the non - periodic nucleotide string ( 84905 - 84926 ) with is transferred to the place ( 84929 - 94850 ) with the correlation distance .moreover , for the quasi - periodic correlation structure , the non - periodic nucleotide string ( 556196 - 556224 ) with is transferred to the places ( 556634 - 556662 ) , ( 556748 - 556776 ) , ( 556862 - 556906 ) , ( 557300 - 557328 ) , ( 557414 - 557442 ) and ( 557852 - 557880 ) with the correlation distances , , , , and , respectively .so both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structures with approximately the same increasing periods in a short correlation distance , but only the transference of non - periodic nucleotide strings would form the quasi - periodic correlation structure in a long correlation distance .secondly , the _ synecho _ genome has two basic periodic lengths 6 , 888 and a basic quasi - periodic length of 296 .5(b ) shows the correlation intensity at different correlation distance with for the _ synecho _ genome , with a local region magnified .it is evident that there exist some equidistant parallel lines with basic periodic lengths , to form periodic correlation structures in short and long ranges of correlation distances , respectively .two basic periodic lengths are determined as and . moreover , in fig .5(b ) , there also exist some approximately equidistant parallel lines at the positions and , where the basic quasi - periodic length is 296 and .they form quasi - periodic correlation structures in a long range of the correlation distance . 
in table ii , there exist some periodic nucleotide strings with integer multiples of and the basic string length , but no periodic nucleotide strings with the basic string length .for example , the nucleotide string ( 527703 - 527770 ) with is formed by repeating the basic string with the length , where .the nucleotide string ( 2354010 - 2355833 ) with is formed by repeating the basic string with the length , where .moreover , the nucleotide string ( 527395 - 527434 ) with is transferred to the places ( 527473 - 527512 ) , ( 527491 - 527530 ) , ( 527509 - 527548 ) , ( 527527 - 527566 ) and ( 527545 - 527584 ) with the correlation distances , , , , and , respectively .the non - periodic nucleotide string ( 2354010 - 2354300 ) with is transferred to the places ( 2356674 - 2356964 ) , ( 2357562 - 2357852 ) and ( 2358450 - 2358740 ) with the correlation distances , and , respectively . both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structures with approximately the same increasing periods in short and long correlation distances , but only the transference of non - periodic nucleotide strings would form the quasi - periodic correlation structures in a long correlation distance .\(5 ) the correlation distance contains almost no increasing periods .the genomes of aquifex aeolicus vf5 ( _ aquae _ ) , rhizobium sp . ngr234 plasmid pngr234a ( _ pngr234 _ ) , chlamydophila pneumoniae cwl029 ( _ cpneu _ ) and chlamydia trachomatis d / uw-3/cx ( _ ctra _ ) are among cases with such characteristics .consider the _genome as an example .6 shows the correlation intensity at different correlation distances with for the _ aquae _ genome .it is evident that there exist some equidistant parallel lines with a basic periodic length , which is determined as .however , for the basic periodic length , the maximal correlation intensity is only 179 and the correlation structure has only three peaks of the correlation intensity at the positions , and . the weak correlation intensity with a few peaks in the correlation distance may not make any periodic correlation structures . in table ii , there are also no periodic nucleotide strings for the almost non - periodic correlation structure .so the _ aquae _ genome almost has no periodic correlation structures .in summary , using the metric representation and the recurrence plot method , we have observed periodic correlation structures in bacterial and archaeal complete genomes .all basic periodic lengths in the periodic correlation structures are determined .on the basis of the periodic correlation structures , the bacterial and archaeal complete genomes , as classified into five groups , display four kinds of fundamental transferring characteristics : a single increasing period , several increasing periods , an increasing quasi - period and almost noincreasing period .the mechanism of the periodic correlation structures is further analyzed by determining all long periodic nucleotide strings in the bacterial and archaeal complete genomes and is explained as follows : both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structures with approximately the same increasing periods . 
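the way basic periodic lengths are read off the correlation-intensity profile (strong intensity at a distance and at its integer multiples, forming equidistant lines) can be sketched as follows; the median-based threshold and the number of harmonics checked are illustrative choices, not the criteria used in the paper.

```python
# sketch of extracting candidate basic periodic lengths from a correlation-
# intensity profile N, where N[d] is the intensity at correlation distance d.
import numpy as np

def basic_periodic_lengths(N, max_period=200, n_harmonics=3, factor=5.0):
    N = np.asarray(N, dtype=float)
    background = np.median(N[1:]) + 1e-9          # illustrative background level
    periods = []
    for T in range(2, max_period + 1):
        harmonics = [k * T for k in range(1, n_harmonics + 1) if k * T < len(N)]
        if len(harmonics) == n_harmonics and all(N[d] > factor * background
                                                 for d in harmonics):
            periods.append(T)
    # keep only fundamental periods: drop any T that is a multiple of a smaller hit
    return [T for T in periods
            if not any(T % S == 0 and S != T for S in periods)]
```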
in comparison with the complete genome of the saccharomyces cevevisiae yeast , it is found that the bacterial , archaeal and yeast complete genomes have the same four kinds of fundamental transferring characteristics of nucleotide strings .they choose preferably the basic periodic length or its double in the periodic correlation structures , even they do not have basic string lengths of long periodic nucleotide strings , which are equal to the basic periodic lengths .the basic periodic length was also found in the correlation analysis of the human genomes .although more and more biological functions of the junk dna in cells are found , the mystery of transposable elements in the whole genomes remains unraveled .the purpose of this work is to depict the genome structure in the bacterial and archaeal complete genomes and explain the genome dynamics in terms of nucleotide string transfer .the proposed periodic correlation structures with approximately the same increasing periods may have fundamental importance for the biological functions of the junk dna . *acknowledgments * we would like to thank the national science foundation for partial support through the grant no .11172310 and the imech / sccas shenteng 1800/7000 research computing facility for assisting in the computation .p. j. deschavanne , a. giron , j. vilain , g. fagot , and b. fertil , genomic signature : characterization and classification of species assessed by chaos game representation of sequences .* 16 * ( 1999 ) 1391 .p. katsaloulis , t. theoharis , w .- m .zheng , b .-hao , a. bountis , y. almirantis and a. provata , long - range correlations of rna polymerase ii promoter sequences across organisms , _ physica a _ * 366 * ( 2006 ) 308 .eckmann , s. o. kamphorst and d. ruelle , recurrence plots of dynamical systems , _ europhys ._ , * 5 * , ( 1987 ) 973 ; n. marwan , m. c. romano , m. thiel and j. kurths , recurrence plots for the analysis of complex systems , _ phys .* 438 * ( 2007 ) 237 . | the periodic transference of nucleotide strings in bacterial and archaeal complete genomes is investigated by using the metric representation and the recurrence plot method . the generated periodic correlation structures exhibit four kinds of fundamental transferring characteristics : a single increasing period , several increasing periods , an increasing quasi - period and almost noincreasing period . the mechanism of the periodic transference is further analyzed by determining all long periodic nucleotide strings in the bacterial and archaeal complete genomes and is explained as follows : both the repetition of basic periodic nucleotide strings and the transference of non - periodic nucleotide strings would form the periodic correlation structures with approximately the same increasing periods . * keywords * bacterial and archaeal complete genomes , periodic correlation structures , metric representation , recurrence plots + |
with the advances in acoustic communication technology, the interest in the study and experimental deployment of underwater networks has been growing. however, underwater acoustic channels impose many constraints that affect the design of wireless networks. they are characterized by a path loss that depends on both the transmission distance and the signal frequency, a feature that distinguishes an underwater acoustic system from a terrestrial radio system. thus, not only the power consumption, but also the useful bandwidth depends on the transmission distance. from an information-theoretic perspective, both the distance between two nodes and the required capacity determine the power consumption for that link and the optimal transmission band. it is thus of interest to have a simple, closed-form expression that relates the transmission power to the desired capacity. this would enable an efficient design of both point-to-point links and underwater networks, eventually leading to a minimum-cost overall network optimization. thus, these expressions may be useful from both a theoretic and an engineering standpoint. in this paper, simple closed-form approximations for the power consumption and operating frequency band as functions of distance and capacity are presented. this approximate model stems from an information-theoretic analysis that takes into account a physical model of acoustic propagation loss and colored gaussian ambient noise. it was shown in that the transmission power as a function of the distance could be well approximated by . a similar relationship was shown to exist for the operating bandwidth. the coefficients in this model were determined as functions of the required signal-to-noise ratio. the present work extends this idea of modeling the power and bandwidth as functions of distance, but the problem is cast into a slightly different framework. namely, instead of using the snr as a constraint, i.e. a fixed design parameter, the desired link capacity is used as a figure of merit. in a few words, this work proposes approximate models for the parameters as functions of the capacity. the resulting model is useful for a broad range of capacities and distances. the paper is organized as follows. in section 2, a model of an underwater channel is outlined. section 3 gives a brief description of the numerical evaluation procedure. in section 4, closed-form expressions for the parameters of interest are presented. section 5 gives numerical results for different ranges of distance and capacity. conclusions are summarized in the last section.
[figure: iterative procedure in which the band is incremented at each step until a stopping condition is fulfilled]
[figures: numerically computed parameters and the approximate model, for several parameter settings]
for lower frequencies ,the model is : the noise in an acoustic channel can be modeled through four basic sources : turbulence , shipping , waves , and thermal noise .the following formulas give the power spectral density ( psd ) of these noise components in db re pa per hz as a function of frequency in khz : where the shipping activity ranges from 0 to 1 , for low and high activity , respectively , and corresponds to the wind speed measured in m / s .the overall psd of the ambient noise is given by let us assume that this is a gaussian channel .then , the capacity of this channel can be obtained using the waterfilling principle .also , assume that the power and band of operation can be adjusted to reach a certain capacity level .thus , the capacity of a point - to - point link is where is the optimum band of operation .this band could be thought of as a union of non - overlapping intervals , ] .the power consumption associated with a particular choice of is given by where .evidently , these expressions are quite complicated to be used in a computational network analysis .also , they provide little insight into the relationship between power consumption , and , in terms of the pair .this motivates the need for an approximate model that will represent these relations for ranges of and that are of interest to acoustic communication systems .the model should also provide flexibility to changing other parameters , such as the spreading factor , wind speed and shipping activity .the dependence on the spreading factor is quite simple .let us assume that a model for has been developed for a particular value of , i.e. . to determine for ,let us note that for a change in , the product constitutes a constant scaling factor with respect to .therefore , for a link of distance the term will remain unchanged .thus , if the same capacity is required for and , equation shows that the only other term that can vary is , i.e. .then , .finally , let us use the equation to determine the relationship between and .the dependence on the spreading factor is quite simple .let us assume that a model for has been developed for a particular value of , i.e. . to determine for .note that for a change in , the product constitutes a constant scaling factor with respect to .therefore , for a link of distance the term will remain unchanged .thus , if the same capacity is required for and , equation , shows that the only other term that can vary is , i.e. .then , .finally , let us use equation to determine the relation between and . thus , any model for the transmission generated for some parameter has a simple extension . also , note that the transmission bandwidth remains the same for any value of .and for and approximate model. ] , , and , width=336,height=336 ] and for and approximate model. ],, and , width=336,height=336 ] a numerical evaluation procedure similar to that in is used to compute the value of , and , for a region of values of .the procedure starts by fixing a target value of the capacity .then , for each distance , the initial value of is set to the minimum value of the product , i.e. . the frequency at which this occurs , i.e. , is called the optimal frequency . after this , is increased iteratively by a small amount ( figure [ anplot.tag ] ) , until the target capacity value is met .finally , this procedure is repeated for each value of in a range of interest . 
at the -th step of the procedure ,when is increased by a small amount , the band is determined for that iteration .this band is defined as the range of frequencies for which the condition .then , the capacity is numerically determined for the current and , using the equation ( 9 ) . if , a new iteration is performed . otherwise , the procedure stops .by applying the above procedure for varying and , one arrived at the complete model for the power consumption where below , two ranges of operation were studied .the first one is for km ] , and propagation factor of , which will be called case 1 from here on .the second one is for kms ] , and propagation factor of , which will be called case 2 . for both regions and different ranges of and ,the power consumption can be approximated by equations , and .similar model are found to provide a good fit for the high / end frequency and for the bandwidth .these models are given by where transmission power , highest frequency and bandwidth of transmission band were computed for a variety of values of , and two ranges of interest of the pair , i.e. kms ] , and kms ] .the models proposed fitted these cases quite well .results are presented for the case of , and , for both cases . also for case 1, it will be seen that the and parameters show almost no dependence on the shipping activity factor , especially if the wind speed is .thus , the approximate model for this case could be simplified to only consider as part of the model , instead of the pair .as function of . ],, , width=336,height=403 ] as function of . ],, , width=336,height=403 ] figures [ figplc_k_15_lowc.tag ] , [ fhighlc_k_15_lowc.tag ] and [ blc_k_15_lowc.tag ] show parameters and for , , and , respectively .this approximation was carried out for the first case with a propagation factor of , a shipping activity of and a wind speed of .the values of s and s are shown in table [ table_a1_lowc ] and [ table_a2_lowc ] , for parameters and , respectively .these tables also show the mean square error ( mse ) of the approximation with respect to the actual parameters . in figure[ blc_k_15_lowc.tag ] , there is a considerable variation in the values of parameter .however , note that the y - axis of the plot shows very little variation .[ table_a1_lowc ] . approximation parameter values for , and , with ],, and [ cols="^,^,^,^,^,^",options="header " , ]this paper offers an insight into the dependence of the transmission power , bandwidth , and the band - edge frequency of an underwater acoustic link on the capacity and distance .it provides closed - form approximate models for the time - invariant acoustic channel , taking into account a physical model of acoustic path loss and the ambient noise , assuming that the channel is gaussian .these approximate models where shown to provide a good fit to the actual empirical values by numerical evaluation for different ranges of distance and capacity , as well as noise profiles corresponding to different shipping activity factor and wind speed .the band - edge frequency and the bandwidth were also shown to be invariant to the spreading factor , while the power scales as . 
for a certain range of values ( l , c ) , the approximate model of shown to be almost independent of the shipping activity factor while having a marked dependency on the wind speed .this dependence , however , is quite smooth and could be approximated by a simple model , thus resulting in a complete model for the for a range of values ( l , c ) that is of interest to a typical underwater communciation system .hence , these models can be used in network optimization problems to determine the optimal power consumption for some required data rate .future work will focus on studying convexity properties of the model and using it in network optimization problems .this work was supported in part by the nsf grants # 0520075 and onr muri grant # n00014 - 07 - 1 - 0738 , and darpa bae systems national security solution , inc .subcontract # 060786 . 1 partan , j. , kurose , j. , levine , b. n. , `` a survey of practical issues in underwater networks '' , in proc .wuwnet 06 , pp .17 - 24 , los angeles , sept .2006 stojanovic , m. , `` on the relationship between capacity and distance in an underwater acoustic communication channel '' , in proc .wuwnet 06 , pp .41 - 47 , los angeles , sept . 2006 | the underwater acoustic channel is characterized by a path loss that depends not only on the transmission distance , but also on the signal frequency . as a consequence , transmission bandwidth depends on the transmission distance , a feature that distinguishes an underwater acoustic system from a terrestrial radio system . the exact relationship between power , transmission band , distance and capacity for the gaussian noise scenario is a complicated one . this work provides a closed - form approximate model for 1 ) power consumption , 2 ) band - edge frequency and 3 ) bandwidth as functions of distance and capacity required for a data link . this approximate model is obtained by numerical evaluation of analytical results which takes into account physical models of acoustic propagation loss and ambient noise . the closed - form approximations may become useful tools in the design and analysis of underwater acoustic networks . |
we consider the fowler equation : ( x ) - \partial_{x}^2 u(t , x ) = 0,\quad x \in \r , t>0 , \\ &u(0,x ) = u_0(x),\quad x \in \r , \end{aligned } \right .\label{fowlereqn5}\ ] ] where represents the dune height and is a nonlocal operator defined as follows : for any schwartz function and any , ( x ) : = \int_{0}^{+\infty } |\xi|^{-\frac{1}{3 } } \varphi''(x-\xi ) \ ,d\xi . \label{nonlocalterm4}\ ] ] we refer to for theoretical results on this equation .the nonlocal term is anti - diffusive .indeed , it has been proved in that \right)(\xi ) = - 4 \pi^2 \gamma\left(\frac{2}{3}\right ) \left(\frac{1}{2}-i \ , \mbox{sgn}(\xi ) \frac{\sqrt{3}}{2}\right)|\xi|^{4/3 } , \ ] ] where denotes the fourier transform normalized in .thus , can been seen as a fractional power of order of the laplacian , with the `` bad '' sign .it will be clear from the analysis below that our results can easily be extended to the case where is replaced with a fourier multiplier homogeneous of degree ,2[ ] for all , from . we will denote by ; maps to itself .duhamel s formula for the continuous problem reads where is the kernel of the operator , and is defined by where are positive constants .recently , to solve the fowler equation some numerical experiments have been performed using mainly finite difference approximation schemes .however , these schemes are not effective because if we opt for an explicit scheme , numerical stability requires that the time step is limited by and , if we choose an implicit scheme , we have to solve a large system which is a computationally expensive operation .thus , the splitting method becomes an interesting alternative to solve the fowler model . to our knowledge, there is no convergence result in the literature for the splitting method associated to the fowler equation .this method is more commonly used to split different physical terms , such as reaction and diffusion terms , see for instance .splitting methods have also been employed for solving a wide range of nonlinear wave equations .the basic idea of this method is to decompose the original problem into sub - problems and then to approximate the solution of the original problem by solving successively the sub - problems .various versions of this method have been developed for the nonlinear schrdinger , korteweg - de - vries and modified korteweg - de - vries equations , see for instance . + for the fowler model , we consider , separately , the linear cauchy problem - \eta \ , \partial_{x}^2 v = 0 ; \quad v(0,x ) = v_0(x ) , \label{nlocal}\ ] ] and the nonlinear cauchy problem where are fixed _ positive _ parameters such that . equation is simply the viscous burgers equation .we denote by and , respectively , the evolution operator associated with and : where with where is the heat kernel defined by furthermore , the following -estimate holds let us explain the choice of this decomposition .first , we can remark that if we do not consider the nonlinear term in , the analytical solutions are available using the fourier transform .thus , the linear part may be computed efficiently using a fast fourier transform ( hereafter fft ) algorithms .note also that the laplacian and the fractional term can not be treated separately .indeed , the equation = 0 ] , which is the only term that we can not estimate in ( * ? ? 
?* theorem 1 ) .we finally point out that we use smoothing effects associated to the viscous burgers equation ; see corollaries [ cor : condest ] and [ cor : split1 ] .we motivate this choice by the presence of artificial diffusion in classical numerical schemes used to solve the convection equations .an alternative to reduce this effect is to consider numerical schemes of high order which are usually computationally expensive and do not seem to be very useful for the fowler model because of the diffusion term .we consider the lie formula defined by the alternative definition could be studied as well , leading to a similar result . also , the following evolution operators corresponding to the strang method could be considered .following the computations detailed in the present paper for the case , it would be possible to show that the other lie formula generates a scheme of order one , and to prove that the strang method is of order two ( for smooth initial data ) , in the same fashion as in , e.g. , .this fact is simply illustrated numerically in section [ numerique ] , to avoid a lengthy presentation . with by , our main result is : [ theoreme1 ] for all and for all , there exist positive constants and such that for all ,\delta t_0] ] , and . it will follow from lemma [ lemme8 ] that } \|s^t u_0\|_{h^2(\r)}\le c_t(\|u_0\|_{h^1(\r)})\|u_0\|_{h^2(\r)},\ ] ] for some nonlinear ( increasing ) function depending on . in this paper , we begin by estimating the -stability for error propagation .we next prove that the local error of the lie formula is an approximation of order two in time .finally we prove that this evolution operator represents a good approximation , of order one in time , of the evolution operator in the sense of theorem [ theoreme1 ] .+ furthermore , we apply lie and strang approximations in order to make some numerical simulations using the split - step fourier experiments . +this paper is organised as follows . in the next sectionwe give some properties related to the kernels and , and we prove two fractional gronwall lemmas . in section [ sectionlemme ] , we establish some estimates on , , and . in section [ errorlocal ] , we prove a local error estimate .theorem [ theoreme1 ] is proved in section [ sec : proof ] .we finally perform some numerical experiments which show that the lie and strang methods have a convergence rate in and , respectively. * notations . * + - we denote by the fourier transform of which is defined by : for all , we denote by its inverse .+ - we denote by a generic constant , strictly positive , which depends on parameters , and . is assumed to be a monotone increasing function of its arguments .we begin by recalling the properties of kernels involved in the present analysis .[ kernel2 ] the kernel satisfies : 1 . , and ,\infty[\times \r\right) ] , . such that for all ,t] ] .2 . , .3 . , . such that for all , . such that for all , .[ remarkconvolution ] the kernel of has similar properties to the kernel .moreover , for all , we have [ lemme1 ] let \to \r_+ ] such that for all ] be a bounded measurable function and be a polynomial with positive coefficients and no constant term .we assume there exists two positive constants and ,1[ ] , then there exists such that for all ] , this completes the proof .in this paragraph , we collect several estimates concerning the convolutions with , and , which will be useful in the estimates of the local error of the scheme .[ estimnonlocal ] let and . 
then \in h^{s-4/3}(\r) ] , the existence and uniqueness part being standard , we focus on the estimates . the estimate yields ( formally , multiply by and integrate ) and the estimate ( differentiate with respect to , multiply by and integrate ) , the estimate shows that the map is non - increasing : an integration by parts and cauchy schwarz inequality then yield in order to take advantage of the smoothing effect provided by the viscous part , integrate in time and write gagliardo nirenberg inequality yields so using , we infer : in view of hlder inequality in the last integral in time , where we have used young inequality .we infer gagliardo nirenberg inequality now yields where we have used , hlder inequality and , successively .integrate the estimate with respect to time , and now discard the viscous part whose contribution is non - negative : the first part of the proposition then follows from the gronwall lemma . to complete the proof of the proposition , we use the general estimate , for : set . applying to yields integrating by parts , the last term is non - positive , since write integrating by parts the first term yields in view of kato - ponce estimate we have ( with and ) leaving out the viscous term , gronwall lemma yields the _ a priori _estimate where depends only on .in particular , for , gronwall lemma implies , where .we bootstrap , thanks to gagliardo nirenberg inequality again : therefore , for ] , the first point is a direct consequence of the relation and lemma [ lemme3 ] .+ the second point is readily established with lemma [ lemme3 ] and corollary [ cor : condest ] . [ estimapriori2 ]let and . then , the unique mild solution ; l^{2}(\r))\cap c(]0,t ] ; h^{2}(\r)) ] where . multiplying by and integrating with respect to the space variable, we get : -u_{xx}\right)u \ : dx = 0\ ] ] because the nonlinear term is zero .using and the fact that and -\partial_{xx}^2 u ) u \ , dx ] satisfies the proof is similar to the one given in lemma [ estimnl1 ] .differentiating the duhamel formula in space , we have using young inequality and proposition [ kernel2 ] , we infer , for any integer : in view of proposition [ kernel2 ] , this implies : for , we use lemma [ estimapriori2 ] to have the fractional gronwall lemma [ lemme1 ] with then yields where depends only on and . from, this implies the lemma in the case . 
for , leibniz rule and cauchy schwarz inequality yield the lemmathen easily follows by induction on .we will also need the fact that the flow map is uniformly lipschitzean on balls of .[ prop : lipschitz ] let .there exists such that if then .\ ] ] set , and .it solves -\partial_{xx}^2 w = v\d_x v - u\d_x u = -u\d_x w - w\d_x v.\ ] ] the energy estimate yields : where the term -\partial_{xx}^2 w\) ] , from the definition of and remark [ remarkconvolution ] , we have thus , from duhamel formula for the fowler equation and the lie formula , we have : where the remainder is written as then , from proposition [ kernel2 ] , corollary [ cor : split1 ] and lemma [ estimapriori2 ] , we have , for ] ) in the same way , from lemma [ lemmechaleur ] and corollary [ cor : condest ] , we control the term as from lemma [ lemme3 ] and corollary [ cor : condest ] , for the term , write by linearity of the evolution operator , we have hence now from sobolev embedding , lemma [ lemme3 ] and corollary [ cor : condest ] , we get : finally , since then for , and by integration for ] is discretized by equidistant points , with spacing .the spatial grid points are then given by , .if denotes the approximate solution to , the discrete fourier transform of the sequence is defined by for , and the inverse discrete fourier transform is given by for . here denotes the discrete fourier transform and its inverse . in what follows , the linear equation is solved using the discrete fourier transform and time marching is performed exactly according to to approximate the viscous burgers equation , we use the following explicit centered scheme : + \varepsilon \ , \delta t\frac{u_{j+1}^n-2 u^n_j + u_{j-1}^n } { \delta x^2},\ ] ] which is stable under the cfl - peclet condition where is an average value of in the neighbourhood of ) . in the case where the linear sub - equation is solved using a finite difference scheme instead of a fft computation ,an additional stability condition is required , see . moreover, the computation time becomes very long because of the discretization of the nonlocal term which is approximated using a quadrature rule .indeed , in , the fowler equation has been discretized using finite difference method and the numerical analysis showed that this operation is computationally expensive .this observation has also motivated the use of splitting methods , in particular the implementation of split - step fourier methods . in order to avoid numerical reflections due to boundaries conditions and to justify the use of the fft method , we consider initial data with compact support displayed in figure [ cinitiale ] to perform numerical simulations . since we do not know the exact solution of the fowler equation , a classical numerical way to determine the convergence numerical order of schemes is to plot the logarithm of the error in function of the logarithm of the step time , where and are computed for time steps and , respectively , up to the final time .hence , the numerical order corresponds to the slope of the curve , see figures [ ordcvglie ] , [ ordcvgstrang ] . for reference , a small line of slope one ( resp .two ) is added in figure [ ordcvglie ] ( resp . 
[ ordcvgstrang ] ) . we see that the slopes for the three initial data match well and so we can conclude that the numerical simulations are consistent with the theoretical results established above . + we also study the numerical convergence of the strang splitting using the initial data displayed in figure [ cinitiale ] . results are plotted in figure [ ordcvgstrang ] . we can see that the strang formulation is of order two in time for smooth initial data . _ the lie - trotter splitting for nonlinear evolutionary problems with critical parameters . a compact local error representation and application to nonlinear schrödinger equations in the semi - classical regime _ , ima j. numer . anal . ( 2012 ) , to appear . | we consider a nonlocal scalar conservation law proposed by andrew c. fowler to describe the dynamics of dunes , and we develop a numerical procedure based on splitting methods to approximate its solutions . we begin by proving the convergence of the well - known lie formula , which is an approximation of the exact solution of order one in time . we next use the split - step fourier method to approximate the continuous problem using the fast fourier transform and the finite difference method . our numerical experiments confirm the theoretical results .
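as a companion to the split - step fourier discretization described above, here is a minimal python sketch of the lie splitting for the fowler equation on a periodic box. it assumes ( i ) the fourier symbol of the nonlocal term quoted in the introduction, with the 2-pi-in-the-exponent normalization matching numpy's fftfreq convention, ( ii ) that the laplacian is shared between the two sub-steps with eta + eps = 1 and eta = eps = 1/2 ( the text only states that the two parameters are positive and fixed ), and ( iii ) a compactly supported initial datum so that periodic boundary conditions are harmless, as in the experiments above. grid size, time step and the initial profile are illustrative, not the paper's.

```python
import numpy as np
from math import gamma, pi, sqrt

def fowler_lie_splitting(L=20.0, N=1024, dt=2e-4, nsteps=5000, eta=0.5, eps=0.5):
    # lie splitting for u_t + (u^2/2)_x + I[u] - u_xx = 0 on a periodic box of length L
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    dx = x[1] - x[0]
    xi = np.fft.fftfreq(N, d=dx)                # matches F(u)(xi) = int e^{-2 i pi x xi} u(x) dx
    psi = -4*pi**2*gamma(2/3)*(0.5 - 1j*np.sign(xi)*sqrt(3)/2)*np.abs(xi)**(4/3)
    lin = np.exp(dt*(-psi - eta*4*pi**2*xi**2))  # exact propagator of v_t + I[v] - eta v_xx = 0
    u = np.exp(-x**2)*(np.abs(x) < 4.0)          # compactly supported initial profile (illustrative)
    for _ in range(nsteps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        # explicit centered scheme for the viscous burgers sub-problem (cfl-peclet restricted)
        u = u - dt*(up**2 - um**2)/(4*dx) + eps*dt*(up - 2*u + um)/dx**2
        # exact linear sub-step computed with the fft
        u = np.real(np.fft.ifft(lin*np.fft.fft(u)))
    return x, u

x, u = fowler_lie_splitting()
print(float(u.max()), float(u.min()))
```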
precoding and power control are well studied strategies that support high spectral efficiency in wireless network with multiple antenna transceivers , when channel state information ( csi ) is available at the transmitters .several system - wide objective functions have been considered in the literature for precoding and power control optimization of broadcast channels ( bcs ) and interference channels ( ics ) .some of these problems are convex , for example power minimization or sinr balancing for the multiple - input single - output ( miso ) bc , and thus solvable with standard techniques in reasonable ( polynomial ) time . however ,in general , the problems at hand are non - convex .unlike convex problems , non - convex problems typically do not afford efficient ( i.e. , polynomial - time ) algorithms that are able to achieve global optimality .for example , it is known that the weighted sum - rate maximization ( wsrm ) in parallel ic channels , where interference from other users is treated as noise ( a non - convex problem ) is np - hard ( this result extends also to bc as a special case ) . in this paper, we address the global minimization of a system - wide , in general non - convex , cost function with respect to the transmit covariance matrices . among global techniques , branch - and - bound ( bb ) algorithmsare methods to solve general non - convex problems , producing an -suboptimal feasible point .bb methods have been already introduced to solve non - convex power control problems , although so far only multi - user single - input single - output systems have been addressed in and references therein . in this paper, we propose a novel bb framework for global optimization of a problem formulation that includes , for instance , miso bc and ic wsrm with general convex power constraint .the proposed bb approach is based on the observation that a fairly general set of cost functions that arise in communication s problems , albeit non - convex , possess a _ partly convex - monotone _ structure .this structure is satisfied whenever one can identify a suitable set of interference functions , for which the following hold : ( _ i _ ) the cost function is _ convex _ in the transmit covariance matrices once the interference functions are fixed ; ( _ ii _ ) the cost function is _monotone _ in the interference functions .we design the bb scheme to exploit the _ partly convex - monotone _ structure of the problem .branching is performed in a reduced space ( of the size of the set of all feasible interference level vectors ) , instead of the original feasible ( of the size of the set of all feasible covariance matrices ) .bounding is efficiently carried out by solving only convex optimization problems .in addition to the reduced - space bb method , we propose a suboptimal algorithm that attains quasi - optimal performance with polynomial complexity .this algorithm reduces to the distributed pricing scheme of , when applied to sum - rate maximization problems .numerical results are provided to compare the global optimal solution based on bb , the suboptimal ( pricing ) technique and the nonlinear dirty - paper coding scheme ._ notation _ : the boldface is used to denote matrices ( uppercase ) and vectors ( lowercase ) ; and denote the transpose and the hermitian transpose , respectively ; denotes the trace of a matrix ; ] , and the vector inequality means that _ { l}\leq\left [ \mathbf{y}\right ] _ { l} ] . 
while model ( [ eq : model ] )accounts for an ic , a bc can be obtained as a special case by setting .due to multi - user interference , the system performance depends on the transmission strategy of every user , i.e. , on the set of covariance matrices .we consider the minimization with respect to of a system - wide cost function ( to be defined below ) under a general convex set constraints : by defining a set of auxiliary variables , problem ( [ eq : f(q_w ) ] ) can be recast in the equivalent form the equivalence means that if is a solution to ( [ eq : f(q_w ) ] ) , then is a solution to ( [ eq : problem_auxiliar_i ] ) .conversely , if is a solution to ( [ eq : problem_auxiliar_i ] ) , then is a solution to ( [ eq : f(q_w)]).we further make the following assumptions : 1 .the interference levels are given by the real vector function , _ affine _ with respect to , that is bounded in the -dimensional rectangle \subset\mathbb{r}^{l} ] for ) .for instance , we typically have and the interference level at the -th receiver reads _ { k}=\sum\nolimits_{j=1,j\neq k}^{k}\mathbf{\mathbf{h}}_{jk}\mathbf{q}_{j}\mathbf{\mathbf{h}}_{jk}^{h} ] for fixed ; _ convex _ with respect to for fixed ] and , problem ( [ eq : ic wsrm ] ) is recast into . also , ] , meaning that _ { l}\leq\left [ \mathbf{\mathbf{c}}\right ] _ { l}\leq\left [ \mathbf{b}\right ] _ { l} ] , a _ lower bound _ is evaluated by solving the following problem: thanks to assumptions ( a1-a3 ) two fundamental results can be verified : ( _ i _ ) problem ( [ eq : pbeta ] ) is _ convex _ since the cost function is convex for a fixed and the constraints form a convex set , ( _ ii _ ) using standard convex optimization arguments , it can be shown that this bounding procedure satisfies the natural condition: moreover , denoting with the optimal solution of problem ( [ eq : pbeta ] ) , a valid upper bound is obtained by evaluating the function at , i.e. , .finally , the algorithm checks if the prescribed accuracy is met ( i.e. , if ) otherwise it goes back to the _ branching procedure_. here we proves convergence of the proposed bb algorithm .[ lemma 1 ] the proposed bb algorithm ( which is performed in the reduced space spanned by interference levels / variable ) , is _ convergent _ to a global optimal solution of problem .as explained above , since the chosen bounding procedure satisfyies ( [ eq : bound condition ] ) , the bb algorithm generates a sequences of partition sets collapsing to a point ( recall that is the rectangle selected for splitting at the -th branching iteration ) . in order to prove convergence we need to show that , as the size of rectangle gets smaller , is also sufficiently small .the proof follows standard arguments .this is shown in appendix .considering the bc wsrm scenario ( see sec.[marker : example utility ] ) , for a given interval ] , we define the function as the result of the following constraint optimization problem: using this notation , the lower bound in ( [ eq : pbeta ] ) is given by , while an upper bound is given by .from jointly - continuity of the function with respect to ( assumption a2 ) and from the definition of , we have that is continuous in the norm of ( i.e. , ) .it follows that also and will result continue in , thus it holds concluding the proof .9 f. rashid - farrokhi , k. r. liu , and l. tassiulas , transmit beamforming and power control for cellular wireless systems , in _ ieee journal on selected areas of communications _ , vol .14371450 , oct .1998 .v. balakrishnan , s. boyd , and s. 
balemi . branch and bound algorithm for computing the minimum stability degree of parameter - dependent linear systems . _ journal of robust and nonlinear control _ , 1(4):295 - 317 , october - december 1991 . y. xu , t. le - ngoc , and s. panigrahi , global concave minimization for optimal spectrum balancing in multi - user dsl networks , _ ieee trans . on signal processing _ , 56 , no . 7 , pp . 2875 - 2885 , jul . | utility ( e.g. , sum - rate ) maximization for multiantenna broadcast and interference channels ( with one antenna at the receivers ) is known to be in general a non - convex problem , if one limits the scope to linear ( beamforming ) strategies at transmitter and receivers . in this paper , it is shown that , under some standard assumptions , most notably that the utility function is decreasing with the interference levels at the receivers , a global optimal solution can be found with reduced complexity via a suitably designed branch - and - bound method . although infeasible for real - time implementation , this procedure enables a non - heuristic and systematic assessment of suboptimal techniques . in addition to the global optimal scheme , a real - time suboptimal algorithm , which generalizes the well - known distributed pricing techniques , is also proposed . finally , numerical results are provided that compare global optimal solutions with suboptimal ( pricing ) techniques for sum - rate maximization problems , affording insight into issues such as the robustness against bad initializations in real - time suboptimal strategies . nonconvex optimization , branch - and - bound , interference channel , multiple - input single - output channel
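the reduced-space branch-and-bound scheme described above is easy to prototype once the convex bounding problem is wrapped in a callable. the sketch below implements only the generic machinery ( rectangle splitting along the longest edge, a best-first queue, pruning against the incumbent ); the two callables stand in for the convex lower-bound problem and for the evaluation of the true cost at the feasible point it returns, since the actual beamforming problem is not reproduced here. the toy cost at the bottom and its crude lipschitz-type bound are purely illustrative.

```python
import heapq
import numpy as np

def branch_and_bound(lower_bound, upper_bound, box, eps=5e-2, max_iter=20000):
    # lower_bound(a, b) -> (phi_lb, beta): convex bounding problem on the rectangle [a, b]
    # upper_bound(beta) -> true cost at the feasible point associated with beta
    a, b = map(np.asarray, box)
    lb, beta = lower_bound(a, b)
    best_val, best_beta = upper_bound(beta), beta
    heap = [(lb, 0, a, b)]                       # smallest key in the heap = global lower bound
    counter = 1
    for _ in range(max_iter):
        if not heap or best_val - heap[0][0] <= eps:
            break
        _, _, a, b = heapq.heappop(heap)
        j = int(np.argmax(b - a))                # branch: bisect the longest edge
        mid = 0.5*(a[j] + b[j])
        for lo, hi in ((a[j], mid), (mid, b[j])):
            aa, bb = a.copy(), b.copy()
            aa[j], bb[j] = lo, hi
            lb_child, beta_child = lower_bound(aa, bb)
            val_child = upper_bound(beta_child)
            if val_child < best_val:
                best_val, best_beta = val_child, beta_child
            if lb_child < best_val - eps:        # prune boxes that cannot improve the incumbent
                heapq.heappush(heap, (lb_child, counter, aa, bb))
                counter += 1
    return best_beta, best_val

# toy non-convex separable cost standing in for the (negative) utility
f = lambda t: np.sum(np.sin(3*t) + 0.1*t**2)
lb_fun = lambda a, b: (f(0.5*(a + b)) - 3*np.sum(b - a), 0.5*(a + b))   # loose lipschitz bound
print(branch_and_bound(lb_fun, f, (np.zeros(2), 4*np.ones(2))))
```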
chemical networks in interstellar clouds consist of gas - phase and grain - surface reactions .reactions that take place on dust grains include the formation of molecular hydrogen as well as reaction networks producing ice mantles and various organic molecules . unlike gas phase reactions in cold clouds that mainly produce unsaturated molecules , surface processes are dominated by hydrogen - addition reactions that result in saturated , hydrogen - rich molecules , such as h , ch , nh and ch .in particular , recent experiments show that methanol can not be efficiently produced by gas phase reactions . on the other hand ,there are indications that it can be efficiently produced on ice - coated grains .therefore , the ability to perform simulations of the production of methanol and other complex molecules on grains is of great importance . unlike gas - phase reactions , simulated using rate equation models , grain - surface reactions require stochastic methods such as the master equation , or monte carlo ( mc ) simulations .this is due to the fact that under interstellar conditions , of extremely low gas density and sub - micron grain sizes , surface reaction rates are dominated by fluctuations which can not be accounted for by rate equations .a significant advantage of the master equation over mc simulations is that it consists of differential equations , which can be easily coupled to the rate equations of gas - phase chemistry .furthermore , unlike mc simulations that require the accumulation of statistical information over long times , the master equation provides the probability distribution from which the reaction rates can be obtained directly .however , the number of equations increases exponentially with the number of reactive species , making the simulation of complex networks infeasible .the recently proposed multi - plane method dramatically reduces the number of equations , by breaking the network into a set of fully connected sub - networks , enabling the simulation of more complex networks .however , the construction of the multi - plane equations for large networks turns out to be difficult . in this letterwe introduce a method based on moment equations which exhibits crucial advantages over the multi - plane method . the number of equations is further reduced to the smallest possible set of stochastic equations , including one equation for the population size of each reactive specie ( represented by a first moment ) and one equation for each reaction rate ( represented by a second moment ) .thus , for typical sparse networks the complexity of the stochastic simulation becomes comparable to that of the rate equations .unlike the master equation ( and the multi - plane method ) there is no need to adjust the cutoffs - the same set of equations applies under all physical conditions . unlike the multi - plane equations , the moment equations are linear and for steady state conditionscan be easily solved using algebraic methods . moreover , for any given network the moment equations can be easily constructed using a diagrammatic approach , which can be automated .to demonstrate the method we consider a simple network , shown in fig . [ fig:1 ] , that involves three reactive species : h and o atoms and oh molecules . for simplicitywe denote the reactive species by h , o , oh , and the non - reactive product species by h , o , h .the reactions that take place in this network include h + o oh ( ) , h + h h ( ) , o + o o ( ) , and h + oh h ( ) . 
consider a spherical grain of diameter , exposed to fluxes of h and o atoms and oh molecules .the cross - section of the grain is and its surface area is .the density of adsorption sites on the surface is denoted by ( sites ) .thus , the number of adsorption sites on the grain is .the desorption rates of atomic and molecular species from the grain are given by ] , where is the activation energy for hopping of atoms ( or molecules ) .here we assume that diffusion occurs only by thermal hopping , in agreement with experimental results .for small grains it is convenient to define the scanning rate , , which is approximately the inverse of the time it takes an atom to scan the surface of the entire grain .the master equation provides the time derivatives of the probabilities that on a random grain there will be adsorbed atoms / molecules of the reactive specie .it takes the form \nonumber \\ & & + \sum_{i=1}^3 w_i \left [ ( n_i+1 ) p( .. ,n_i+1 , .. ) - n_i p(n_1,n_2,n_3 ) \right ] \nonumber \\ & & + \sum_{i=1}^2 a_i [ ( n_i+2)(n_i+1 ) p( .. ,n_i+2 , .. ) - n_i(n_i-1 ) p(n_1,n_2,n_3 ) ] \nonumber\\ & & + ( a_1+a_2 ) [ ( n_1 + 1)(n_2 + 1 ) p(n_1 +1,n_2 + 1,n_3 - 1 ) - n_1 n_2 p(n_1,n_2,n_3 ) ] \nonumber \\ & & + ( a_1+a_3 ) [ ( n_1 + 1)(n_3 + 1 ) p(n_1 + 1,n_2,n_3 + 1 ) - n_1 n_3 p(n_1,n_2,n_3 ) ] .\label{eq : master } \end{aligned}\ ] ] the terms in the first sum describe the incoming flux , where ( atoms s ) is the flux _ per grain _ of the specie .the second sum describes the effect of desorption .the third sum describes the effect of diffusion mediated reactions between two atoms of the same specie and the last two terms account for reactions between different species .the rate of each reaction is proportional to the number of pairs of atoms / molecules of the two species involved , and to the sum of their scanning rates .the moments of are given by , where are integers .in particular , is the average population size of the specie on a grain .the production rate per grain , ( molecules s ) , of molecules produced by the reaction is given by , or by in case that .in numerical simulations the master equation must be truncated in order to keep the number of equations finite .this can be done by setting upper cutoffs , on the population sizes , where is the number of reactive species .however , the number of coupled equations , , grows exponentially with the number of reactive species .this severely limits the applicability of the master equation to interstellar chemistry . to reduce the number of equations one tries to use the lowest possible cutoffs under the given conditions . in any case , to enable all reaction processes to take place , the cutoffs must satisfy for species that form homonuclear diatomic molecules ( h,o , etc . ) and for other species .the average population sizes of the reactive species and the reaction rates are completely determined by all the first moments and selected second moments of the distribution .therefore , a closed set of equations for the time derivatives of these first and second moments could provide complete information on the population sizes and reaction rates . for the simple network considered here one needs equations for the time derivatives of the first moments , and and of the second moments , , and [ nodes and edges , respectively , in the graph shown in fig .[ fig:1 ] ] .such equations are obtained by taking the time derivative of each moment and using eq .( [ eq : master ] ) to express the time derivatives of the probabilities . 
herewe show two of the resulting moment equations : in these equations , the time derivative of each moment is expressed as a linear combination of several other moments .however , the right hand sides of these equations include third order moments for which we have no equations . in order to close the set of moment equationswe must express the third order moments in terms of first and second order moments .this can be done by imposing the following constraint on the master equation : at any given time , at most two atoms or molecules can be adsorbed simultaneously on the surface .furthermore , these two atoms or molecules must be from species that react with each other .the resulting cutoffs allow only eight non - vanishing probabilities , namely , , , , , , , and .the third moments in eq .( [ eq : moment1 ] ) can now be expressed in terms of these non - vanishing probabilities , giving rise to the following rules : ( a ) ; ( b ) ; ( c ) . using these rules , which are general and apply to any network of binary reactions, one can modify eqs .( [ eq : moment1 ] ) , and obtain a closed set of the form where , and if and 0 otherwise .this set includes one equation that accounts for the population size of each reactive species and one equation that accounts for the rate of each reaction . although these equations were derived using strict cutoffs , that are expected to apply only in the limit of very small grains and low flux , they provide accurate results for a very broad range of conditions .the point is that once the set of moment equations is derived , the probabilities do not appear anymore , so the constraint is not explicitly enforced .in fact , the equations maintain their accuracy even when the populations sizes of the reactive species are well beyond the constraints imposed above . in fig .[ fig:1](a ) we present the population sizes of h ( ) and o ( ) atoms on a grain , vs. grain diameter , obtained from the moment equations . in fig .[ fig:1](b ) we present the production rates of h ( squares ) , o ( triangles ) and h ( circles ) vs. grain diameter , obtained from the moment equations .the results are in excellent agreement with the master equation ( solid lines ) . in the limit of large grainsthey also coincide with the rate equations ( dashed lines ) .note that the moment equations apply even when there are as many as 10 hydrogen atoms on a grain .the parameters used in the simulations are ( sites ) , ( atoms s ) , and . the activation energies for diffusion and desorption were taken as , , , , and mev .the grain temperature was .the parameters used for hydrogen are the experimental results for low density amorphous ice . for the other speciesthere are no concrete experimental results , and the values reflect the tendency of heavier species to bind more strongly .the fluxes and grain temperatures are suitable for dense molecular clouds .consider the case in which a flux of co molecules is added to the network .this gives rise to the network shown in fig .[ fig:2 ] , which includes the following sequence of hydrogen addition reactions : h + co hco , h + hco h , h + h h and h + h ch .two other reactions that involve oxygen atoms also take place : o + co co and o + hco co + h. this network was studied before using the multiplane method , which required about a thousand equations compared to about a million equations in the master equation with similar cutoffs .the moment equations include one equation for each node and one for each edge , namely , the network shown in fig . 
[ fig:2 ] requires only 17 equations .we have performed extensive simulations of this network using the moment equations and found that they are in excellent agreement with the master equation . in fig .[ fig:2](a ) we present the moment - equation results for the population sizes of h , o and co on a grain vs. grain diameter for the methanol network . in fig .[ fig:2](b ) we present the moment - equation results for the production rates per grain of some of the final products of the network .the results are in excellent agreement with the master equation ( solid lines ) , and coincide with the rate equations ( dashed lines ) for large grains .the activation energies for diffusion and desorption of co ( ) , hco ( ) , h ( ) and h ( ) were taken as , , , , , , and mev .the flux of co molecules was taken as .note that experiments on co desorption from ice surfaces indicate that should be slightly higher than the value used here .however , it turns out that using a higher value would compromise the feasibility of the master equation simulations , which are required here in order to establish the validity of the moment equation method .when simulating highly complex networks one encounters the problem of obtaining the equations themselves .the ability to automate the construction of the equations becomes important .a crucial advantage of the moment equations is that they can be easily obtained using a diagrammatic approach .a detailed presentation of the diagrammatic method will be given in barzel & biham ( 2007 ) .in summary , we have introduced a method , based on moment equations , for the simulation of chemical networks taking place on dust - grain surfaces in interstellar clouds .the method provides highly efficient simulations of complex reaction networks under the extreme conditions of low gas density and sub - micron grain sizes , in which the reaction rates are dominated by fluctuations and stochastic simulations are required .the number of equations is reduced to one equation for each reactive specie and one equation for each reaction , which is the lowest possible number for such networks .this method enables us to efficiently simulate networks of any required complexity without compromising the accuracy .it thus becomes possible to incorporate the complete network of surface reactions into gas - grain models of interstellar chemistry . to fully utilize the potential of this method ,further laboratory experiments are needed , that will provide the activation energy barriers for diffusion , desorption and reaction processes not only for hydrogen but for all the molecules involved in these networks . ) and o ( ) atoms on a grain vs. grain diameter , ( and the number of adsorption sites , ) , obtained from the moment equations for the network shown above , where nodes represent reactive species , edges represent reactions and the products are indicated near the edges ; ( b ) the production rates of h ( squares ) , o ( triangles ) and h ( circles ) on a grain vs. grain diameter , obtained from the moment equations .the results are in excellent agreement with the master equation ( solid line ) . in the limit of large grainsthey also coincide with the rate equations ( dashed line ) .[ fig:1 ] , width=480 ] ) , o ( ) and co ( ) for the methanol network shown above , vs. grain diameter , ( and the number of adsorption sites , ) , obtained from the moment equations .( b ) the production rates of several final products of the methanol network per grain vs. grain diameter . 
obtained from the moment equations . the results are in excellent agreement with the master equation ( solid line ) . in the limit of large grains they also coincide with the rate equations ( dashed line ) . [ fig:2 ] | networks of reactions on dust grain surfaces play a crucial role in the chemistry of interstellar clouds , leading to the formation of molecular hydrogen in diffuse clouds as well as various organic molecules in dense molecular clouds . due to the sub - micron size of the grains and the low flux , the population of reactive species per grain may be very small and strongly fluctuating . under these conditions rate equations fail and the simulation of surface - reaction networks requires stochastic methods such as the master equation . however , the master equation becomes infeasible for complex networks because the number of equations proliferates exponentially . here we introduce a method based on moment equations for the simulation of reaction networks on small grains . the number of equations is reduced to just one equation per reactive species and one equation per reaction . nevertheless , the method provides accurate results , which are in excellent agreement with the master equation . the method is demonstrated for the methanol network which has been recently shown to be of crucial importance .
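as a sanity check of the closure rules ( a ) - ( c ) above, the truncated master equation for the simple h - o - oh network can be solved directly and its first and second moments compared with the moment equations. the python sketch below builds the generator of the truncated master equation ( adsorption, desorption, and the four diffusion-mediated reactions ) and solves for the steady state. the rate values at the bottom are placeholders chosen only to put the system in the small-population regime; the paper's actual fluxes, temperatures and activation energies are not reproduced here.

```python
import itertools
import numpy as np

def steady_state_moments(F, W, A, cut=(6, 6, 6)):
    # species order: 0 = H, 1 = O, 2 = OH ; per-grain rates in s^-1
    dims = tuple(c + 1 for c in cut)
    states = list(itertools.product(*[range(d) for d in dims]))
    idx = {s: k for k, s in enumerate(states)}
    M = np.zeros((len(states), len(states)))

    def add(src, dst, rate):            # probability flow src -> dst with the given rate
        if rate <= 0 or dst not in idx:
            return                      # transitions leaving the truncated box are suppressed
        M[idx[dst], idx[src]] += rate
        M[idx[src], idx[src]] -= rate

    for (n1, n2, n3) in states:
        s = (n1, n2, n3)
        add(s, (n1 + 1, n2, n3), F[0]); add(s, (n1, n2 + 1, n3), F[1]); add(s, (n1, n2, n3 + 1), F[2])
        add(s, (n1 - 1, n2, n3), W[0]*n1); add(s, (n1, n2 - 1, n3), W[1]*n2); add(s, (n1, n2, n3 - 1), W[2]*n3)
        add(s, (n1 - 2, n2, n3), A[0]*n1*(n1 - 1))               # H + H  -> H2
        add(s, (n1, n2 - 2, n3), A[1]*n2*(n2 - 1))               # O + O  -> O2
        add(s, (n1 - 1, n2 - 1, n3 + 1), (A[0] + A[1])*n1*n2)    # H + O  -> OH
        add(s, (n1 - 1, n2, n3 - 1), (A[0] + A[2])*n1*n3)        # H + OH -> H2O

    M[0, :] = 1.0                        # replace one redundant balance equation by normalisation
    rhs = np.zeros(len(states)); rhs[0] = 1.0
    p = np.linalg.solve(M, rhs)
    n = np.array(states, dtype=float)
    mean_pop = p @ n                                             # <N_H>, <N_O>, <N_OH>
    rates = {"H2": A[0]*(p @ (n[:, 0]*(n[:, 0] - 1))),
             "O2": A[1]*(p @ (n[:, 1]*(n[:, 1] - 1))),
             "OH": (A[0] + A[1])*(p @ (n[:, 0]*n[:, 1])),
             "H2O": (A[0] + A[2])*(p @ (n[:, 0]*n[:, 2]))}
    return mean_pop, rates

print(steady_state_moments(F=[1e-8, 5e-9, 0.0], W=[1e-6, 1e-8, 1e-9], A=[2e-3, 1e-5, 1e-5]))
```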
the space of all hermitian matrices with probability density function is known as the _ gaussian unitary ensemble _ ( or gue) . here and henceforth the variance is chosen to ensure that asymptotically for , the limiting mean density of the gue eigenvalues is given by the wigner semicircle law supported in the interval ] on the real line as .our main result is the following [ pred ] consider the random variable }\bigg\{2\log|p_{n}(x)|-2\mathbb{e}(\log|p_{n}(x)|)\bigg\ } \label{maxrv}\ ] ] then in the limit we have where is a continuous random variable characterized by the two - sided laplace transform of its probability density : where and stand for the euler gamma - function and the barnes digamma - function , correspondingly .the normalization can be evaluated explicitly as where is the glaisher - kinkelin constant .the product form of the laplace transform offers an interesting interpretation of the above results .noting that is the moment generating function of a standard gumbel random variable , we can write where is an independent random variable with two - sided laplace transform in the end of the paper we provide convincing numerical evidence that this laplace transform does indeed define a unique random variable .this immediately implies that the probability density of is the convolution of a gumbel random variable with .such a convolution structure is expected to appear universally when studying the extreme value statistics of logarithmically correlated gaussian fields , see the discussion around and after eq .( [ lcdef ] ) . in recent years, much interest has accumulated regarding the statistical behaviour of characteristic polynomials of various random matrices as a function of the spectral variable . to a large extentthis interest was stimulated by the established paradigm that many statistical properties of the riemann zeta function along the critical line , that is , can be understood by comparison with analogous properties of the characteristic polynomials of random matrices .for invariant ensembles of self - adjoint matrices with real eigenvalues , statistical characteristics of depend very essentially on the choice of scale spanned by the real variable . from that endit is conventional to say that spans the _ local _ ( or _ microscopic _ ) scale if one considers intervals containing in the limit typically only a finite number of eigenvalues ( the corresponding scale for gue in ( [ guedensity ] ) is of the order of ) . at such scales , standard objects of interestare correlation functions containing products and ratios of characteristic polynomials , which show determinantal / pfaffian structures for hermitian / real symmetric matrices and tend to universal limits at the local scale .similar structures arise for properly defined characteristic polynomials of circular ensembles ( like cue , coe , and cse) of unitary random matrices uniformly distributed with respect to the haar measure on ( and other classical groups ) , whose properties on the local scale are indistinguishable from their hermitian counterparts . 
next , when spans an interval containing in the limit typically of order of eigenvalues one speaks of the _ global _ ( or _ macroscopic _ ) scale behaviour .at such a scale properties of display both universal and non - universal features , the latter depending on the ensemble chosen .the study of characteristic polynomials at such a scale was initiated in where it was shown that the function , with belonging to the cue , converges ( in an appropriate sense ) to a random gaussian fourier series of the form where the coefficients are independent standard complex gaussian random variables , i.e. , and .the covariance structure associated with such a process is given by as long as .such a ( generalized ) random function is a representative of random processes known in the literature under the name of _1/f noises _ , see for background discussion and further references .recently the study of the global scale behaviour was extended to the gue polynomial in by using earlier insights from and . that work revealed again a structure analogous to that of ( [ 1/f ] ) , though different in detail .namely , it was shown that the natural limit of is given by the random chebyshev - fourier series with being chebyshev polynomials and real being independent standard gaussians .a quick computation shows that the covariance structure associated with the generalized process is given by an integral operator with kernel as long as .such a limiting process is an example of an aperiodic -noise .finally , one can consider an intermediate , or _spectral scales , with intervals typically containing in the limit the number of eigenvalues growing with , but representing still a vanishingly small fraction of the total number of all eigenvalues .the properties of the characteristic polynomials at such scales were again addressed in where it was shown that for the gue , that object gives rise to a particular ( singular ) instance of the so - called fractional brownian motion ( fbm ) with the hurst index , again characterized by correlations logarithmic in the spectral parameter .the discussion above serves , in particular , the purpose of pointing to an intimate connection between gaussian random processes with logarithmic correlations and the modulus of characteristic polynomials at global and mesoscopic scales .the relation is important as logarithmically correlated gaussian ( lcg ) random processes and fields attract growing attention in mathematical physics and probability and play an important role in problems of quantum gravity , turbulence , and financial mathematics , see e.g. . in particular , the periodic noise ( [ 1/f ] ) emerged in constructions of conformally invariant planar random curves . among other things ,the statistics of the global maximum of lcg fields attracted considerable attention , see and references therein . 
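the logarithmic covariance structure behind the 1/f expansions discussed above is easy to check numerically. the sketch below samples a truncated version of the periodic gaussian fourier series and compares its empirical covariance with the expected -(1/2) log | 2 sin ( ( theta - theta' ) / 2 ) | behaviour. the normalisation e|c_k|^2 = 1 of the complex gaussian coefficients is an assumption made here for concreteness ( the cited works may use a different constant ), and the truncation level is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_periodic_1f(n_theta=512, kmax=1000, n_samples=2000):
    # X(theta) = sum_{k>=1} Re( c_k e^{i k theta} ) / sqrt(k),  c_k standard complex gaussian
    theta = np.linspace(0.0, 2*np.pi, n_theta, endpoint=False)
    k = np.arange(1, kmax + 1)
    basis = np.exp(1j*np.outer(k, theta))/np.sqrt(k)[:, None]
    c = (rng.standard_normal((n_samples, kmax)) + 1j*rng.standard_normal((n_samples, kmax)))/np.sqrt(2)
    return theta, np.real(c @ basis)

theta, X = sample_periodic_1f()
emp = (X*X[:, [0]]).mean(axis=0)                       # empirical Cov( X(0), X(theta) )
pred = -0.5*np.log(np.abs(2*np.sin(theta/2)))          # logarithmic covariance (theta != 0)
for j in (8, 64, 256):
    print(f"theta = {theta[j]:.3f}   empirical {emp[j]:+.3f}   predicted {pred[j]:+.3f}")
```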
particularly relevant in the present contextare the results of ding , roy and zeitouni on the maxima of regularized lattice versions of lcg fields which we discuss informally below .let be the box of side length with the left bottom corner located at the origin .a suitably normalized version of the logarithmically correlated gaussian field is a collection of gaussian variables with variance and covariance structure where and both and are continuous bounded functions far enough from the boundary of .now set and .the limiting law of is then expected , after an appropriate shift and rescaling , to be given by the _gumbel distribution with random shift _ : where the distribution of the random shift variable depends on details of the behaviour of covariance ( [ lcdef ] ) for and , see the detailed discussion in .the random variable is related to the so called _ derivative martingale _ associated with the lcg fields whose distribution is however not known .recently it has been shown that the recentering term in also holds for a randomized model of the riemann zeta function , proved by revealing a special branching structure within the associated logarithmic correlations .we see that our conjecture [ pred ] for the maximum of characteristic polynomial of large gue matrices fully agrees with the predicted structure of the maximum of lcg in dimension .note that the expression ( [ lcmax ] ) implies that the double - sided laplace transform of the density for the ( shifted ) maximum is related to the density of the random variable as which is in turn equivalent to the gumbel convolution in eq.([uprime ] ) .in fact our formula ( [ uprimelaplace ] ) provides the explicit form of the distribution for the derivative martingale of our model , thus going considerably beyond the considerations of . from a quite different perspective , processes similar to ( [ 1/f ] ) and ( [ 1/fch ] ) appeared in the context of statistical mechanics of disordered systems when studying extreme values of random multifractal landscapes supporting spinglass - like thermodynamics .the latter link is especially important in the context of the present paper .the idea that it is beneficial to look at as a disordered landscape consisting of many peaks and dips , and to think of an associated statistical mechanics problem was put forward in .it allowed to get quite non - trivial analytical insights into statistics of the maximal value of the cue polynomial sampled over the full circle ] with hugely varying peaks heights .this produces considerable clusterings of ` near - maxima ' which may confuse any naive attempt to find the true maximum value .secondly , the slow changing nature of the correction terms in conjecture [ pred ] , of order and ) respectively , require one to go to somewhat large matrices to resolve reasonable asymptotic behaviour .the problem is further compounded by the numerical instability of calculating determinants of such matrices .our solution to these problems heavily relies on a sparse realization of gue matrices originally due to trotter ( see also dumitriu and edelman ) .he discovered that the eigenvalues of gue matrices have the same joint probability density as those of the following real symmetric tri - diagonal matrix : where is a normal random variable with mean and variance .the sub - diagonal is composed of random variables having the same density as where is a -square random variable with degrees of freedom . 
to compute the maximum value of , we begin by exploiting the known asymptotic behaviour so that further progress is now possible thanks to the fact that determinants of tri - diagonal matrices satisfy a linear recurrence relation .furthermore , by an appropriate rescaling , the recursion computes determinants of all leading principal minors simultaneously , thus computing _ for all _ in linear time . now to find the maximum , we define a mesh with and evaluate at each of the points in . at those points where is maximal the matlab function ` fminbnd ' is invoked to converge onto the global maximum .figure [ fig : n3000gueres ] illustrates the complexity of the problem .our algorithm is sufficiently precise to distinguish the true maximum ( located at in red ) from other possible candidates , _e.g. _ as well as the thousands of other local maxima .d. carpentier and p. le doussal .glass transition of a particle in a random potential , front selection in nonlinear renormalization group , and entropic phenomena in liouville and sinh - gordon models . , 026110 , 33pp ( 2001 ) y. v. fyodorov , p. le doussal , and a. rosso .tatistical mechanics of logarithmic rem : duality , freezing and extreme value statistics of 1/f noises generated by gaussian free fields .10 , p10005 , 32 pp ( 2009 ) | motivated by recently discovered relations between logarithmically correlated gaussian processes and characteristic polynomials of large random matrices from the gaussian unitary ensemble ( gue ) , we consider the problem of characterising the distribution of the global maximum of as and . we arrive at an explicit expression for the asymptotic probability density of the ( appropriately shifted ) maximum by combining the rigorous fisher - hartwig asymptotics due to krasovsky with the heuristic _ freezing transition _ scenario for logarithmically correlated processes . although the general idea behind the method is the same as for the earlier considered case of the circular unitary ensemble , the present gue case poses new challenges . in particular we show how the conjectured _ self - duality _ in the freezing scenario plays the crucial role in our selection of the form of the maximum distribution . finally , we demonstrate a good agreement of the found probability density with the results of direct numerical simulations of the maxima of . |
in ( see appendix for an authors version of this article ) , we proposed a maximum likelihood approach for blindly separating a linear - quadratic mixture defined by ( eq .( 2 ) in ) : where and are two independent sources .the log - likelihood for samples of the mixed signals and reads ( eq . ( 12 ) in ) : +e_t[\log{f_{s_2}(s_2(t ) ) } ] -e_t[\log{|j(s_1(t),s_2(t))| } ] \label{finalcost_e.eq}\ ] ] where ] , _ i.e. _ , vanishes . defining the score functions of the two sources as ( eq .( 13 ) in ) we can write ( eq . ( 14 ) in ) -e_t[\psi_2(s_2)\frac{\partial s_2}{\partial { \bf w}}]-e_t[\frac{1}{j}\frac{\partial j}{\partial { \bf w } } ] \label{gradient_e.eq}\ ] ] rewriting ( [ mixture_model_e.eq ] ) in the vector form and considering as the independent variable and as the dependent variable , we can write , using implicit differentiation ( eq . (15 ) in ) which yields ( eq . ( 16 ) in ) note that is the jacobian matrix of the mixing model . considering ( [ mixture_model_e.eq ] ) , we can write ( appendix in ) + and , which implies , from ( [ dsdw_e.eq ] ) and yields ( eq . ( 19 ) in ) [] } \label{djdw_scte_e.eq}\ ] ] we now compute the gradient ( [ djdw_e.eq ] ) entirely . considering ( [ jacobian_e.eq ] ) , we can write \label{djdws_e.eq}\ ] ] using ( [ djdw_e.eq ] ) , ( [ djdw_scte_e.eq ] ) , ( [ djdws_e.eq ] ) and ( [ dsdw_new_e.eq ] ) we finally obtain the following equation which must replace the equation ( 20 ) in \nonumber \\\label{djdw_new_e.eq}\end{aligned}\ ] ] inserting ( [ ds1s2dw_e.eq ] ) and ( [ djdw_new_e.eq ] ) in ( [ gradient_e.eq ] ) , we obtain the following expression for the gradient which must replace equation ( 17 ) in ] .\3 ) for some values of the sources and for the other values . in this case, each structure leads to the non - permuted sources ( [ pair1.eq ] ) for some values of the observations and to the permuted sources ( [ pair2.eq ] ) for the other values .an example is shown in fig .[ case3.fig ] ( with the same coefficients as in the second case , but for ) .the permutation effect is clearly visible in the figure .one may also remark that the straight line in the source plane is mapped to a conic section in the observation plane ( shown by asterisks ) .thus , it is clear that the direct structures may be used for separating the sources if the jacobian of the mixing model is always negative or always positive , _i.e. 
_ for all the source values .otherwise , although the sources are separated _ sample by sample _, each retrieved signal contains samples of the two sources .this problem arises because the mixing model ( [ mixture_model.eq ] ) is not bijective .this theoretically insoluble problem should not discourage us .in fact , our final objective is to extend the idea developed in the current study to more general polynomial models which will be used to approximate the nonlinear mixtures encountered in the real world .if these real - world nonlinear models are bijective , we can logically suppose that the coefficients of their polynomial approximations take values which make them bijective on the variation domains of the sources .thus , in the following , we suppose that the sources and the mixture coefficients have numerical values ensuring that the jacobian of the mixing model has a constant sign .the natural idea to separate the sources is to form a direct separating structure using any of the equations in ( [ inverse.eq ] ) , and to identify the parameters , , and by optimizing an independence measuring criterion .although this approach may be used for our special mixing model ( [ mixture_model.eq ] ) , as soon as a more complicated polynomial model is considered , the solutions can no longer be determined so that the generalization of the method to arbitrary polynomial models seems impossible . to avoid this limitation , we propose a recurrent structure shown in fig .[ model.fig ] .note that , for , this structure is reduced to the basic hrault - jutten network .it may be checked easily that , for fixed observations defined by ( [ mixture_model.eq ] ) , and corresponds to a steady state for the structure in figure [ model.fig ] .the use of this recurrent structure is more promising because it can be easily generalized to arbitrary polynomial models .however , the main problem with this structure is its stability .in fact , even if the mixing model coefficients are exactly known , the computation of the structure outputs requires the realization of the following recurrent iterative model where a loop on is performed for each couple of observations until convergence is achieved .it can be shown that this model is locally stable at the separating point , if and only if the absolute values of the two eigenvalues of the jacobian matrix of ( [ recurrent.eq ] ) are smaller than one . in the following ,we suppose that this condition is satisfied .let be the joint pdf of the sources , and assume that the mixing model is bijective so that the jacobian of the mixing model has a constant sign on the variation domain of the sources .the joint pdf of the observations can be written as taking the logarithm of ( [ pdf1.eq ] ) , and considering the independence of the sources , we can write : given n samples of the mixtures and , we want to find the maximum likelihood estimator for the mixture parameters ] as \ ] ] using ( [ logf1.eq ] ) : +e_t[\log{f_{s_2}(s_2(t ) ) } ] -e_t[\log{|j(s_1(t),s_2(t))| } ] \label{finalcost.eq}\ ] ] maximizing this cost function requires that its gradient with respect to the parameter vector , _ i.e. _ , vanishes . 
defining the score functions of the two sources as and considering that , we can write -e_t[\psi_2(s_2)\frac{\partial s_2}{\partial { \bf w}}]-e_t[\frac{1}{j}\frac{\partial j}{\partial { \bf w } } ] \label{gradient.eq}\ ] ] rewriting ( [ mixture_model.eq ] ) in the vector form and considering as the independent variable and as the dependent variable ,we can write , using implicit differentiation which yields note that is the jacobian matrix of the mixing model . using ( [ gradient.eq ] ) and ( [ dsdw.eq ] ) , the gradient of the cost function with respect to the parameter vector is equal to ( see the appendix for the computation details ) } \nonumber \\ \frac{\partial s_2}{\partial { \bf w}}=\frac{1}{j}\mbox{\huge}(l_2+q_2s_2)s_2 \;,\ ; ( 1-q_1s_2)s_1 \ ; , ( l_2+q_2s_2)s_1s_2 \;,\ ; ( 1-q_1s_2)s_1s_2 \mbox{\huge } \label{ds1s2dw.eq}\end{aligned}\ ] ] considering ( [ jacobian.eq ] ) $ } \label{djdw.eq}\end{aligned}\ ] ] ( [ dldw.eq ] ) follows directly from ( [ gradient.eq ] ) , ( [ ds1s2dw.eq ] ) and ( [ djdw.eq ] ) . | an error occurred in the computation of a gradient in . the equations ( 20 ) in appendix and ( 17 ) in the text were not correct . the current paper presents the correct version of these equations . |
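to see the recurrent separating structure of the original paper in action ( here with the mixing coefficients assumed known, i.e. only the inversion step, not the maximum-likelihood estimation of the parameter vector ), the sketch below iterates the fixed-point recursion and checks the local stability condition through the spectral radius of its jacobian. the sign convention of the linear-quadratic mixture is an assumption, since eq. ( 2 ) of the original paper is not reproduced above; coefficients and source distributions are illustrative.

```python
import numpy as np

def mix(s1, s2, l1, l2, q1, q2):
    # assumed LQ mixing convention (the original paper's sign convention may differ)
    return s1 + l1*s2 + q1*s1*s2, s2 + l2*s1 + q2*s1*s2

def separate(x1, x2, l1, l2, q1, q2, n_iter=200):
    # recurrent separating structure: simultaneous fixed-point iteration on the outputs
    y1, y2 = np.array(x1, float), np.array(x2, float)
    for _ in range(n_iter):
        y1_new = x1 - l1*y2 - q1*y1*y2
        y2_new = x2 - l2*y1 - q2*y1*y2
        y1, y2 = y1_new, y2_new
    return y1, y2

def locally_stable(s1, s2, l1, l2, q1, q2):
    # local stability at the separating point: spectral radius of the iteration jacobian < 1
    J = np.array([[-q1*s2, -(l1 + q1*s1)],
                  [-(l2 + q2*s2), -q2*s1]])
    return np.max(np.abs(np.linalg.eigvals(J))) < 1

rng = np.random.default_rng(0)
s1, s2 = rng.uniform(-0.5, 0.5, 1000), rng.uniform(-0.5, 0.5, 1000)
l1, l2, q1, q2 = 0.3, 0.4, 0.2, 0.1
x1, x2 = mix(s1, s2, l1, l2, q1, q2)
y1, y2 = separate(x1, x2, l1, l2, q1, q2)
print(np.max(np.abs(y1 - s1)), np.max(np.abs(y2 - s2)))
print(all(locally_stable(a, b, l1, l2, q1, q2) for a, b in zip(s1, s2)))
```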
the main goal of these sessions will be to understand the dynamics of a system exhibiting a continuum of periodic orbits when we add a small periodic forcing .the most paradigmatic example is probably the perturbed pendulum ; however , such systems massively appear in real applications , specially in celestial mechanics .this has given rise to classical problems exhibiting extremely rich dynamics , such as the restricted three body problem .in these sessions we will see some theoretical results , but we will mainly visualize them through analytical and numerical computations for a particular example , the forced pendulum : where is a small parameter , and a periodic forcing : .we will mainly see that , when the face portrait looks like figure [ fig : pendulum ] ( right ) and that , when looks like figure [ fig : pendulum ] ( left ) . we will learn theoretical and numerical techniques to compute the surviving resonant periodic orbits ( big holes in figure [ fig : pendulum ] left ) . +although everything we will see will be generic for any , we will fix this forcing from now on say to ant becomes .+ it will be useful to write the system as a first order one by increasing the dimension .let , then we can write it as will start with some exercises in order to understand the dynamics of the unperturbed pendulum ( ) .consider the unperturbed equations of the pendulum by setting in eq .: 1 .obtain the equilibrium points and state their type .2 . obtain a hamiltonian for system . 1 .the system possesses an elliptic equilibrium point at the origin and two saddle points at .the function is the hamiltonian of system up to a constant value .write a small program in matlab to plot the phase portrait of system .see . in this sessionwe will focus on what happens with the periodic orbits inside the homoclinic loop when we add a small periodic forcing to system ( say ) . to this end, it will be crucial to compute the periods of the periodic orbits for .let s us now see a general theoretical approach to obtain a formula .assume we have hamiltonian system of the form where is a potential energy .assume that we have a periodic orbit at level of energy given by .then , we can compute its period , , as follows : using that the system is hamiltonian and , hence , we get isolating from , but now we have two problems .first is that solving such an integral explicitly can be a nightmare , if possible at all .second , doing integrals numerically is a difficult task , it s slow and imprecise . fortunately , there is an alternative : compute a poincar map using a transversal section to the periodic orbit and capture the return time .write a small program in matlab to compute the periods of the periodic orbits using the poincar map from the section to itself . in general, for autonomous systems , in order to study the existence of periodic orbits one typically considers the poincar map : where is a co - dimension one section . 
in our case, we could take the vertical axis .then , fixed points of the poincar map , points such that , become initial conditions for periodic orbits for the flow , whose period is the flying time .periodic points of the poincar map , , also give rise to periodic orbits for the flow , which cross times the poincar section and their period becomes the addition of all flying times between consecutive impacts with the poincar section before is reached again .+ however , if the system is not autonomous ( but periodic ) , then one needs to take into account the initial time and consider poincar sections of the form , with .then , a sufficient condition for the existence of a periodic orbits becomes .that is , after the point is reached again after crossings and the total spent time is a multiple of .then , the flow possesses an -periodic orbit crossing the section times .note that , if total time spent to reach again is not a multiple of , then nothing can be said about the existence of a periodic orbit .one of inconveniences of using poincar maps with non - autonomous systems is the need of computing the flying time .although this can be done numerically , it requires extra computations than simply numerically integrating a flow , as one needs to compute the crossing with the section .alternatively , one can use the stroboscopic map , which consists of integrating the system for a time : where then , if , then is the initial condition for a -periodic orbit : write a small program in matlab to compute the stroboscopic map . write also a script to iterate it several times ( say ) for different initial conditions at the axis . note the resonances( probably you will need to play with to observe things better ) .first use , and then increase it a little bit and see what happens .let us consider a planar field of the form where is a small parameter and is -periodic in : for simplicity , let us assume that , for , the unperturbed system is hamiltonian .that is , there exists a function such that moreover , let us assume that the unperturbed system satisfies the following : 1 .there exists a compact region completely covered by a continuum of periodic orbits .2 . let be the period of the periodic orbit located at the energy level : assume that .then we have the following result : [ theo : melnikov ] assume the above conditions are satisfied .assume that the unperturbed system has a periodic orbit , , of period where is the period of the periodic forcing .let be an initial condition for such a periodic orbit , and let be the flow at such initial condition .then let us define the so - called ( subharmonic ) melnikov function then , if there exists such that 1 . 2 . , then , for small enough , the perturbed system possesses a periodic orbit of period with initial condition at [ ex : analytical_melnikov ] compute the analytical expression of te melnikov function for system ( you do nt have to do the integral ! ) .write a little program in matlab to numerically compute the melnikov integral obtained in exercise [ ex : analytical_melnikov ] .we now want to numerically compute the initial conditions ( ) given by theorem [ theo : melnikov ] . 
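the stroboscopic - map exercise above can be prototyped in a few lines ; again the exercise asks for matlab , while the sketch below uses python / scipy . the concrete forcing eps * cos(t) ( so t = 2*pi ) is an assumption , since the exact forcing fixed earlier is not reproduced here ; any periodic forcing can be substituted in `forced_pendulum` .

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

eps, omega = 0.05, 1.0           # assumed forcing amplitude and frequency
T = 2 * np.pi / omega            # period of the forcing

def forced_pendulum(t, z):
    x, y = z
    return [y, -np.sin(x) + eps * np.cos(omega * t)]

def stroboscopic(z0, t0=0.0):
    """one iterate of the stroboscopic (time-T) map starting at time t0."""
    sol = solve_ivp(forced_pendulum, (t0, t0 + T), z0, rtol=1e-9, atol=1e-11)
    return sol.y[:, -1]

# iterate the map for a fan of initial conditions on the vertical axis
n_iter = 200
for y0 in np.linspace(0.1, 1.9, 20):
    z = np.array([0.0, y0])
    orbit = np.empty((n_iter, 2))
    for k in range(n_iter):
        z = stroboscopic(z)
        orbit[k] = z
    plt.plot(orbit[:, 0], orbit[:, 1], ".", ms=1)

plt.xlabel("x"); plt.ylabel("y")
plt.title("stroboscopic map of the forced pendulum (eps = %.2f)" % eps)
plt.show()
```

iterating for many initial conditions reveals the resonance islands ; the zeros of the melnikov function computed in the previous exercise then provide first guesses for the newton method described next .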
herewe describe a newton method to do that in a general setting : -dimensional not necessary hamiltonian systems .assume we want to compute a periodic orbit of an -dimensional non - autonomous periodic system of the form where is a -periodic field in : , for any .+ due to the periodicity , instead of using poincar maps in the state space it will be much more convenient to use the so - called stroboscopic map , which is indeed a poincar map but using a section in time .this map is given by flowing system for a time with initial condition : provided that system is -periodic in , becomes a map from the time section to itself : where imagine that we want to compute the initial condition , for a periodic orbit of system at the section .recalling its periodicity , the period of such a periodic orbit must be a multiple of , say .in other words , we must look for periodic points of the map , that is , points such that one of the most extended methods for numerically computing such points is the newton method , assuming that one has some idea about where such a points lies , as we need a first approximation for the newton method to converge where we want .let us assume that we are looking for a -periodic orbit ; that is , and we want to find a fixed point of the map .then , we want to solve the equation the newton method consists of considering the linear approximation of the function around some point ( which is a first approximation of the solution we are looking for ) and solve the linear system instead .this provides a new point , which , hopefully , is a more accurate solution than the initial one .+ the linear approximation of equation around becomes where in section [ sec : variational ] we will see how to compute the differential .+ if we solve equation for we get this leads to an iterative process which converges quadratically to provided that is a good enough approximation .+ alternatively , some programing environments ( like matlab ) offer routines to solve linear equations which might be more efficient than computing the inverse . in this case , the linear system to be solved would be [ rem : df_invertibility ] note that the newton method requires the matrix to be invertible .this implies two things : * must be invertible at the starting point * must be invertible at the fixed point !this implies that the newton method will have troubles if the fixed point we are looking for is a center , as the eigenvalues of would have real part equal to one . in other words, the periodic orbit has to hyperbolic .similarly we can apply the newton method to solve the equation to get the same expression as in eq . .however , in this case the computation of becomes now a bit more tricky . using that , we apply the chain rule to get se we need to evaluate the differential at the points for .+ although there is nothing wrong with this approach from the theoretical point of view , in next section we will see a numerical method to compute which makes the computation of straightforward , with needing to multiply matrices ( see remark [ rem : varia_nperiodic ] below ) .now the question arises , how do we compute ?note that the flow is straightforward to differentiate with respect to , as one recovers the field , but we need to differentiate it with respect to the initial condition ! but we can do the following . 
applying the fundamental theorem of calculus and the definition of the flow , we can write if we now differentiate with respect to , we get where is the identity matrix and we write to emphasize that we differentiate with respect to . again , by applying the fundamental theorem of calculus backwards , we realize that equation is the solution of the differential equation at .equation is called the ( first ) variational equation . + some remarks : if is a field in , then this equation becomes and -dimensional differential equation .equation is evaluated along the flow , which is unknown .hence , this equation needs to be solved together with the equation , leading to a system of dimension with initial condition .[ rem : varia_nperiodic ] if we want to compute , we just need to integrate the variational equations from to !write a program in matlab in order to compute a fixed point of the stroboscopic map .note that the initial sead will be taken from a zero of the melnikov function , which may have several .you will need to tune to guarantee that this fixed point exists .compute the eigenvalues of at the fixed points to tell their type .as noted in remark [ rem : df_invertibility ] , the newton method to find fixed points ( or periodic orbits ) of the stroboscopic map will fail if that one is non - hyperbolic : that is , the associated eigenvalues of the stroboscopic map have real part equal to .alternatively , we can use a poincar map using a section in the state space .this method is more robust in that sense , but , as we will show below , the computation of the differential becomes slightly more tricky .let us consider the poincar map where {c } \varphi_1(t;x_0,t_0)\\\varphi_2(t;x_0,t_0 ) \end{array } \right ) \label{eq : perturbed_flow_notation}\ ] ] is the solution of the perturbed system with initial condition at and is such that .it can be easily done using a newton method to solve equation .below we show how to compute the necessary derivatives for the newton method .otherwise , it can be computed using the `` events '' functionality of matlab . ] for the second time , as we consider a return map in the direction that the section is left .recall that system is non - autonomous .hence , the initial time matters and we have to carry it on . from now on, we will abuse notation in equation and omit writing the first coordinate , as it takes the . taking into account the periodicity of the non - autonomous system , an initial condition at for of a periodic orbit of the full system will need to satisfy {c } v_0\\t_0+mt \end{array } \right ) \label{eq : periodic_orbit_poinc - map}\ ] ] for some .indeed , the integers and are the same as in previous sections , made explicit through the congruency equation .here one clearly see that the role of is to count the `` loops '' that a periodic orbit of period makes around the origin .applying the newton method to the equation {c } v_0\\t_0+mt \end{array } \right)=\left ( \begin{array}[]{c } 0\\0 \end{array } \right ) , \label{eq : newton_equation_poinc - map}\ ] ] and arguing as in section [ sec : hyperbolic_po ] we get the iterative process where now {cc } 1&0\\0&1 \end{array } \right).\ ] ] we now wonder , how do we compute ? 
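before answering that question for the poincar map , it is worth putting the stroboscopic - map machinery of the previous section into code . the sketch below ( python instead of the matlab asked for in the exercise ) integrates the flow together with the first variational equation , so that one call returns both the image of the stroboscopic map and its differential , and then runs the newton iteration described above . the forcing eps * cos(t) and the seed are illustrative placeholders : in practice the seed comes from a zero of the melnikov function and , as noted in remark [ rem : df_invertibility ] , convergence requires the target orbit to be hyperbolic .

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, omega = 0.05, 1.0            # assumed forcing, as before
T = 2 * np.pi / omega

def field(t, z):
    x, y = z
    return np.array([y, -np.sin(x) + eps * np.cos(omega * t)])

def field_and_variational(t, w):
    """flow + first variational equation packed as a 6-vector:
    w = (x, y, M11, M21, M12, M22) with M' = Df(x(t)) M, M(0) = I."""
    x = w[0]
    M = w[2:].reshape(2, 2, order="F")
    Df = np.array([[0.0, 1.0],
                   [-np.cos(x), 0.0]])
    dM = Df @ M
    return np.concatenate((field(t, w[:2]), dM.reshape(4, order="F")))

def strobo_with_jacobian(z0, m=1):
    """m-th iterate of the stroboscopic map and its differential,
    obtained by integrating the variational equations from 0 to m*T."""
    w0 = np.concatenate((z0, np.eye(2).reshape(4, order="F")))
    sol = solve_ivp(field_and_variational, (0.0, m * T), w0,
                    rtol=1e-10, atol=1e-12)
    wT = sol.y[:, -1]
    return wT[:2], wT[2:].reshape(2, 2, order="F")

def newton_periodic_orbit(z0, m=1, tol=1e-10, maxit=20):
    """solve Phi^m(z) - z = 0 by Newton's method (hyperbolic orbits only)."""
    z = np.array(z0, dtype=float)
    for _ in range(maxit):
        Pz, DP = strobo_with_jacobian(z, m)
        F = Pz - z
        if np.linalg.norm(F) < tol:
            break
        z = z + np.linalg.solve(DP - np.eye(2), -F)
    return z, DP

# the seed would normally come from a zero of the melnikov function;
# here it is just an illustrative guess
z_fix, DP = newton_periodic_orbit([0.0, 1.0], m=1)
print("fixed point:", z_fix, "  eigenvalues of DPhi:", np.linalg.eig(DP)[0])
```

the eigenvalues of the differential at the converged point tell the type of the orbit , as required in the exercise ; the poincar - map version only changes how the differential is assembled , following the formulas derived next .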
let us see it for first . + recalling that , we get \left ( \begin{array}[]{cc} d_{v_0}\varphi_2(t^*;0,v_0,t_0 ) & d_{t_0}\varphi_2(t^*;0,v_0,t_0)\\ \frac{\partial}{\partial v_0}t^* & \frac{\partial}{\partial t_0}t^* \end{array } \right ) . note that , in the first row , we have written the total derivatives and instead of partial ones , and , because actually depends on and through equation . let us compute such total derivatives . + for the first one we get . let us now see how to compute all the terms appearing in this equation . + on the one hand , , where is the second coordinate of the field evaluated at the image of the poincar map . + on the other hand , is given by integrating the variational equations from to , as we are differentiating with respect to an initial condition . + what about ? recall that , for given and , ( which we somehow know how to compute ) solves equation . therefore , assuming that , we can use the implicit function theorem to get . note that condition is satisfied , as the flow is transversal to the section . if it were tangent , then we would have a problem , of course ! + the denominator of the last equation is just the first coordinate of the perturbed field evaluated at the image of the poincar map . + regarding , note that it is a derivative with respect to the initial time , . adding time as a variable , this can be computed by integrating the variational equations of the system , where now plays the role of time and becomes . + proceeding similarly , the elements of the second row of become , and is already given in equation . + the differential can be computed proceeding similarly as in section [ sec : differential_po ] , multiplying evaluated at the iterates or integrating the previous variational equations until the -th impact with the section occurs . write a program in matlab to compute initial conditions for periodic orbits by performing a newton method on equation . for that you will need to slightly modify the programs you wrote in the previous exercises . | these notes were written during the 9th and 10th sessions of the subject dynamical systems ii , taught at dtu ( denmark ) during the winter semester 2015 - 2016 . they aim to provide students with a theoretical and numerical background for the computation of periodic orbits using newton's method . we focus on periodically perturbed quasi - integrable systems ( using the forced pendulum as an example ) and hence we take advantage of the melnikov method to get first guesses . however , these well known techniques are general enough to be applied in other types of systems . periodic orbits are computed by solving a fixed - point equation for the stroboscopic map , which is very fast and precise for hyperbolic periodic orbits . however , for non - hyperbolic ones the method fails and we use the poincar map instead . in both cases we show how to compute the jacobian of the maps , which is necessary for the newton method , by means of variational equations and the implicit function theorem . + some exercises are proposed along the notes , whose solutions can be found in github.com/a-granados . + the notes themselves do not contain any references , although everything described here is well known in the dynamical systems community . a typical reference for the melnikov method for subharmonic orbits is the book . more about the variational equations and their numerical applications can be found in the notes .
a gp is a probabilistic map parametrised by a covariance and a mean .we use and automatic relevance determination ( ard ) in the following . here , and denote the signal and noise variance , respectively and the diagonal matrix contains the squared length scales .since a gp is a distribution over functions , the output is random even though the input is deterministic . in gp regression , a gp prioris combined with training data into a gp posterior conditioned on the training data with mean and covariance where ^{\top} ] , {ij=1 .. n} ] .deterministic inputs lead to gaussian outputs and gaussian inputs lead to non - gaussian outputs whose first two moments can be computed analytically for ard covariance .multivariate deterministic inputs lead to spherical gaussian outputs and gaussian inputs lead to non - gaussian outputs whose moments are given by : here , ^{\top}=[\mathbf{z}^{1}, .. ,\mathbf{z}^{n}]\mathbf{k}^{-1} ] and ] w.r.t .the gaussian input distribution that can readily be evaluated in closed form as detailed in the appendix . in the limit of recover the deterministic case as , and .non - zero input variance results in full non - spherical output covariance , even for independent gps because all the gps are driven by the same ( uncertain ) input .a gplvm is a successful and popular non - parametric bayesian tool for high dimensional nonlinear data modeling taking into account the data s manifold structure based on a low - dimensional representation .high dimensional data points , ] from a low - dimensional latent space mapped into by independent gps one for each component of the data .all the gps are conditioned on and share the same covariance and mean functions .the model is trained by maximising the sum of the log marginal likelihoods over the independent regression problems with respect to the latent points .the high dimensional ( ) density of the data points in panel ( c ) is modelled by a mixture of gaussians shown in panel ( b , d ) where the means and variances are given by the predictive means and covariances of a set of independent gaussian processes conditioned on low - dimensional ( ) latent locations . a latent dirac mixture ( a )yields a spherical gaussian mixture with varying widths ( b ) and a latent gaussian mixture ( e ) results in a fully coupled mixture model ( d ) smoothly sharing covariances across mixture components.,scaledwidth=100.0% ] while most often applied to nonlinear dimensionality reduction , the gplvm can also be used as a tractable and flexible density model in high dimensional spaces as illustrated in figure [ fig : dgplvm ] .the basic idea is to interpret the latent points as centres of a mixture of either dirac ( figure [ fig : dgplvm]a ) or gaussian ( figure [ fig : dgplvm]e ) distributions in the latent space that are _ projected forward _ by the gp to produce a high dimensional gaussian mixture in the observed space .depending on the kind of latent mixture , the density model will either be a mixture of spherical ( figure [ fig : dgplvm]b ) or full - covariance gaussians ( figure [ fig : dgplvm]d ) . 
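as a concrete illustration of the simpler of the two variants ( latent dirac mixture , giving spherical components in the observed space ) , the following numpy sketch builds the mixture from a set of latent centres and evaluates its log density at the observed points . the ard squared - exponential kernel and the standard gp predictive equations are assumed ; whether the noise variance is added to the component widths is a modelling choice , flagged in the code . all function names are ours , not the paper's .

```python
import numpy as np

def ard_kernel(A, B, sf2, ell):
    """squared-exponential ard covariance k(a,b) = sf2 * exp(-0.5 * sum((a-b)^2 / ell^2))."""
    D2 = np.sum(((A[:, None, :] - B[None, :, :]) / ell) ** 2, axis=2)
    return sf2 * np.exp(-0.5 * D2)

def gplvm_dirac_log_density(Y, X, sf2=1.0, sn2=0.01, ell=None):
    """log density of the rows of Y under the mixture obtained by pushing a
    dirac mixture at the latent centres X through D independent GPs that
    share one kernel: every component is a spherical gaussian."""
    n, D = Y.shape
    ell = np.ones(X.shape[1]) if ell is None else ell
    K = ard_kernel(X, X, sf2, ell) + sn2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))      # K^{-1} Y, shape n x D

    Ks = ard_kernel(X, X, sf2, ell)                           # latent centres as test inputs
    mu = Ks @ alpha                                           # n x D component means
    v = np.linalg.solve(L, Ks.T)
    var = sf2 + sn2 - np.sum(v ** 2, axis=0)                  # component variances
    # (drop sn2 above for the noise-free predictive variance)

    d2 = np.sum((Y[:, None, :] - mu[None, :, :]) ** 2, axis=2)        # n x n
    log_comp = -0.5 * d2 / var[None, :] - 0.5 * D * np.log(2 * np.pi * var[None, :])
    return np.logaddexp.reduce(log_comp - np.log(n), axis=1)  # equal mixture weights 1/n

# tiny smoke test: a noisy 1-d manifold embedded in 5 dimensions
rng = np.random.default_rng(0)
t = rng.uniform(-2, 2, size=(40, 1))
Y = np.hstack([np.sin(t), np.cos(t), t, t ** 2, 0 * t]) + 0.05 * rng.standard_normal((40, 5))
print(gplvm_dirac_log_density(Y, t, ell=np.array([0.7]))[:5])
```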
by that mechanism, we get a tractable high dimensional density model : a set of low - dimensional coordinates in conjunction with a probabilistic map yield a mixture of high dimensional gaussians whose covariance matrices are smoothly shared between components .as shown in figure [ fig : dgplvm](d ) , the model is able to capture high dimensional covariance structure along the data manifold by relatively few parameters ( compared to ) , namely the latent coordinates and the hyperparameters \in\mathbb{r}_{+}^{2d+2}$ ] of the gp .the role of the latent coordinates is twofold : they both _ define the gp _ , mapping the latent points into the observed space , and they _ serve as centres _ of the mixture density in the latent space .if the latent density is a mixture of gaussians , the centres of these gaussians are used to define the gp map , but the full gaussians ( with covariance ) are projected forward by the gp map .learning or model fitting is done by minimising a loss function w.r.t .the latent coordinates and the hyperparameters . in the following, we will discuss the usual gplvm objective function , make clear that it is not suited for density estimation and use leave - out estimation to avoid overfitting .a gplvm is trained by setting the latent coordinates and the hyperparameters to maximise the probability of the data + that is the product of the marginal likelihoods of independent regression problems . using , conjugate gradients optimisation at a cost of per step is straightforward but suffers from local optima .however , optimisation of does not encourage the gplvm to be a good density model . only indirectly, we expect the predictive variance to be small ( implying high density ) in regions supported by many data points .the main focus of is on faithfully predicting from ( as implemented by the fidelity trace term ) while using a relatively smooth function ( as favoured by the log determinant term ) .therefore , we propose a different cost function . density estimation constructs parametrised estimators from iid data .we use the kullback - leibler divergence to the underlying density and its empirical estimate as quality measure where emphasises that the full dataset has been used for training .this estimator , is prone to overfitting if used to adjust the parameters via .therefore , estimators based on subsets of the data are used .two well known instances are -fold cross - validation ( cv ) and leave - one - out ( loo ) estimation .the subsets for cv are and for loo .both of them can be used to optimise .there are two reasons why training a gplvm with the log likelihood of the data ( eq . 
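for reference , the objective in eq . [ eq : gplvm - lik ] is nothing but the sum of d standard gp log marginal likelihoods sharing one kernel matrix built on the latent coordinates . a compact sketch , using the same ard kernel conventions as above and negated so that it can be handed to a minimiser over the latent coordinates and the hyperparameters :

```python
import numpy as np

def gplvm_neg_log_marginal_likelihood(Y, X, sf2, sn2, ell):
    """- sum_d log p(y_d | X, theta): D independent GP regressions that share
    one ard kernel on the latent coordinates X (the usual gplvm objective)."""
    n, D = Y.shape
    diff = (X[:, None, :] - X[None, :, :]) / ell
    K = sf2 * np.exp(-0.5 * np.sum(diff ** 2, axis=2)) + sn2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))   # K^{-1} Y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    # data-fit (fidelity trace) term + complexity (log-determinant) term
    return 0.5 * np.sum(Y * alpha) + 0.5 * D * logdet + 0.5 * n * D * np.log(2 * np.pi)
```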
[ eq : gplvm - lik ] ) is not optimal in the setting of density estimation : firstly , it treats the task as regression , and does nt explicitly worry about how the density is spread in the observation space .secondly , our empirical results ( see section [ sec : experiments ] ) indicate , that the test set performance is simply not good .therefore , we propose to train the model using the leave - out density + this objective is very different from the gplvm criterion as it measures how well a data point is explained under the mixture models resulting from projecting each of the latent mixture components forward ; the leave - out aspect enforces that the point gets assigned a high density even though the mixture component has been removed from the mixture .the leave - one - out idea is trivial to apply in a mixture setting by just removing the contribution in the sum over components , and is motivated by the desire to avoid overfitting .evaluation of requires assuming .however , removing the mixture component is not enough since the latent point is still present in the gp . using rankone updates to compute inverses and determinants of covariance matrices with row and column removed , it is possible to evaluate eq .[ eq : dgplvm - lpo ] for mixture components with latent point removed from the mean prediction which is what we do in the experiments .unfortunately , going further by removing also from the covariance increases the computational burden to because we need to compute rank one corrections to all matrices .since is only slightly smaller than , we refrain from computing it in the experiments . in the original gplvm, there is a clear one - to - one relationship between latent points and data points they are inextricably tied together .however , the leave - one - out ( loo ) density does not impose any constraint of that sort . the number of mixture components does not need to be , in fact we can choose any number we like .only the data visible to the gp is tied together .the actual latent mixture centres are not necessarily in correspondence with any actual data point .however , we can choose to be a subset of .this is reasonable because any mixture centre ( corresponding to the latent centre ) lies in the span of , hence should approximately span . in our experiments , we enforce .overfitting in density estimation means that very high densities are assigned to training points , whereas very low densities remain for the test points . despite its success in parametric models , the leave - one - out idea alone , is not sufficient to prevent overfitting in our model . when optimising w.r.t . using conjugate gradients , we observe the following behaviour : the model circumvents the loo objective by arranging the latent centres in pairs that take care of each other . more generally , the model partitions the data into groups of points lying in a subspace of dimension and adjusts such that it produces a gaussian with very small variance in the orthogonal complement of that subspace .by scaling to tiny values , can be made almost arbitrarily large .it is understood that the hyperparameters of the underlying gp take very extreme values : the noise variance and some length scales become tiny . in , this is penalised by the term , but is happy with very improbable gps . in our initial experiments , we observed this `` cheating behaviour '' on several of datasets .we conclude that even though the loo objective ( eq . 
[ eq : dgplvm - lpo ] ) is the standard tool to set kde kernel widths , it breaks down for too complex models .we counterbalance this behaviour by leaving out not only one point but rather points at a time .this renders cheating tremendously difficult . in our experiments we use the leave--out ( lpo ) objective + ideally , one would sum over all subsets of size .however , the number of terms soon becomes huge : for .therefore , we use an approximation where we set and contains the indices that currently have the smallest value . all gradients and can be computed in when using .however , the expressions take several pages .we use a conjugate gradient optimiser to find the best parameters and .in the experimental section , we show that the gplvm trained with ( eq . [ eq : gplvm - lik ] ) does not lead to a good density model in general . using our training procedure ( section [ sub : overfit ] ,[ eq : dgplvm - lpo ] ) , we can turn it into a competitive density model .we demonstrate that a latent variance improves the results even further in some cases and that on some datasets , our density model training procedure performs better than all the baselines .we consider data sets , frequently used in machine learning .the data sets differ in their domain of application , their dimension , their number of instances and come from regression and classification . in our experiments, we do not use the labels .we do not only want to demonstrate that our training procedure yields better test densities for the gplvm .we are rather interested in a fair assessment of how competitive the gplvm is in density estimation compared to other techniques . as baseline methods, we concentrate on three standard algorithms : penalised fitting of a mixture of full gaussians ( ` gm ` ) , kernel density estimation ( ` kde ` ) and manifold parzen windows ` ( mp ) ` .we run these algorithms for three different type of preprocessing : raw data ( r ) , data scaled to unit variance ( ` s ` ) and whitened data ( ` w ` ) . we explored a large number of parameter settings and report the best results in table [ tab : baselines ] . in order to speed up em computations , we partition the dataset into disjoint subsets using the -means algorithm `` .we fitted a penalised gaussian to each subset and combined them using the relative cluster size as weight .every single gaussian has the form where and equal the sample mean and covariance of the particular cluster , respectively .the global ridge parameter prevents singular covariances and is chosen to maximise the loo log density .we use simple gradient descent to find the best parameter .the kernel density estimation procedure fits a mixture model by centring one mixture component at each data point .we use independent multi - variate gaussians : , where the diagonal widths are chosen to maximise the loo density .we employ a newton - scheme to find the best parameters .the manifold parzen window estimator tries to capture locality by means of a kernel .it is a mixture of full gaussians where the covariance of each mixture component is only computed based on neighbouring data points .as proposed by the authors , we use the -nearest neighbour kernel and do not store full covariance matrices but a low rank approximation with . 
as in the other baselines ,the ridge parameter is set to maximise the loo density .the results of the baseline density estimators can be found in table [ tab : baselines ] .they clearly show three things : ( i ) more data yields better performance , ( ii ) penalised mixture of gaussians is clearly and consistently the best method and ( iii ) manifold parzen windows offer only little benefit .the absolute values can only be compared within datasets since linearly transforming the data by results in a constant offset in the log test probabilities .we keep the experimental schedule and setting of the previous section in terms of the datasets , the fold averaging procedure and the maximal training set size .we use the gplvm log likelihood of the data , the lpo log density with deterministic latent centres ( ) and the lpo log density using a gaussian latent centres to optimise the latent centres and the hyperparameters .our numerical results include different latent dimensions , preprocessing procedures and different numbers of leave - out points . optimisation is done using conjugate gradient steps alternating between and . in order to compress the big amount of numbers ,we report the method with highest test density as shown in figure [ fig : test_density ] , only .each panel displays the log test density averaged over random splits for three different gplvm training procedures and the _ best _ out of 41 baselines ( penalised mixture , diag.+isotropic kde , manifold parzen windows with 36 different parameter settings ) as well as various mixture of factor analysers ( mfa ) settings as a function of the number of training data points . we report the maximum value across latent dimension , three preprocessing methods ( raw , scaled to unit variance , whitened ) and leave - out points . the gplvm training procedures are the following : -rd : stochastic leave--out density ( eq . [ eq : dgplvm - lpo ] with latent gaussians , ) , -det : deterministic leave--out density ( eq . [ eq : dgplvm - lpo ] with latent diracs , ) and : marginal likelihood ( eq . [ eq : gplvm - lik]).,scaledwidth=100.0% ] the most obvious conclusion , we can draw from the numerical experiments , is the bad performance of as a training procedure for gplvm in the context of density modeling .this finding is consistent over all datasets and numbers of training points .we get another conclusive result in terms of how the latent variance influences the final test densities could be fixed to because its scale can be modelled by . ] . only in the ` bodyfat` data set it is not beneficial to allow for latent variance .it is clear that this is an intrinsic property of the dataset itself , whether it prefers to be modelled by a spherical gaussian mixture or by a full gaussian mixture .an important issue , namely how well a fancy density model performs compared to very simple models , has in the literature either been ignored or only done in a very limited way . experimentally, we can conclude that on some datasets e.g. 
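for concreteness , the loo and ( approximate ) lpo objectives used to train the density model can be evaluated as follows , given the component means and variances produced by the latent - dirac construction sketched earlier . this sketch only removes mixture components ; as discussed in section [ sec : learning ] , the latent point is kept in the gp , and the held - out subset is the one containing the points with the currently smallest density . function names and the exact weighting are ours and are only one plausible reading of the objective described above .

```python
import numpy as np

def log_components(Y, mu, var):
    """log N(y_i | mu_j, var_j I) for every data point i and mixture component j."""
    D = Y.shape[1]
    d2 = np.sum((Y[:, None, :] - mu[None, :, :]) ** 2, axis=2)
    return -0.5 * d2 / var[None, :] - 0.5 * D * np.log(2 * np.pi * var[None, :])

def loo_log_density(Y, mu, var):
    """leave-one-out objective: each point is scored under the mixture with its
    own component removed (equal weights over the remaining n-1 components)."""
    n = Y.shape[0]
    lc = log_components(Y, mu, var)
    np.fill_diagonal(lc, -np.inf)                 # drop the i-th component for point i
    return np.sum(np.logaddexp.reduce(lc - np.log(n - 1), axis=1))

def lpo_log_density(Y, mu, var, k):
    """approximate leave-k-out objective: instead of summing over all subsets of
    size k, use the single subset holding the k points of smallest density."""
    n = Y.shape[0]
    lc = log_components(Y, mu, var)
    full = np.logaddexp.reduce(lc - np.log(n), axis=1)        # full-mixture log density
    held_out = np.argsort(full)[:k]
    keep = np.setdiff1d(np.arange(n), held_out)
    kept = lc[np.ix_(held_out, keep)] - np.log(n - k)
    return np.sum(np.logaddexp.reduce(kept, axis=1))
```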
` diabetes , sonar , abalone ` our procedure can not compete with a plain ` gm ` model .however note , that the baseline numbers were obtained as the maximum over a wide ( 41 in total ) range of parameters and methods .for example , in the ` usps ` case , our elaborate density estimation procedure outperforms a single penalised gaussian only for training set sizes .however , the margin in terms of density is quite big : on prewhitened data points with deterministic latents yields at , whereas full reaches at which is significantly above as obtained by the ` gm ` method since we work on a logarithmic scale , this corresponds to factor of in terms of density . while the baseline methods such as ` gm , ` ` kde ` and ` mp ` run in a couple of minutes for the ` usps ` dataset , training a gplvm with either or takes considerably longer since a lot of cubic covariance matrix operations need to be computed during the joint optimisation of .the gplvm computations scale cubically in the number of data points used by the gp forward map and quadratically in the dimension of the observed space .the major computational gap is the transition from to because in the latter case , covariance matrices of size have to be evaluated which cause the optimisation to last in the order of a couple of hours . to provide concrete timing results , we picked , , averaged over the datasets and show times relative to .note that the methods are run in a conservative fail - proof black box mode with gradient steps .we observe good densities after considerably less gradient steps already .another straightforward speedup can be obtained by carefully pruning the number of inputs to the models .we have discussed how the basic gplvm is not in itself a good density model , and results on several datasets have shown , that it does not generalise well .we have discussed two alternatives based on explicitly projecting forward a mixture model from the latent space .experiments show that such density models are generally superior to the simple gplvm . among the two alternative ways of defining the latent densities ,the simplest is a mixture of delta functions , which due to the stochasticity of the gp map results in a smooth predictive distribution .however , the resulting mixture of gaussians , has only axis aligned components .if instead the latent distribution is a mixture of gaussians , the dimensions of the observations become correlated .this allows the learnt densities to faithfully follow the underlying manifold .although the presented model has attractive properties , some problems remain : the learning algorithm needs a good initialisation and the computational demand of the method is considerable .however , we have pointed out that in contrast to the gplvm , the number of latent points need not match the number of observations allowing for alternative sparse methods . | density modeling is notoriously difficult for high dimensional data . one approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data . recently , the gaussian process latent variable model ( gplvm ) has successfully been used to find low dimensional manifolds in a variety of complex data . the gplvm consists of a set of points in a low dimensional latent space , and a stochastic map to the observed space . we show how it can be interpreted as a density model in the observed space . however , the gplvm is not trained as a density model and therefore yields bad density estimates . 
we propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets . modeling of densities , aka unsupervised learning , is one of the central problems in machine learning . despite its long history , density modeling remains a challenging task especially in high dimensional spaces . for example , the generative approach to classification requires density models for each class , and training such models well is generally considered more difficult than the alternative discriminative approach . classical approaches to density modeling include both parametric and non parametric methods . in general , simple parametric approaches have limited utility , as the assumptions might be too restrictive . mixture models , typically trained using the em algorithm , are more flexible , but e.g. gaussian mixture models are hard to fit in high dimensions , as each component is either diagonal or has in the order of parameters , although the mixtures of factor analyzers algorithm may be able to strike a good balance . methods based on kernel density estimation are another approach , where bandwidths may be set using cross validation . the methods mentioned so far have two main shortcomings : 1 ) they typically do not perform well in high dimensions , and 2 ) they do not provide an intuitive or generative understanding of the data . generally , we can only succeed if the data has some regular structure , the model can discover and exploit . one attempt to do this is to assume that the data points in the high dimensional space lie on or close to some smooth underlying lower dimensional manifold . models based on this idea can be divided into models based on _ implicit _ or _ explicit _ representations of the manifold . an implicit representation is used by in a non - parametric gaussian mixture with adaptive covariance to every data point . explicit representations are used in the generative topographic map and by . within the explicit camp , models contain two separate parts , a lower dimensional latent space equipped with a density , and a function which maps points from the low dimensional latent space to the high dimensional space where the observations lie . advantages of this type of model include the ability to understand the structure of the data in a more intuitive way using the latent representation , as well as the technical advantage that the density in the observed space is automatically properly normalised by construction . the gaussian process latent variable model ( gplvm ) uses a gaussian process ( gp ) to define a ( stochastic ) map between a lower dimensional latent space and the observation space . however , the gplvm does not include a density in the latent space . in this paper , we explore extensions to the gplvm based on densities in the latent space . one might assume that this can trivially be done , by thinking of the latent points learnt by the gplvm as representing a mixture of delta functions in the latent space . since the gp based map is stochastic , it induces a proper mixture in the observed space . however , this formulation is unsatisfactory , because the resulting model is not trained as a density model . consequently , our experiments show poor density estimation performance . mixtures of gaussians form the basis of the vast majority of density estimation algorithms . 
whereas kernel smoothing techniques can be seen as introducing a mixture component for each data point , infinite mixture models explore the limit as the number of components increases and mixtures of factor analysers impose constraints on the covariance of individual components . the algorithm presented in this paper can be understood as a method for stitching together gaussian mixture components in a way reminiscent of using the gplvm map from the lower dimensional manifold to induce factor analysis like constraints in the observation space . in a nutshell , we propose a density model in high dimensions by transforming a set of low - dimensional gaussians with a gp . we begin by a short introduction to the gplvm and show how it can be used to define density models . in section [ sec : learning ] , we introduce a principled learning algorithm , and experimentally evaluate our approach in section [ sec : experiments ] . |
the study of localized surface plasmon ( lsp ) resonances over the last couple of decades has led to great advances in several areas of science and technology .probably , its most significant application is its use in nanosensors. most of the strategies used to turn a plasmonic device into a sensor are based on the shift of the resonant position of the lsp or in exploiting hot spots to increase spectroscopic signals. in this respect , there is a growing interest in using interacting plasmonic systems to measure distances at the nanoscale .these devices are known as nanorulers and they are finding interesting applications in several fields such as biology. most of the proposed plasmonic rulers work by the same principle , the shift of a resonance induced by the variation of a distance . as noticed originally by jain et .al., the frequency shift of the lsp resonance follows an almost universal exponential or quasi - exponential law with respect to the separation between nps .this behavior is known as the universal plasmon ruler equation ( upre ) and was observed in simulations and experiments using nps of different size and shape, core - shell nps, and even two dimensional arrays of nps. there are some deviations of the upre due to retardation effects or the effects of higher order multipoles , but they can usually be corrected by empirical fittings. only when nps are almost touching each other , there are some deviations , nonfitable to exponential , of the upre. there is another kind of plasmon rulers that is now being actively investigated .they are based on a different principle , the fluorescence resonance energy transfer ( fret ) .its working principle is the resonant transference of excitation from a fluorescent molecule to a nearby np .the closer to fluorophore to the np , the greater the photoluminescence quenching .this effect has been successfully used as a nanorulers in several experimental examples. collective effects induced by lsp couplings in nps arrays can give rise to other phenomena and new potential applications. one example is the use of fano resonances to design plasmonic sensors. in this case , the shift of very sharp peaks or valleys in the extinction spectrum , consequence of collective effects of interacting nps , have been proposed to measure distances or the dielectric constant of the surrounding medium. more exotic applications of nps coupling are also possible and many of them are now being actively explored , such as clocking by metamaterials or plasmonics circuitry. although , linear arrays of metallic nps have been proposed as potential waveguides to transfer massive amounts of information at the nanoscale, nowadays it is clear that high damping factors would impose severe limits to that and most of the effort in this field is being put on overcoming this limitation . however , one important and very general feature of 1-d systems , which seems to have been overlooked in plasmonics , is the extreme sensitivity of their dynamical properties to slight changes of their parameters . in this respect ,we have previously studied the plasmonics energy transfer from a locally excited np ( le - np ) , the 0th np in the scheme shown in [ figure-1 ] , to the interior of a semi - infinite np chain. this system presents a form of dynamical phase transition ( dpt ) , a phenomenon that is currently attracting interest as source of novel effects on various fields. 
basically a dpt in this context means that by sightly moving a single parameter , such as the shape of the nps or the distance between them , the system undergoes an abrupt transition from transferring all the injected excitation to the interior of the np array ( a delocalized state ) , to keeping all the external excitation in the nps closer to the excitation point ( a localized state). the aim of this work is to exploit this basic property into a realistic setting , in order to design a new kind of excitation - transfer plasmonic nanosensor ( etps ) .the system studied , depicted in [ figure-1 ] , is modeled using the coupled dipole approximation . in this model , each -np is described by a dipole induced by the electric field produced by the other dipoles , , and the external source , .we assume a generic ellipsoidal shape for the metallic nps and describe its polarizabilities in a quasi - static approximation, resulting in : } \left ( e_{i}^{(\mathrm{ext})}+\sum\limits_{j\neq i}e_{i , j } \right ) \label{pvector},\]]where is the volume , is a geometrical factor that depends on the shape of the np and the direction of , is the free space permittivity , and is the dielectric constant of the host medium around the -th np . the dielectric constants of the nps are described by a drude - sommerfeld model : , where is the plasmon frequency , is the electronic damping factor , and is a material dependent constant that take into account the contribution of the bound electrons to the polarizability . the lsp oscillations over each np can only be transverse ( ) or longitudinal ( ) to the axis of the linear array and we assume , for simplicity , that the ellipsoids axes are aligned with respect to these and directions .if the excitation wavelength is large compared with the separation between nps , can be taken in the near field approximation , , where , and is the distance between nps .taking into account these considerations , and can be arranged as vectors and resulting in: is the dynamical matrix and is a diagonal matrix that rescales the external applied field according to local properties : } { \left[\epsilon _ { m , i}+l_{i}(\epsilon_{\infty}-\epsilon _ { m , i})\right]} ] ) .the form of eqs .[ matrixp]-[omegax ] not only extends our previous expressions to non - spherical nps but also includes explicitly the interdependencies of , , and with the parameters of the system , _i.e. _ size , shape , and material of nps , distances between nps , and dielectric constants .it should also be mentioned that although this model can be further improved , the different corrections will only add quantitative corrections as long as the essential physics of the problem remains , _i.e. _ a semi - infinite chain with nearest neighbors interactions. we will analyze this point in more detail in the last section . in the present work we will consider the case of excitation injection to a semi - infinte homogeneous linear array of nps , where solely the le - np can be different from the rest andthe only nonidentical separation is that between the le - np and the first np of the chain .therefore , there will be two lsp resonances , that of the le - np and that of the chain s nps .similarly , the couplings between the chain s nps will be all the same .the couplings between the le - np and the first np of the chain are and .they are not necessarily equal ( see eq .[ omegax ] ) , since they describe quite different situations , _i.e. 
_ how affects ( ) and how affects ( ) .the values of the resonance frequency and the couplings can be tuned in different ways .however , one must be careful because they are always interrelated .for example , choosing a different material for the nps will change both and and therefore and . changingthe shape of the nps will modify and consequently and , except for in which case it will only modify .the volume and distances between nps will change .if the material , size and shape of nps are fixed , the free parameters that can be used for sensing purposes are the dielectric constant of the medium that alter both and , and the distances between nps that change couplings . the values of and define the passband which is the interval of frequencies at which an excitation can propagate with only a relatively small decay , given by , through the waveguide formed by the linear array of nps .all excitations with frequencies outside the passband will decay along the chain exponentially and very fast . in the weak damping limit ( wdl ) , this passband is given by a simple expression , .this fact shows one of the roles of the chain : to determine which frequencies can be propagated .the other role is to perturb the local density of plasmonic states ( ldps ) of the le - np by pushing the resonance from its uncoupled value at to the nearest edge of the passband. this process occurs very abruptly when the resonance is about to cross the passband edge that is when the dynamics of the system changes completely. this phenomenon can be completely understood in terms of the divergences , or poles , of eq .[ matrixp ] in the wdl. basically there are three different dynamical regimes of interest for excitation transfer : 1 ) _ delocalized _ or _ resonant _ state regime , where the maximum of ldps of the le - np falls within the passband and most of the excitation can be transferred to the np array ; 2 ) _ localized _ state regime where there is a very sharp peak in the ldps outside the passband and most of the excitation remains close to the excitation point ; and 3 ) a transition regime called _ virtual _ state where the ldps presents a non - lorentzian asymmetric peak just at the passband edge and most of the excitation can be transferred to the waveguide .the transitions between the different dynamical regimes can be obtained analytically in the wdl and assuming frequency independent couplings , where: gives the _ resonant - virtual _ transition and : gives the _ virtual - localized _ transition , where , , and .the constant is the relative coupling between le - np and first np of the chain in units of the coupling in the chain , yields the difference in resonance frequency of the le - np with respect to that of a chain s np , and measures the strength of the coupling between chain s nps in units of their own resonance frequency .these formulas , although strictly valid only in the wdl , give an excellent estimation for finites . 
the excitation transferred from the le - np to the _m_-th np of the chain can be calculated exactly ( in the quasi - static limit and within the nearest - neighbors approximation ) by using the formula: -\alpha \pi ( \omega ) \right ) } e^{-m ( l \pm iq ) } , \label{pm}\ ] ] where is the electric field on the le - np and the renormalized eigen - frequency is given by .the decay length and the wavenumber depend on the self energy as : where is : - \label{pi } \\ &\mathrm{sgn}(\omega ^{2}-\omega _ { _ { \mathrm{sp}}}^{2})\tfrac{1}{2}\sqrt{\left [ \omega ^{2}-\widetilde{\omega } _ { _ { \mathrm{sp}}}^{2}\right ] ^{2}-4\omega _ { _ { \mathrm{x}}}^{4}}. \notag\end{aligned}\ ] ] in all the plots of the next section with use as an indicator of the excitation transferred to the waveguide , with , , and using .we have chosen , because around this value the behavior of vs becomes essentially a small decaying exponential for all within the passband or zero for excitation frequencies outside it .thus , choosing a bigger value of would only add a multiplicative factor to the results , _i.e. _ it would not change qualitatively the figures shown .in this work , our purpose is to show how to use the sudden change in the dynamical properties of systems tuned around their dpt to device etpss . for this purpose, we used the system depicted in [ figure-1 ] as an example .[ figure-2 ] shows the maximum excitation transferred to a np of the waveguide , enabled by a variation of the excitation frequency , at each system configuration .superposed are the critical values that separates dynamically distinct behaviors calculated in the wdl ( eqs .[ alfac1 ] and [ alfac2 ] ) .as it has been previously reported, excitation transfer is controlled by the dpt where _ virtual _ states are transformed into _localized _ states ( gray dashed lines ) .{figure-3a.eps } \\ \includegraphics[width=2.5in]{figure-3b.eps}\end{array} ] etps could also be used for sensing other properties , not only as plasmon nanorulers . asshown in [ figure-2 ] , the system is also very sensitive to the value of the square frequency offset ( sfo = ) and consequently to , , and/or .increasing the numerator of the sfo or decreasing its denominator , will both move the sfo away from the center of [ figure-2 ] as indicated by the horizontal green arrow .[ figure-3]-*b * ) shows the excitation transfer spectrum for three values of sfo along this green arrow .similarly to the nanoruler case , the spectrum experiences a significant change when moving the sfo along a dpt , from to . 
in order to show how to make profit of this feature we will consider two examples : one in which contraction or expansion of the whole system is measured , and one in which local dielectric constant around the le - np is measured .[ figure-4]-*a * shows , for a fixed excitation frequency , , the relative increment of the excitation transferred , , as a function of the relative expansion / contraction of the whole system , which turns our device into an optical strain monitor or even into a temperature sensor , depending on the expansion coefficient of the host material .note that a relative expansion / contraction of the whole system corresponds to a change in but not in , , nor in .thus it is equivalent to an horizontal displacement in [ figure-2 ] .[ figure-4]-*b * shows the variation of the relative increment of the excitation transferred as a function of the local dielectric constant around the le - np , a property potentially useful for molecular sensing purposes for example. in this case , a variation of changes not only but also and ( eqs . [ rii]-[omegax ] ) .therefore , it is equivalent to a diagonal displacement in [ figure-2 ] and a rescaling of the external applied field .the key ingredients from which the phenomenology described so far arise are : dominant nearest - neighbors interactions and the semi - infinite character of the system .if this conditions are fulfilled , the results presented above should be applicable to the system under consideration besides quantitative corrections such as shifts of the spectra or the exact position of dpts in the parameter s phase space .however , it is important to understand how our results are corrected by different factors . in this section, we will study the effect of interactions beyond nearest - neighbors and retardation effects . in this case , eq .[ pvector ] is still valid but all coupling between nps should be considered , _i.e. _ is no longer tridiagonal , and these couplings should be determined by the dipole - induced electric field beyond the quasi - static limit , _i.e. _ : \left ( 1- ikd \right ) \right\},\ ] ] where is the wavenumber in the dielectric , ( where is the speed of light in the medium ) , is the unit vector in the direction of ( where is the position of the observation point with respect to the position of the dipole ) . as the system consists of a linear array of nps where the spheroids axes are aligned with respect to the direction of the array , transversal ( ) and longitudinal ( ) excitations do not mix .thus , the coupling terms of eq .[ omegax ] acquire the form : where ^ { ikd_{i , j } } \notag \\ & \widetilde{\gamma } ^{_{t}}_{i , j}=[1-ikd_{i , j}-(kd_{i , j})^ { 2 } ] e^ { ikd_{i , j } } .\end{aligned}\ ] ] eq . [ matrixp ] has now to be solved numerically which can be done by using standard methods of linear algebra . the physical system considered in this section corresponds to a linear array of very flat oblate spheroidal nps of silver ( , and / s ) with the radio of the minor axis equals to nm and the radii of the major axis equal to nm ( shape factor , ) .the major to minor axis ratio is about 10 which is large enough to ensure a strong quadrupole quenching . separation between nps of the chain is nm , from center to center , and they are aligned with the two equal axes perpendicular to the array direction . the external electric field is applied only to the first np where the direction of is also perpendicular to the array , _i.e. 
_ .electronic damping is calculated using the matthiessen s rule where the fermi velocity is m/s , the bulk mean free path is , we use , and the effective mean free path is calculated using where is the volume and the surface of the spheroids. for simplicity we consider a dielectric medium with . taking into account this , and give 0.032 and 0.41 ( for ) respectively .we use here a slightly smaller coupling than that of the previous section for reasons that will become clear in the context of [ figure-5 ] . as before ,the observation point is fixed at the 8th np , and we use the value of as an indicator of the excitation transferred to the chain . due to the large interparticle coupling , finite size effects result so important , especially near the lowest band edge , that it is necessary to take very long size of chains .we use 200 nps , a value large enough to ensure negligible finite size effects . in arbitrary units , as function of for , and .different curves show numerical results within the near field approximation for : nearest - neighbor interactions , fifth - nearest - neighbor interactions , and a fully converged calculation .the system is at the dpt for the curve with nearest - neighbor interactions.,width=240 ] [ figure-5 ] shows the effect of including interaction beyond nearest - neighbors .the first effect of this is essentially a red - shift and a broadening of the passband .the second effect is that the dpt is also shifted from to as can be seen in [ figure-6]-a .however , it is still present and indeed , as the damping term is proportional to the frequency , the peak is increased .note that if were used , the peak would disappear from the spectrum as it would be located at , showing an important effect of higher - order contributions to interactions . in [ figure-6]-a and b retardation effects on the dpt are compared . here, retardation causes nothing else but an increase of the intensity of the peak , in the region of the spectrum where the dpt is observable .this is because , in this region of the spectrum , around , is much smaller than ( ) .of course at higher frequencies , for example at , this is not the case ( ) and the behavior of the system is completely beyond the near field approximation .{figure-6a.eps } \\ \includegraphics[width=2.5in]{figure-6b.eps}\end{array}$ ] in arbitrary units , as function of its position .retardation effects as well as all interactions between nps are considered . 
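the spectra in figures 5 - 7 come from solving eq . [ matrixp ] as a dense complex linear system . the sketch below reproduces that calculation in python for transverse polarisation with the external field applied only to the first np . the drude parameters , particle volume , depolarisation factor and spacing are representative placeholders ( the exact values used above are not reproduced here ) , and the dipole - field prefactors follow the standard textbook convention , which may differ from the paper's normalisation .

```python
import numpy as np
import matplotlib.pyplot as plt

# --- representative parameters (placeholders, not the exact values of the text) ---
eps0 = 8.854e-12                 # vacuum permittivity [F/m]
c0 = 2.998e8                     # speed of light in vacuum [m/s]
eps_inf, wp, gam = 4.0, 1.35e16, 1.0e14   # Drude parameters for a silver-like metal
eps_m = 1.0                      # host dielectric constant
N = 200                          # chain length (long enough to mimic a semi-infinite chain)
d = 75e-9                        # centre-to-centre spacing [m]
V = 4/3 * np.pi * (25e-9)**3     # particle volume [m^3]
Lgeo = 1/3                       # depolarisation factor (1/3 for a sphere; smaller for oblate spheroids)

def alpha(w):
    """quasi-static polarisability of an ellipsoid along one principal axis."""
    eps = eps_inf - wp**2 / (w**2 + 1j * gam * w)
    return eps0 * eps_m * V * (eps - eps_m) / (eps_m + Lgeo * (eps - eps_m))

def transverse_coupling(w, r, retarded=True):
    """field factor g such that E_i = g(|x_i - x_j|) p_j for transverse dipoles."""
    k = np.sqrt(eps_m) * w / c0
    if retarded:
        return np.exp(1j * k * r) * ((k * r)**2 + 1j * k * r - 1) / (4 * np.pi * eps0 * eps_m * r**3)
    return -1.0 / (4 * np.pi * eps0 * eps_m * r**3)          # near-field limit

def dipoles(w, retarded=True):
    """solve p_i = alpha(w) (E_ext_i + sum_j g_ij p_j), external field on particle 0 only."""
    x = d * np.arange(N)
    r = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(r, 1.0)                                  # dummy value, masked below
    G = transverse_coupling(w, r, retarded)
    np.fill_diagonal(G, 0.0)                                  # no self-interaction term
    M = np.eye(N, dtype=complex) - alpha(w) * G
    E_ext = np.zeros(N, dtype=complex); E_ext[0] = 1.0
    return np.linalg.solve(M, alpha(w) * E_ext)

# excitation transferred along the chain at one driving frequency
w_drive = 0.9 * wp / np.sqrt(eps_inf + 2.0)                   # near the single-particle resonance
p = dipoles(w_drive)
plt.semilogy(np.abs(p)**2 / np.abs(p[0])**2, ".-")
plt.xlabel("particle index m"); plt.ylabel(r"$|p_m|^2 / |p_0|^2$")
plt.show()
```

scanning `w_drive` over a grid and recording the value at a fixed particle reproduces the kind of transfer spectra discussed above ; the near-field , nearest-neighbour limit is recovered by zeroing the couplings beyond adjacent particles and setting `retarded=False` .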
here and the excitation frequency is , which correspond to the peak of [ figure-6]-b , width=240 ] finally , [ figure-7 ] shows the value of for different nps of the chain .note that even after including all contribution to the interaction among nps and retardation effects , still decays exponentially as predicted by the theory , although the decay rate is different mainly because the passband edge is considerably shifted .in this work we have studied the potentiality for sensing purposes of the excitation transferred from a le - np to the interior of a semi - infinite np chain , when the system is very close to its dpt condition .while most plasmonic sensors use the shift of the lsp resonance peaks or the photoluminescence quenching of hybrid plasmonic - fluorophores systems , here we introduce a new working principle .basically , the idea is to take advantage of the abrupt change in the plasmonic energy transferred when a control parameter is slightly changed around a dpt .we have shown that this kind of sensor has the unique characteristic of having an _ on - off _ switching property and a high sensitivity , which opens new possibilities to design plasmonic devices such as plasmonic circuits activated only under certain environmental conditions .we have also addressed the effects of different corrections to the approximations used and have shown that if the system s parameters are chosen properly , the dpt should be observable and therefore useful for the sensing purposes .99 stewart , m. e. ; anderton , c. r. ; thompson , l. b. ; maria , j. ; gray , s. k. ; rogers , j. a. ; nuzzo , r. g. _ chem .rev . _ * 2008 * , 108 , 494 - 521 .mayer , k. m. ; hafner , j. h. _ chem .rev . _ * 2011 * , 111 , 3828 - 57 .halas , n. j. ; lal , s. ; chang , w .- s . ; link , s. ; nordlander , p. _ chem .rev . _ * 2011 * , 111 , 3913 - 61 .perassi , e. m. ; canali , l. r. ; coronado , e. a. _ j. phys .c _ * 2009 * , 113 , 6315 - 6319 .encina , e. r. ; perassi , e. m. ; coronado , e. a. _ j. phys .* 2009 * , 113 , 4489 - 4497 .coronado , e. a. ; encina , e. r. ; stefani , f. d. _ nanoscale _ * 2011 * , 3 , 4042 - 59 .encina , e. r. ; coronado , e. a. _ j. phys .c _ * 2010 * , 114 , 3918 - 3923 .rong , g. ; wang , h. ; reinhard , b. m. _ nano letters _ * 2010 * , 10 , 230 - 8 .jain , p. k. ; huang , w. ; el - sayed , m. a. _ nano letters _ * 2007 * , 7 , 2080 - 2088 .jain , p. k. ; el - sayed , m. a. _ phys .c _ * 2008 * , 112 , 4954 - 4960 .tabor , c. ; murali , r. ; mahmoud , m. ; el - sayed , m. a , _ j. phys .* 2009 * , 113 , 1946 - 53 .chergui , m. ; melikyan , a. ; minassian , h. _ j. phys .* 2009 * , 113 , 6463 - 6471 .funston , a. m. ; novo , c. ; davis , t. j. ; mulvaney , p. _ nano letters _ 2009 , 9 , 1651 - 1658 .huang , c .- ping ; yin , x .- gang ; kong , l .- bao ; zhu , y .- yuan _ j. phys .c _ * 2010 * , 114 , 21123 - 21131 .liu , c. ; li , b. q. _ j. phys .* 2011 * , 115 , 5323 - 5333 .ben , x. ; park , h. s. _ j. phys .c _ * 2011 * , 115 , 15915 - 15926 .haldar , k. k. ; sen , t. ; patra , a. _ j. phys .* 2010 * , 114 , 4869 - 4874 .ray , p. c. ; fortner , a. ; darbha , g. k. _ j. phys .* 2006 * , 110 , 20745 - 8 .sen , t. ; haldar , k. k. ; patra , a. j. _ phys .c _ * 2008 * , 112 , 17945 - 17951 .griffin , j. ; ray , p. c. _ j. phys .b _ * 2008 * , 112 , 11198 - 201 .seelig , j. ; leslie , k. ; renn , a. ; kuhn , s.;jacobsen , v.;van de corput , m. ; wyman , c. ; sandoghdar , v. _ nano letters _ * 2007 * , 7 , 685 - 689 .singh , m. p. ; strouse , g. f. _ j. am .soc . 
_ * 2010 * , 132 , 9383 - 91 .anker , j. n. ; hall , w. p. ; lyandres , o. ; shah , n. c. ; zhao , j. ; duyne , r. p. van _ nature materials _ * 2008 * , 7 , 442 - 53 .hodges , m. d. ; kelly , j. g. ; bentley , a. j. ; fogarty , s. ; patel , i. i. ; martin , f. l. ; fullwood , n. j. _ acs nano _ * 2011 * , 5 , 9535 - 41 .ando , j. ; fujita , k. ; smith , n. i. ; kawata , s. _ nano letters _ * 2011 * , 11 , 5344 - 8 .baca , a. j. ; montgomery , j. m. ; cambrea , l. r. ; moran , m. ; johnson , l. ; yacoub , j. ; truong , t. t. _ j. phys .c _ * 2011 * , 115 , 7171 - 7178 .maier , s. a. _ plasmonics : fundamentals and applications _ ; springer press : new york , 2007 .novotny , l. ; hecht , b. _ principles of nano - optics _ ; cambridge press : cambridge , 2007 .burin , l. ; cao , h. ; schatz , g.c . ; ratner , m.a .b _ * 2004 * , 21 , 121 - 131 .guillon , m. _ opt .express _ * 2006 * , 14 , 3045 - 3055 .zou , s. ; schatz , g. c. _ nanotechnology _ * 2006 * , 17 2813 - 2820 .hernndez , j. v. ; noordam , l.d . ; bobicheaux , f. _ j. phys .b _ * 2005 * , 109 , 15808 - 15811 . backes , t. d. ; citrin , d. s. _ phys . rev .b _ * 2008 * , 78 , 153407 .gozman , m. ; polishchuk , i. ; burin , a. _ phys .* 2008 * , 372 , 5250 - 5253 .maier , s.;kik , p. ; atwater , h. ; meltzer , s. ; harel , e. ; koel , b. ; requicha , a. _ nature mat . _* 2003 * , 2 , 229 - 232 .citrin , d. s. _ nano letters _ * 2004 * , 4 , 1561 - 1565 .alu , a. ; engheta , n. _ phys .* 2006 * , 74 , 205436 .malyshev , a. v. ; malyshev , v. a. ; knoester , j. _ nano letters _ * 2008 * , 8 , 2369 - 2372 .zou , s. ; schatz , g. c. _ j. chem .phys . _ * 2004 * , 121 , 12606 - 12 .abajo , f. j. _ rev .phys . _ * 2007 * , 79 , 1267 - 1290 .markel , v. ; sarychev , _b _ * 2007 * , 75 , 085426. park , s. y. ; stroud , d. _ phys .b _ * 2004 * , 69 , 125418 .gharghi , m. ; gladden , c. ; zentgraf , t. ; liu , y. ; yin , x. ; valentine , j. ; zhang , x. _ nano letters _ * 2011 * , 11 , 2825 - 8 .guo , x. ; qiu , m. ; bao , j. ; wiley , b. j. ; yang , q. ; zhang , x. ; ma , y. ; yu , h. ; tong , l. _ nano letters _ * 2009 * , 9 , 4515 - 9 .brongersma , m. l. ; hartman , j. w. ; atwater , h. a. _ phys .b _ * 2000 * , 62 , r16356-r16359 .bustos - marn , r. a. ; coronado , e. a. ; pastawski , h. m. _ phys .b _ * 2010 * , 82 , 035434 .kottos , t. _ nature physics _ * 2010 * , 6 , 166 - 167 .bender , c. m. ; boettcher , s. _ phys .lett . _ * 1998 * , 80 , 5243 - 5246 .dente , a. d. ; bustos - marn , r. a. ; pastawski , h. m. _ phys .* 2008 * , 78 , 062116 .rotter , i. _ j. phys .a : mathematical and theoretical _ * 2009 * , 42 , 153001 ; garmon , s. ; ribeiro , p. ; and mosseri , r. _ phys .e _ * 2011 * , 83 , 23 .jones , r. _ phys . rev . _ * 1945 * , 68 , 93 - 96 .pastawski , h. m. ; medina , e. _ rev .. de fis . _* 2001 * , 47 s1 , 1 - 23 ; and .references therein .we used a complex implementation of lu decomposition method .see : press , w. h. ; teukolsky , s. a. ; vetterling , w. t. ; flannery , b. p. _ numerical recipes in fortran 77 : the art of scientific computing _ ; cambridge university press : cambridge , 1998 ; vol .kelly , k. l. ; coronado , e. a. ; zhao , l. l. ; schatz , g. c. _ the journal of physical chemistry b _ * 2003 * , 107 , 668 - 677 .coronado , e. a. ; schatz , g. c. _ j. chem .* 2003 * , 119 , 3926 - 3934 . | dynamical phase transitions ( dpts ) describe the abrupt change in the dynamical properties of open systems when a single control parameter is slightly modified . 
recently we found that this phenomenon is also present in a simple model of a linear array of metallic nanoparticles (nps), in the form of a localized-delocalized dpt. in this work we show how to take advantage of dpts in order to design a new kind of plasmonic sensor with some unique characteristics. for example, if it were used as a plasmon ruler it would not follow the so-called universal plasmon ruler equation [_nano letters_ *2007*, 7, 2080-2088], exhibiting instead an _on-off_ switching feature. this means that a signal should only be observed when the control/measured parameter, _i.e._ a distance in the case of plasmon rulers, has a very precise and pre-determined value. here we demonstrate the feasibility and unique characteristics of such sensors, showing that they combine high sensitivity with this _on-off_ switching feature in terms of different distances and local dielectric constants. this property has the potential to be used in the design of new plasmonic devices, such as plasmonic circuits activated only under certain environmental conditions. keywords: plasmonics, one dimensional arrays, open systems, localization, plasmon rulers. |
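as a numerical aside to the damping model used earlier in this section (matthiessen's rule with an effective mean free path taken as 4v/s for the spheroids), the following python sketch shows how such a size-corrected collision rate might be computed. all numerical values here (fermi velocity, bulk mean free path, the surface-scattering constant a, and the spheroid semi-axes) are illustrative assumptions and not the values used in the paper.

```python
import numpy as np

# illustrative, roughly silver-like parameters (assumed, not the paper's values)
v_fermi = 1.39e6                  # Fermi velocity [m/s]
mfp_bulk = 52e-9                  # bulk mean free path [m]
gamma_bulk = v_fermi / mfp_bulk   # bulk collision rate [1/s]
A = 1.0                           # surface-scattering constant (assumed)

def spheroid_volume_surface(a, c):
    """Volume and surface area of a prolate spheroid with semi-axes a = b < c."""
    V = 4.0 / 3.0 * np.pi * a * a * c
    e = np.sqrt(1.0 - (a / c) ** 2)                                # eccentricity
    S = 2.0 * np.pi * a * a * (1.0 + (c / (a * e)) * np.arcsin(e))
    return V, S

def size_corrected_gamma(a, c):
    """Matthiessen's rule: gamma = v_F / l_bulk + A * v_F / l_eff, with l_eff = 4V/S."""
    V, S = spheroid_volume_surface(a, c)
    l_eff = 4.0 * V / S
    return gamma_bulk + A * v_fermi / l_eff

# hypothetical 25 nm x 50 nm prolate spheroid (semi-axes 12.5 nm and 25 nm)
gamma = size_corrected_gamma(12.5e-9, 25e-9)
print(f"size-corrected damping rate: {gamma:.3e} 1/s")
```

a rate computed this way would then enter a size-corrected drude dielectric function for each np.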
the multiple descriptions ( md ) problem has been studied extensively since late 1970s , see and the references therein . in a md setup, the encoder sends packets ( descriptions ) which are sent to the receiver over different channels . in the most general setting, it is assumed that the decoder receives a subset of the descriptions without any error and the remaining are completely lost .the decoder reconstructs the source upto a given level of distortion when a subset of the descriptions are received .the goal of the md problem is to establish the complete rate - distortion region to trade - off the encoding rates to the achievable distortions .the general setup has remained challenging and unsolved due to the intricacies of the problem in maintaining the balance between the full reconstruction quality versus quality of individual descriptions . until recently , for general sources and distortion measures , the most recognized achievable rate - distortion region for the md setup was due to venkataramani , kramer and goyal ( vkg ) , whose encoding scheme builds on the prior work for the 2-channel case by el - gamal and cover ( ec ) and zhang and berger ( zb ) .the vkg scheme involves a combinatorial number of refinement codebooks along with a single shared codebook used to control the redundancy across the descriptions .we introduced a new encoding scheme in involving ` combinatorial message sharing ' ( cms ) which differs from the vkg scheme primarily in the number of shared codebooks .the cms scheme allows for every subset of the descriptions to share a different common codebook , thereby leading to a combinatorial number of shared messages . at the time of submission of , it was not known whether the cms scheme leads to a strictly improved rate - distortion region over the vkg scheme . in this paper ,our objective is to prove by example that the new region is indeed strictly better .specifically , we show that for a binary symmetric source under hamming distortion measure , the cms scheme achieves points outside the vkg region .in fact , more generally , our result holds for any source and distortion measures for which the zb scheme achives points outside the ec scheme for the corresponding 2-descriptions problem .we note in passing that , other encoding schemes have been proposed in the literature for certain special cases ( specific sources and distortion measures ) of the md setup , which achieve points outside .however , none of these schemes have been proven to subsume or outperform for general sources and distortion measures .the potential implications of our results on these coding schemes are beyond the scope of this paper . in the following section , we formally state the md setup and describe the prior results due to ec , zb , vkg and the cms scheme . in section [ sec :proof - of - strict ] , we prove the strict improvement of the achievable region .we follow the notation in .a source produces iid copies , denoted by , of a generic random variable taking values in a finite alphabet .we denote .there are encoding functions , , which map to the descriptions , where for some .the rate of description is defined as .each of the descriptions are sent over a separate channel and are either received at the decoder error free or are completely lost .there are decoding functions for each possible received combination of the descriptions , , where takes on values on a finite set , and denotes the null set . 
when a subset of the descriptions are received at the decoder , the distortion is measured as $ ] for some bounded distortion measures defined as .we say that a rate - distortion tuple is achievable if there exit encoding functions with rates and decoding functions yielding distortions .the closure of the set of all achievable rate - distortion tuples is defined as the ` _ _ -channel multiple descriptions rd region _ _ ' .note that , this region has dimensions . in what follows , denotes the set of all subsets ( power set ) of any set and denotes the set cardinality .note that . denotes the set complement . for two sets and , we denote the set difference by .we use the shorthand for and . is a set of variables , whereas is a single variable . ] . )indicates ` common random variable ' .red ( ) indicates ` base layer random variables ' and white ( ) indicates ` refinement random variables ' .the arrow indicates the order of codebook generation.[fig : vkg ] ] the achievable region of is denoted here and is described as follows .let be any set of random variables distributed jointly with .then , an rd tuple is said to be achievable if there exist functions such that:\label{eq : vkg}\end{aligned}\ ] ] .the closure of the achievable tuples over all such random variables gives .here , we only present an overview of the encoding scheme . the order of codebook generation of the auxiliary random variables is shown in figure [ fig : vkg ] .first , codewords of are generated using the marginal distribution of .conditioned on each codeword of , codewords of are generated according to their respective conditional densities .next , for each , a single codeword is generated for conditioned on .note that to generate the codebook for , we first need the codebooks for all and . on observing a typical sequence , the encoder tries to find a jointly typical codeword tuple one from each codebook .codeword index of ( at rate ) is sent in description .along with the ` _ _ private _ _ ' messages , each description also carries a ` _ _ shared message _ _ ' at rate , which is the codeword index of . hence the rate of each description is .vkg showed that , to ensure finding a set of jointly typical codewords with the observed sequence , the rates must satisfy ( [ eq : vkg_rate ] ) .it then follows from standard arguments ( see for example `` typical average lemma '' ) that , if the random variables also satisfy ( [ eq : vkg ] ) , then the distortion constraints are met .note that , is the _ only _ shared random variable . 
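before moving on to the shared-codebook structure, a toy numerical illustration may help fix how the per-subset distortions are measured. the sketch below splits a bernoulli(1/2) source into two "descriptions" (even- and odd-indexed bits) and computes the empirical hamming distortion for each received subset; this naive scheme is chosen only to illustrate the bookkeeping and is not the vkg or cms coding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 2, size=n)        # Bernoulli(1/2) source sequence

# toy descriptions: even-indexed bits in description 1, odd-indexed bits in description 2
desc1, desc2 = x[0::2], x[1::2]

def reconstruct(received):
    """Decoder g_K: use whatever descriptions arrived, guess 0 elsewhere."""
    xhat = np.zeros(n, dtype=int)
    if 1 in received:
        xhat[0::2] = desc1
    if 2 in received:
        xhat[1::2] = desc2
    return xhat

def hamming_distortion(x, xhat):
    return float(np.mean(x != xhat))

for K in ({1}, {2}, {1, 2}):
    print(f"received descriptions {sorted(K)}: "
          f"empirical distortion d_K = {hamming_distortion(x, reconstruct(K)):.3f}")
```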
form the base layer random variables and all form the refinement layers .observe that the codebook generation follows the order : shared layer base layer refinement layer .the vkg scheme for the 2-descriptions scenario involves 4 auxiliary random variables and .the vkg region was originally derived as an extension of the ec and zb coding schemes , which were designed for the 2-descriptions scenario .the first of the two regions was by el - gamal and cover and their rate region ( denoted here by ) is obtained by setting in , where is a constant .zhang and berger ( their region is denoted here by ) later showed that , including the shared random variable can give strict improvement over .their result , while perhaps counter - intuitive at first , clarifies the fact that , a shared message among the descriptions helps to better coordinate the messages , thereby providing a strictly improved rd region , even though it introduces redundancy .we will describe their result in detail in section [ sec : proof - of - strict ] , as our example builds upon theirs . however , it is known that is complete for some special cases of the setup ( see for example ) . ] in this section , we briefly describe our cms encoding scheme in .the vkg encoding scheme employs _one _ common codeword ( ) that is sent in all the descriptions .however , when dealing with descriptions , restricting to a single shared message could be suboptimal .the cms scheme therefore allows for ` combinatorial message sharing ' , i.e a common codeword is sent in each ( non - empty ) subset of the descriptions . before describing the codebook generation and stating the theorem , we define the following subsets of : let be any non - empty subset of with . we define the following subsets of and : we also define to mean is subsumed in and to mean strictly subsumed.]: the shared random variables are denoted by ` ' .the base and the refinement layer random variables are denoted by ` ' .the codebook generation is done in an order as shown in figure [ fig : l_channel_cms ] .first , the codebook for is generated .then , the codebooks for , are generated in the order . codewords of are independently generated conditioned on each codeword tuple of .this is followed by the generation of the base layer codebooks , i.e. , .conditioned on each codeword tuple of , codewords of are generated independently . then the codebooks for the refinement layers are formed by generating a single codeword for conditioned on every codeword tuple of .observe that the base and the refinement layers in the cms scheme are similar to that in the vkg scheme , except that they are now generated conditioned on a subset of the shared codewords .the encoder employs joint typicality encoding , i.e. 
, on observing a typical sequence , it tries to find a jointly typical codeword tuple , one from each codebook .as with the vkg scheme , the codeword index of ( at rate ) is sent in description .however , now the codeword index of ( at rate ) is sent in _ all _ the descriptions .therefore the rate of description is: we next state the main result in which describes a new region for the md setup achievable by the cms scheme .let be any set of random variables jointly distributed with .we define the quantities and as follows : we follow the convention .let and be any set of rate tuples satisfying: then , the rd region for the md problem contains the rates and distortions for which there exist functions , such that \label{eq : dist_condition_thm}\end{aligned}\ ] ] the closure of the achievable tuples over all such random variables is denoted by .observe that both the vkg and the cms schemes are same as the zb scheme for 2 descriptions scenario .note that , the total number of auxiliary random variables in the cms scheme is almost twice that in the vkg scheme ( which already is exponential in ) . at the time of submission of , it was yet unclear if this increase pays off with an improved achievable region . the following theorem , being the main contribution of this paper , establishes that there exists scenarios for which is strictly larger than .\(i ) the rate - distortion region achievable by the cms scheme is always at least as large as the region achievable by the vkg region , i.e.: ( ii ) there exists scenarios for which the cms scheme leads to a region strictly larger than that achievable by the vkg scheme , i.e.: specifically , for a binary symmetric source under hamming distortion measure , the cms scheme achieves a strictly larger rate - distortion region compared to the vkg scheme .part ( i ) of the theorem is rather simple to prove and is a straight forward corollary of the main theorem in .it follows directly by setting such that in .we then have . substituting in ( [ eq : rate_condition_thm ] ), we get which is same as ( [ eq : vkg ] ) .we prove ( ii ) by considering the binary symmetric source example for which the cms scheme achieves points which can not be achieved by the vkg scheme .note that , once we prove that the cms scheme achieves a strictly larger region for some , then it must be true for all .hence to prove ( ii ) , it is sufficient for us to show that it is true for .however , we first include an example for for building intuition and understanding of the type of scenarios where the cms scheme provides strict improvement. then we will prove the result for .we also note that obviously scenarios exit for which ( for example when is complete ) .finding the set of all such scenarios is an interesting problem in itself and is beyond the scope of this paper . to describe our example ,we require certain results pertaining to binary multiple descriptions and successive refinement of binary sources . in what follows , we state these results . *the zhang - berger example * : zhang and berger proved that , for the binary symmetric 2-descriptions md problem under hamming distortion measure , sending a common codeword in both the descriptions provides a strict improvement over the ec scheme .we briefly describe their result .note that the rate - distortion region has 5 dimensions denoted by ) .denote the rate - distortion region achievable by the ec scheme by and the corresponding region achievable by the zb scheme ( i.e. 
achieved by adding a common codeword among the two descriptions ) by .obviously , as we can always choose not to send any common codeword in the zb scheme .denote by the following cross section of : similarly , denote by , the corresponding cross section of . to show that , they considered a particular joint probability mass function ( pmf ) us denote the achievable region associated with this pmf by and the corresponding cross - section ( [ eq : ec_crosssection ] ) by .they showed that such that .we refer the reader to for a detailed derivation and the values of and .* successive refinement * : the problem of successive refinement was first proposed by equitz and cover in and has since then been studied extensively by information theorists .the problem is motivated by scalable coding , where the encoder generates two layers of information called the base layer and the enhancement layer .the base layer provides a coarse reconstruction of the source , while the enhancement layer is used to ` refine ' the reconstruction beyond the base layer .the objective is to encode the two layers such that the distortion at both the base and the enhancement layers are optimal .this setup is shown schematically in figure [ fig : successive - refinement - setup ] .observe that , the 2-layer successive refinement region is indeed a special case ( the cross - section ) of the 2-descriptions md setup where the distortion constraint on one of the individual descriptions is removed .the complete rate region for successive refinement was derived in where it was shown that the ec coding scheme achieves the complete rate region .an interesting followup question is that of ` _ _ successive refinability _ _ ' of sources .assume , then a source is said to be successively refinable under if , the rate point is achievable , where denotes shannon s rate distortion function .this condition implies that there is no loss in describing the source in two successive parts . an important point to noteis that , for a successively refinable source , when the encoder operates at , there is _ absolutely no redundancy _ between the two layers of information , i.e. , the _ two layers can not carry a common codeword_. we finally note that a binary symmetric source is successively refinable under hamming measure . * proof of ( ii ) : * : consider a 4-descriptions md problem for a binary symmetric source ( ) under hamming distortion measure .the rate - distortion region consists of 19 dimensions .we denote the region achievable by the vkg scheme by and that achievable using the cms scheme by .we now consider a particular cross - section of these regions where we apply constraints only on and .we remove the constraints on all other distortions , i.e. we set and to .equivalently , we can think of a 4 descriptions md problem with a particular channel failure pattern , wherein only one of the following sets of descriptions can reach the decoder reliably : as shown in figure [ fig : example - to - demonstrate ] .we denote the set of all achievable points for this setup using the vkg and the cms schemes by and respectively .note that , this equivalent model is used simply for analysis purposes , while we are actually interested in a cross section of the general binary symmetric 4-descriptions region .observe that , with respect to the first 2 descriptions , we have a simple 2-descriptions problem and with respect to the last 2 descriptions , we have a successive refinement problem . 
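for the binary symmetric source under hamming distortion, the rate-distortion function is r(d) = 1 - h_b(d) for 0 <= d <= 1/2, with h_b the binary entropy function, and successive refinability says that a base layer at rate r(d_1) followed by a refinement layer at rate r(d_2) - r(d_1) reaches distortion d_2 with no rate penalty. the short python sketch below simply tabulates these rates; the particular distortion targets are arbitrary.

```python
from math import log2

def h_b(p):
    """Binary entropy in bits, with h_b(0) = h_b(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def rate_distortion_bss(D):
    """R(D) = 1 - h_b(D) for the Bernoulli(1/2) source under Hamming distortion."""
    return 1.0 - h_b(min(max(D, 0.0), 0.5))

D1, D2 = 0.25, 0.05                           # base-layer and refined distortion targets
R_base = rate_distortion_bss(D1)              # base-layer rate
R_refine = rate_distortion_bss(D2) - R_base   # refinement-layer rate
print(f"R(D1) = {R_base:.4f} bits, refinement increment = {R_refine:.4f} bits")
print(f"sum = {R_base + R_refine:.4f} bits, which equals R(D2) = {rate_distortion_bss(D2):.4f}")
```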
extending the arguments of zhang and berger , we define the following infimum of : denote the corresponding infimum of by .recall that the vkg scheme forces all the descriptions to have a single common codeword .constraints and ensure that descriptions 3 and 4 carry completely complementary information , i.e. they _ can not _ carry a common codeword .is in fact redundant .just the constraint is sufficient to establish that descriptions 3 and 4 can not carry a common codeword . however , as a binary source is successively refinable , we can always achieve and hence the constraint gets applied implicitly once we apply .this implies that , the gains due to the cms scheme are not only restricted to successively refinable sources .in fact , the cms scheme can achieve points outside the vkg region for any source and distortion measure for which the zb scheme achieves points outside the ec scheme for the corresponding 2-descriptions setup . ] as vkg coding scheme forces the _ same common codeword _among all the 4 descriptions , it follows that: on the other hand , the cms scheme allows for distinct common codewords to be sent in each subset of the descriptions .hence , we can send a common codeword only among the two descriptions 1 and 2 while still maintaining .this is achieved by setting all the common random variables to except , which has joint pmf with .this allows us to achieve : this implies that and hence .this example clearly illustrates the freedom the cms scheme exhibits in controlling the redundancy across the messages . * * : in similar lines to the 4-descriptions case , we next consider a 3-descriptions md problem for a binary symmetric source under hamming distortion measure .let the achievable regions be denote by and respectively .we consider the cross - sections of the achievable regions where we apply constraints only on and as shown in figure [ fig:3d_eg ] .we denote these cross - sections by and respectively .now consider any point such that and , where for two sets and , . from the results of zhang and berger , if , descriptions 1 and 2 _ must _ carry a common codeword .let the rate of the common codeword be .vkg scheme forces this codeword to be sent as part of as well . as this common codewordis received as part of both descriptions 1 and 3 , it is redundant in to achieve .this implies that .as there exit points in the boundary of which satisfy , the cms scheme achieves points outside the vkg scheme .hence , we have shown that for a binary symmetric source under hamming distortion measure , the cms scheme achieves a strictly larger rate - distortion region than the vkg scheme for all .we recently proposed a new encoding scheme for the general multiple descriptions problem involving ` _ _ combinatorial message sharing _ _ ' ( cms ) which leads to a new achievable region subsuming the most well known region for this problem by venkataramani , kramer and goyal ( vkg ) for general sources and distortion measures . in this paper, we showed that there exists scenarios ( particularly for a binary symmetric source under hamming distortion measure ) for which , the new region is strictly larger than that achievable by the vkg scheme .as part of future work , we will investigate under what scenarios the cms scheme achieves the complete rd region .j. wang , j. chen , l. zhao , p. cuff , and h. permuter , a random variable substitution lemma with applications to multiple description coding , preprint .[ online ] .available : http://arxiv.org/abs/0909.3135 .k. viswanatha , e. 
akyol and k. rose, ``combinatorial message sharing for a refined multiple descriptions achievable region,'' to appear in the proceedings of the ieee international symposium on information theory (isit) 2011. submitted version available at: http://www.scl.ece.ucsb.edu/kumar/isit_md_sub.pdf | we recently proposed a new coding scheme for the l-channel multiple descriptions (md) problem for general sources and distortion measures involving `combinatorial message sharing' (cms), leading to a new achievable rate-distortion region. our objective in this paper is to establish that this coding scheme strictly subsumes the most popular region for this problem, due to venkataramani, kramer and goyal (vkg). in particular, we show that for a binary symmetric source under hamming distortion measure, the cms scheme provides a strictly larger region for all l. the principle of the cms coding scheme is to include a common message in every subset of the descriptions, unlike the vkg scheme which sends a single common message in all the descriptions. in essence, we show that allowing for a common codeword in every subset of descriptions provides better freedom in coordinating the messages, which can be exploited constructively to achieve points outside the vkg region. keywords: multiple descriptions coding, source coding, rate distortion theory |
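to get a rough feel for how much extra bookkeeping the "common message in every subset" principle adds, one can simply count codebooks. the sketch below uses the reading that vkg has one shared codebook plus one (base or refinement) codebook per non-empty subset of descriptions, while cms additionally has a shared codebook for every subset of size at least two; these counting conventions are assumptions made only for illustration and ignore rate allocation entirely.

```python
from itertools import combinations

def nonempty_subsets(L):
    items = range(1, L + 1)
    for k in range(1, L + 1):
        yield from combinations(items, k)

def codebook_counts(L):
    """Rough codebook counts (the bookkeeping conventions here are assumptions).

    VKG: one shared codebook plus one codebook per non-empty subset of the
    descriptions (base layers for singletons, refinement layers otherwise).
    CMS: the same, but with a shared codebook for every subset of size >= 2.
    """
    base_and_refinement = sum(1 for _ in nonempty_subsets(L))            # 2^L - 1
    shared_vkg = 1
    shared_cms = sum(1 for s in nonempty_subsets(L) if len(s) >= 2)      # 2^L - L - 1
    return shared_vkg + base_and_refinement, shared_cms + base_and_refinement

for L in range(2, 7):
    vkg, cms = codebook_counts(L)
    print(f"L = {L}: VKG ~ {vkg:3d} codebooks, CMS ~ {cms:3d} codebooks")
```

for l = 2 the two counts coincide, consistent with the remark that both schemes reduce to the zb scheme in the 2-descriptions case; for larger l the cms count approaches twice the vkg count.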
the level of reported confidence intervals are most often 95% , with equal probability of missing the target at both sides . sometimes other levels are used , but rarely are several intervals at their different levels reported in applied work . instead of only reporting one confidence interval we suggest to report a family of nested confidence intervals for parameters of primary interest .the family is indexed by the confidence level for and is conveniently represented by what is called a _ confidence curve _ , a quantity introduced by to give a complete picture of the estimation uncertainty . as an example , take for known .it yields the curve for the cumulative distribution function of a .this is a confidence curve since , for all , is the respective confidence interval of level . in the examplethe confidence curve has its minimum at which is a point estimate of .the normal confidence curve is tail - symmetric , i.e. the probability of missing the parameter to the left equals that to the right and is at level .a tail - symmetric confidence curve represents uniquely a confidence distribution , that is confidence curves that describe upper confidence limits .confidence distribution is a term coined by and formally defined in . for scalar parameters the fiducial distributions developed by are confidence distributions . saw that the fiducial distribution leads to confidence intervals . sees confidence distributions as `` simple and interpretable summaries of what can reasonably be learned from the data ( and an assumed model ) '' .confidence distributions are reviewed by , and more broadly and with more emphasis on confidence curves by . in location models and other simple modelsthe confidence distribution is obtained from pivots , e.g. the normal pivot in the above example .a canonical pivot is where is the distribution function of the maximum likelihood estimator , assumed to be absolutely continuous with respect to the lebesgue measure and non - increasing in .see section [ section:2 ] for precise definitions and notation .the confidence distribution is a canonical pivot in the sense of being uniformly distributed on the unit interval when is distributed according to .when is a sufficient statistic with monotone likelihood ratio , is also optimal in the neyman - pearson sense , that is it describes smaller confidence intervals at a given level when compared to any other confidence distribution for the parameter ( * ? ? ?* section 5.4 ) .an equal tailedconfidence curve is readily obtained from by . in this paperwe shall be concerned with confidence curves obtained from the log - likelihood ratio , and we shall study the properties of median bias correction .median bias correction of a confidence curve , proposed by , is a method to make the resulting confidence curve approximately tail symmetric . in the normal example andthe confidence curve mentioned above is also given by where is the cumulative chi - square distribution function with one degree of freedom .this confidence curve is tail - symmetric , as mentioned , and the confidence interval of level is the single point which thus has median and is said to be median unbiased .in general hits zero at the maximum likelihood estimator , which might not be median unbiased .let have median .the median bias corrected confidence curve is the confidence curve of the parameter .the idea is to probability transform the bias corrected log likelihood ratio rather than . 
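as a small computational aside, the normal-location example from the opening of this section is easy to reproduce: the sketch below (the data, the known sigma and the sample size are arbitrary choices) evaluates the pivot-based confidence distribution, the corresponding tail-symmetric confidence curve, and reads off equal-tailed intervals as its level sets.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma, n, mu_true = 1.0, 25, 2.0            # sigma assumed known
x = rng.normal(mu_true, sigma, size=n)
xbar = x.mean()

def confidence_distribution(mu):
    """C(mu) = Phi(sqrt(n) * (mu - xbar) / sigma): describes upper confidence limits."""
    return norm.cdf(np.sqrt(n) * (mu - xbar) / sigma)

def confidence_curve(mu):
    """Tail-symmetric confidence curve cc(mu) = |1 - 2 C(mu)|, minimised at xbar."""
    return np.abs(1.0 - 2.0 * confidence_distribution(mu))

for level in (0.50, 0.90, 0.95):
    half_width = norm.ppf(0.5 + level / 2.0) * sigma / np.sqrt(n)
    lo, hi = xbar - half_width, xbar + half_width      # the level set {cc <= level}
    print(f"{level:.0%} interval: ({lo:.3f}, {hi:.3f}); "
          f"cc at its endpoints = {confidence_curve(lo):.3f}")
```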
with denoting the sampling distribution of when the data is distributed according to , the bias corrected confidence curve is . since is median unbiased , the level set at is typically the single point and , by continuity , is close to be equal tailedat low levels .we undertake a theoretical study of the asymptotic properties of by showing that is third - order tail - symmetric for in the normal deviation range in two important classes of parametric models with parameter dimension one .first , we consider parametric models that belong to the efron s normal transformation family .then , we extend the result to regular one dimensional exponential families , where we also discuss the relation between median bias corrected and modified directed likelihood of , thus providing an alternative approximation to the latter . since median bias correction works so well in these cases , it is reasonable to expect the method to work well quite generally .however , when a canonical confidence distribution is available , as in the exponential family models , we do of course not advocate to use median bias correction rather than using the canonical confidence distribution .the rest of the paper is organized as follows . in section [ section:2 ], we recast confidence estimation in terms of confidence curves and introduce the notation we use in the sequel .we also define the confidence curve based on inverting the median bias corrected version of the log - likelihood ratio . in section [ section:3 ] and [ section:4 ], we investigate its asymptotic properties in terms of tail symmetry in the efron s normal transformation family and in one dimensional exponential families , respectively .finally , in section [ section:5 ] some concluding remarks and lines of future research are presented , together with an example that provides a preliminary illustration of the use of median bias correction in the presence of nuisance parameters . some proofs and a technical lemmaare deferred to the appendix .let be a continuous random sample with density depending on a real parameter and let indicate probabilities calculated under .the log - likelihood is , and the log - likelihood ratio is , where is the maximum likelihood estimate .we drop the second argument in sample - dependent functions like and whenever it is clear from the context whether we refer to a random quantity or to its observed value . unless otherwise specified , all asymptotic approximations are for and stochastic term refers to convergence in probability with respect to .we assume that the model is sufficiently regular for the validity of first order asymptotic theory , cfr .* chapter 3 ) .in particular , converges in distribution to a chi - squared random variable , hence , by contouring with respect to this distribution we obtain intervals of values given by the level sets for the curve where is the distribution function of the chi - squared distribution with 1 degree of freedom .this curve depends on the sample and has its minimum at .however its level sets are not in general exact confidence intervals since the chi - squared approximation for the distribution of is valid only for large and the coverage probabilities equal the nominal levels only in the limit . 
as a consequence , is not uniformly distributed on the unit interval under , a property we require for a _regular _ confidence curve as spelled in the following definition .[ definition:1 ] a function is a regular confidence curve when , the level sets are finite intervals for all , and under .confidence curves might be defined for parameters of higher dimension and also for irregular curves that even might have more than one local minimum or might have infinite level sets for , see ( * ? ? ?* section 4.6 ) .note that , under definition [ definition:1 ] , is an exact confidence region of level since . among confidence curves , of special importanceare confidence distributions , which are confidence curves that describe upper confidence limits .the definition is as follows .[ definition:1b ] a function is a confidence distribution when is a cumulative distribution function in for all and under .keep in mind that the realized confidence curve and confidence distribution depend on the data , and prior to observation they are random variables ( with distribution depending on the parameter value from which the data are generated ) . to keep the notation simple, we drop the second argument in and .moreover , we will confine ourselves to regular confidence curves with only one local minimum . in this setting be transformed into a distribution via so that the left and right endpoints of the interval , are given by respectively .we refer to as the median confidence estimator for . by construction , and .we then say that is tail symmetric when the interval is equal tailed , that is this is equivalent to defining a confidence distribution according to definition [ definition:1b ] .[ definition:2 ] a confidence curve is tail symmetric if in is a confidence distribution according to definition [ definition:1b ] .the relation obviously works in the other direction : given a confidence distribution , defines a tail - symmetric confidence curve , see .note that the median confidence estimator of a tail symmetric is median - unbiased , i.e. . see ( * ? ? ?* section 5.6 ) .the relation between median - unbiased estimators and equal tailedintervals have been noted by in connection with the maximum likelihood estimator .we now focus on confidence distributions derived from the likelihood .it is convenient to set as the exact confidence distribution the one obtained from the sampling distribution of the maximum likelihood estimator , namely where we assume that , the distribution function of , is continuous and non - increasing in . in order to have being a proper cumulative distribution function, it is also required that and , where and are the infimum and supremum of the parameter space , respectively .the -quantile is denoted by .in particular , corresponds to the median unbiased estimator of .the exact distribution is generally unknown and the asymptotic approximation of the confidence limit has been object of an extensive research which goes beyond first order accuracy .see for a review .third order approximations to , and thus to , can be obtained from the modified directed likelihood of , see section 4.2 for a discussion .we will instead look for a route to such good approximations by transforming the scale at which the log - likelihood ratio is presented . 
to this aim ,let be the sampling distribution function of under , and define according to , and are the endpoints of a confidence interval of level .it is clear that , in general , is not tail symmetric according to definition [ definition:2 ] , in particular when is not median unbiased .more generally , the distribution estimator is not uniformly distributed on the unit interval under . according to first - order asymptotics, is tail symmetric up to the first order of approximation , that is consequently and and . in order to improve on ( [ eq : first - asymp2 ] ) , we consider the median bias correction to .let be the median of as function of , that is by assumption , is continuously increasing in and , as a simple calculation reveals .the _ median bias corrected log - likelihood ratio _ is defined as by construction , attains its minimum at , the median unbiased estimator of . since both the likelihood function and the median function are invariant to monotone parameter transformations , invariance is preserved for .see for a different type of likelihood correction , aimed at reducing the bias of the maximum likelihood estimator .the median bias corrected confidence curve is defined as where stands for the sampling distribution of under . according to, it yields the distribution estimator for illustration , we consider confidence distributions for the variance parameter in the normal model . for ,the log - likelihood ratio is .based on , one finds and ( in obvious notation ) . using the pivotal distribution of , and be computed via monte carlo .based on a simulated sample of size with , the left panel of figure [ fig3 ] displays according to while the right panel reports , and , according to , and , respectively . in the normal model for observations generated according to .left panel : together with some of its confidence intervals .right panel : ( dashed line ) , ( solid line ) , and nearly on top ( dotted line ) . , and are based on 50000 monte carlo simulations . __ ] note that and are on top of each other and are almost indistinguishable .hence , the median correction in , by making the median confidence estimator of coincide with , shifts the whole curve towards , thus inducing nearly exact tail symmetry .we return to this example in section [ section:4 ] where we give a theoretical justification to the fact and coincide to the third order of approximation in theorem [ theorem:2 ] .we conclude this section by noting that , while can be interpreted as the log - likelihood ratio for the parameter , that is , does not correspond to the confidence curve in the -parametrization , that is where stands now for the sampling distribution of the log - likelihood ratio in terms of the parameter . as an example , consider the exponential model , . by standard calculationone finds that , and .hence , for , we get while , where and . on the other hand , shares with the property of invariance with respect to monotone transformation of the parameter : if for invertible , then , it is easy to see that bias corrected confidence curve in the , say , corresponds to .this can be easily verified in the exponential model above by taking , e.g. , so that represents the mean parameter . 
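the normal-variance illustration can be reproduced with a short monte carlo. the sketch below assumes zero-mean normal data so that n*sigma2_hat/sigma2 is chi-squared with n degrees of freedom, takes the median function m(sigma2) of the mle from that pivot, and inverts the log-likelihood ratio evaluated at m(sigma2) against its simulated sampling distribution. this follows one reading of the construction described above (sample size, true variance and number of replicates are arbitrary choices) and is not the paper's own code.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n, sigma2_true, n_mc = 20, 1.0, 50_000

x = rng.normal(0.0, np.sqrt(sigma2_true), size=n)   # N(0, sigma^2) data, mean known
s2_hat = float(np.mean(x ** 2))                     # MLE of sigma^2

def llr(sigma2, s2):
    """2*(l(s2) - l(sigma2)) for N(0, sigma^2) data with MLE s2."""
    return n * (np.log(sigma2 / s2) + s2 / sigma2 - 1.0)

def median_mle(sigma2):
    """m(sigma^2): median of the MLE, from n * s2_hat / sigma^2 ~ chi2(n)."""
    return sigma2 * chi2.median(n) / n

def cc_median_corrected(sigma2):
    """Monte Carlo median-corrected confidence curve evaluated at sigma2."""
    s2_sim = sigma2 * chi2.rvs(n, size=n_mc, random_state=rng) / n   # MLEs under sigma2
    w_obs = llr(median_mle(sigma2), s2_hat)
    return float(np.mean(llr(median_mle(sigma2), s2_sim) <= w_obs))

for s in (0.6, 1.0, 1.6):
    print(f"sigma^2 = {s:.1f}: median-corrected cc = {cc_median_corrected(s):.3f}")
```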
in the sequel , for ease of notation, we avoid superscripts as in and whenever the parametrization the likelihood is referring to will be clear from the context .in this section we establish third order tail symmetry of the bias corrected confidence curve when is a sufficient statistic and belongs to the normal transformation family of .this family of distributions was used by to introduce bias and acceleration corrected bootstrapped confidence intervals that achieve second order accuracy .the idea is that standard intervals are based on assuming that the normal approximation of is exact , with a fixed constant and , hence , convergence to normality can be improved by considering a monotone transformation of and which is exactly normalizing and variance stabilizing .second order accuracy was later extended to regular statistical models such as the exponential family , see .we follow a similar path here , as we first prove , in theorem [ theorem:1 ] , tail symmetry in the normal transformation family as this case provides a simple illustration of the generalized inverse mapping argument reported in lemma [ lemma:1 ] of the appendix .theorem [ theorem:2 ] of section [ section:4 ] addresses tail symmetry in the exponential family , where an additional cornish fisher expansion of the distribution of the maximum likelihood estimator is needed .theorem [ theorem:2 ] is indeed a more general result than theorem [ theorem:1 ] since , by pitman - koopman - darmois theorem , cfr .* theorem 6.18 ) , if the data are independent and identically distributed and the dimension of the sufficient statistic does not depend on , as we are assuming here , then the model is an exponential family . let be a sufficient estimator for , not necessarily maximizing the likelihood , but behaving asymptotically like the maximum likelihood estimator in terms of order of magnitude of its bias , standard deviation , skewness , and kurtosis : where , and are functions of and ( the latter suppressed in the notation ) bounded in . see equations ( 5.1)(5.3 ) in .next , suppose there exists a monotone increasing transformation and constants ( bias constant ) and ( acceleration constant ) such that and satisfy where when and when .model has standard deviation linear in on the transformed scale .it provides a pivot with accompanying confidence distribution .the latter is directly transformed back to a confidence distribution for , that is .theorem [ theorem:1 ] states that as well as , are third order tail - symmetric according to definition [ definition:2 ] , an improvement up to in the asymptotic order displayed in .the proof relies on the asymptotic inversion of convex functions reported in lemma [ lemma:1 ] in the appendix .[ theorem:1 ] let be a sufficient estimator of based on a sample of size satisfying , and assume there exists a monotone increasing function such that holds .then , for and defined in and , respectively , since a confidence curve for translates into one for for the invertible transformation , it is sufficient to prove in the transformed normal model . under , the normalizing transformation is locally linear in its argument with a scale factor of order . in particular , from , the normal deviation range in corresponds to according to ( * ? ? ?* theorem 2 ) , ] is not maximized at , unless , rather at \ ] ] as a simple calculation reveals .one finds that and , consequently , .actually , belongs to the normal transformation family since it can be written as for , see ( * ? ? 
?* section 11 ) , with distribution and median function .note that is increasing in when , which we assume without loss of generality since it certainly is for large .since is a sufficient statistic , the log - likelihood ratio for is it is easy to check that is convex in both arguments , and so is its bias corrected version .let be defined according to .we are interested in expressing in terms of tail probabilities associated to for comparison with the confidence distribution . to this aim ,let be implicitly defined in function of and by .then , for when , for when .we only consider the first case , where the equality of interest is hence , for , the normal deviation range in corresponds to for in . as for the right hand side of , fromit follows that , when , , so that is implied by for in . in order to establish, we derive an asymptotic expansion of locally at by an application of the generalized inverse mapping argument of lemma [ lemma:1 ] .let \big) ] . also , let be implicitly defined by so that /[1+ab(\phi)] ] and for , so that the hypotheses of lemma [ lemma:1 ] are satisfied .hence , for , that is for /[1+ab(\phi)]=o_p(1) ] and both and are , we get for /(1+a\phi)=o_p(1) ] , so that ] , and can be calculated by monte - carlo . in the left panel of figure [ normal_transformation ]we plot and for and . even for a non -negligible acceleration ( later we argue that , so it roughly corresponds to ) , the median corrected confidence curve nearly exactly recovers , through , the confidence distribution .the right panel shows that the difference between the two confidence distributions is very small , approximately of order , suggesting that the order of magnitude in might be conservative . , .left panel : confidence distributions ( solid line ) , and nearly on top ( dotted line ) .right panel : difference . is based on 100000 monte carlo simulations . _ ]_ _ 4.1 tail symmetry.__ in this section we establish third order tail symmetry for the mean value parameter of regular one - parameter exponential families . following ( * ? ? ?* section 5 ) , let ] . upon defining and ,the log - likelihood for is .since the cumulant generating function for is the -th order cumulant of is , the -th order derivative of .we set , so that and . consequently , is the standard error of , where we use the subscript in to highlight the dependence on .note that since .the following result can be stated .[ theorem:2 ] let and be the maximum likelihood estimator and the log - likelihood ratio for the mean value parameter in a continuous one - dimensional exponential model based on a random sample of size .also , let be the standard error of and and be defined in and , respectively .then , as , the proof is deferred to the appendix and we only provide here in this paragraph a sketch .reasoning as in the proof of theorem [ theorem:1 ] , take so that /2 ] .exact inference on can be based on the conditional distribution of given , which depends on only through .see , and who find the conditional confidence distribution to be uniformly most powerful .the definition of and are to be interpreted conditionally on as well .we expect the median bias corrected confidence curve based on the profile likelihood to be tail - symmetric to the third order , and to the second order to be chi - square distributed .the investigation of the relation of the bias corrected profile likelihood with other versions of adjusted profile likelihoods that have been proposed in the literature would also be of interest . 
outside the exponential family ,the evaluation of sample space derivatives of the likelihood requires the identification of an ancillary statistic .moreover , the distribution of the maximum likelihood estimator has to be evaluated conditionally upon this statistic .the asymptotic approximations used in theorem [ theorem:2 ] can be adapted to this setting , a natural extension being for transformation families .next is a preliminary illustration of the use of median bias correction to confidence curves in a multidimensional statistical model .the model in the example below is not in the exponential family , nor an ancillary statistic is available , and we there use brute force to handle the nuisance parameter . __ example.__ we consider the `` bolt from heaven '' data example from section 7.4 in .data consists of winning times in the fastest -m races from 2000 to 2007 , that is races that clocked at seconds or better . translate these races results as in order to apply extreme value statistics . specifically , the data is modeled using the generalized pareto distribution ( gpd ) which has density for .sections 3.4 and 6.5 in .interest is in estimating for and .it takes on the interpretation of the probability , as seen at the start of 2008 , that in the fastest races of 2008 one should experience a race of or better , where is the world record time scored by usain bolt on 31 may 2008 .see for details .the authors compute a confidence curve for the parameter by profiling the log - likelihood , and by inverting the profile log - likelihood ratio with respect to the chi - squared distribution after bartlett correction , where ( found through simulations ) and is the chi - squared distribution function with degree of freedom . by construction, points at according to maximum likelihood estimates and ( with approximate standard errors in parentheses ) and has 90% confidence interval ] .let according to .it is easy to check that the first three sample derivatives of are ] .based on , we have /\sigma_\theta = ( \theta^{**}-\theta)/\sigma_\theta + \rho_3/6n^{1/2}+o(n^{-3/2}) ] for .hence , +o(n^{-3/2})\ ] ] for .next , use the edgeworth expansion for up to the first term , i.e. and a taylor expansion of at for to get substitution into leads to an asymptotic expansion of which corresponds to .hence , follows and the proof is complete .theorem [ theorem:2 ] in order to prove , we proceed by deriving two asymptotic expansions for and and by showing that they coincide up to the required order . as for , we resort to equation ( 2.4)(2.6 ) in . after some algebra and further expansion , so that where we have also used for small . as for ,a taylor expansion around gives with denoting the remainder .in the one - parameter exponential family , borrowing the notation from the proof of theorem [ theorem:2 ] , we have since .moreover , in the proof of theorem [ theorem:2 ] implies that since , and .inserting and into we obtain the same expansion in provided that the remainder is .this can be shown by using , , and in the normal deviation range , together with .hence follows .[ lemma:1 ] let be a sequence of infinitely differentiable convex functions with minimum at and , and let be defined by . for , assume that , as , for any .then , admits asymptotic expansion where , and we omit the subscript for ease of notation .taylor expansion of at gives .substitute into and equate coefficients of successive order to obtain where the s are positive integers and we set for notational convenience . 
rearranging terms , the first equations are a similar expression for can be given by means of multinomial coefficients . now substitute back , and solve for to get , and where the order of asymptotics of and are determined by the hypothesis .an argument by induction leads to .the authors are grateful to two reviewers for comments that have helped to improve the paper substantially .special thanks are also due to igor prnster and to mattia ciollaro for comments on an earlier version of this work .p. de blasi was supported by the european research council ( erc ) through stg `` n - bnp '' 306406 .barndorff - nielsen , o.e .( 1983 ) . on a formula for the distribution of the maximum likelihood estimator ._ biometrika _ * 70 * , 343365 .barndorff - nielsen , o.e .inference on full and partial parameters based on the standardized signed log likelihood ratio ._ biometrika _ * 73 * , 307322 .barndorff - nielsen , o.e .approximate interval probabilities . _ j. r. stat ._ b * 52 * , 485496 .barndorff - nielsen , o.e . andcox , d.r .( 1989 ) . _ asymptotic techniques for use in statistics_. chapman & hall , london .barndorff - nielsen , o.e . andcox , d.r ._ inference and asymptotics_. chapman & hall , london .birnbaum , a. ( 1961 ) .confidence curves : an omnibus technique for estimation and testing statistical hypothesis ._ j. amer .statist ._ * 56 * , 246249 .cox , d. r. ( 1958 ) . some problems with statistical inference ._ the annals of mathematical statistics _ , * 29 * , 357372 .efron , b. ( 1982 ) .transformation theory : how normal is a family of distributions ? _ ann .statist . _* 10 * , 323339 .efron , b. ( 1987 ) .better bootstrap confidence intervals ._ j. amer .statist .assoc . _ * 82 * , 171185 .embrechts , p. , klppelberg , c. and mikosch t. ( 1997 ) . _ modelling extremal events for insurance and finance_. springer - verlag berlin heidelberg .firth , d. ( 1993 ) .bias reduction of maximum likelihood estimates ._ biometrika _ * 80 * , 2738 .fisher , r.a .inverse probability .cambridge philos .* 26 * , 52835 .lehmann , e.l ._ testing statistical hypothesis , 2ed_. springer - verlag , new york .lehmann , e.l . andcasella , g. ( 1998 ) ._ theory of point estimation , 2ed_. springer - verlag , new york .neyman , j. ( 1934 ) .on the two different aspects of the representative method : the method of stratified sampling and the method of purposive selection _ j. r. stat ._ a * 97 * , 558625 .schweder , t. ( 2007 ) .confidence nets for curves . in _ advances in statistical modeling and inference , essay in honour of kjell a. doksum _ , v. nair ed .world scientific , 593609 .schweder , t. and hjort , n.l .confidence and likelihood ._ scand .j. statist . _* 29 * , 309322 .schweder , t. and hjort , n. l. ( 2016 ) ._ confidence , likelihood , probability : statistical inference with confidence distributions_. cambridge university press .skovgaard , i.m .( 1989 ) . a review of higher order likelihood inference .inst . _ * 53 * , 331351 .xie , m. and singh , k. ( 2013 ) .confidence distribution , the frequentist distribution estimator of a parameter a review ._ international statistical review _ * 81 * , 3 - 39 . | by the modified directed likelihood , higher order accurate confidence limits for a scalar parameter are obtained from the likelihood . they are conveniently described in terms of a confidence distribution , that is a sample dependent distribution function on the parameter space . 
in this paper we explore a different route to accurate confidence limits via tail-symmetric confidence curves, that is, curves that describe equal-tailed intervals at any level. instead of modifying the directed likelihood, we consider inversion of the log-likelihood ratio when evaluated at the median of the maximum likelihood estimator. this is shown to provide equal-tailed intervals, and thus an exact confidence distribution, to the third order of approximation in regular one-dimensional models. median bias correction also provides an alternative approximation to the modified directed likelihood which holds up to the second order in exponential families. *keywords:* asymptotic expansion; confidence curve; confidence distribution; exponential family; modified directed likelihood; normal transformation family. |
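as a self-contained numerical companion to the comparison with the modified directed likelihood, the sketch below uses the exponential model with rate theta, where the exact confidence distribution is available in closed form because the sum of the observations is gamma distributed, and compares it with the phi(r*)-based approximation. the form of the wald-type quantity u used here is the one commonly quoted for a canonical one-parameter exponential family and should be read as an assumption of this sketch rather than a statement from the paper.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(7)
n, theta_true = 15, 2.0
x = rng.exponential(scale=1.0 / theta_true, size=n)
S = float(x.sum())
theta_hat = n / S                                  # MLE of the rate theta

def cd_exact(theta):
    """Exact CD from the sampling distribution of the MLE: sum(X_i) ~ Gamma(n, 1/theta)."""
    return gamma.cdf(S * theta, a=n)

def cd_rstar(theta):
    """Approximate CD 1 - Phi(r*), with r* = r + log(u/r) / r.

    u = (theta_hat - theta) * j(theta_hat)^(1/2) is the form commonly quoted for
    a canonical one-parameter exponential family (an assumption of this sketch).
    """
    r = np.sign(theta_hat - theta) * np.sqrt(
        2.0 * n * (np.log(theta_hat / theta) + theta / theta_hat - 1.0))
    u = np.sqrt(n) * (theta_hat - theta) / theta_hat
    return norm.sf(r + np.log(u / r) / r)

for theta in theta_hat * np.array([0.5, 0.8, 1.25, 2.0]):
    print(f"theta = {theta:.3f}: exact CD = {cd_exact(theta):.4f}, "
          f"r*-based CD = {cd_rstar(theta):.4f}")
```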
in population dynamics and population genetics a prominent role is played by diffusion processes on or ^g ] if the left - hand side of is strictly smaller than one and an upper bound for the growth rate of ] if or of the form if .the infinitesimal mean and the infinitesimal variance are locally lipschitz continuous in and satisfy and if , then .the function is strictly positive on and the function is globally upward lipschitz continuous , that is , whenever where is a finite constant .furthermore satisfies the growth condition for all where is a finite constant .note that zero is a trap for , that is , implies for all .if we need the solution of to be well - defined , then we additionally assume the migration matrix to be substochastic .[ a : migration ] the set is ( at most ) countable and the matrix is nonnegative and substochastic , i.e. , and for all .note that assumption [ a : a1 ] together with assumption [ a : migration ] guarantees existence and uniqueness of a solution of with values in .this follows from proposition 2.1 and inequality ( 48 ) of by letting for and using monotone convergence .mass emigrates from each island at rate one and colonizes new islands .a new population should evolve as the process and should start from a single individual which has mass zero due to the diffusion approximation .thus we need the law of excursions of from the trap zero .for this , define the set of excursions from zero by , \,\chi_t=0{\ensuremath{\;\;\forall\;}}t\in(-\infty,0]\cup[t_0,\infty)\bigr\}\ ] ] where is the first hitting time of .the set is furnished with locally uniform convergence . for existence of the excursion measure and in order to apply the results of , we need to assume additional properties of and of . for the motivation of these assumptions , we refer the reader to .assume for some . then the scale function defined through is well - defined .[ a : hutzenthaler2009ejp ] the functions and satisfy for some . under assumption [ a : hutzenthaler2009ejp ] , the process hits zero in finite time almost surely and the expected total emigration intensity of the virgin island model is finite , see lemma 9.5 and lemma 9.6 in . moreover the scale function is well - defined and satisfies . a generic example which satisfies assumption [ a : hutzenthaler2009ejp ] is , where , and . assumption [ a : hutzenthaler2009ejp ] is not met by . assuming [ a : a1 ] and [ a : hutzenthaler2009ejp ] , we now define the excursion measure as the law of started in and rescaled with as .more formally , it is stated in pitman and yor ( 1982 ) ( see for a proof ) that there exists a unique -finite measure on such that }}}=\int f(\chi)q(d\chi)\ ] ] for every bounded continuous for which there exists a such that whenever . the reader might want to think of as describing the evolution of a population founded by a single individual . 
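the generic one-island example mentioned above can be visualised by simulation: a logistic feller diffusion dy = (a*y - c*y^2) dt + sqrt(sigma^2*y) db, for which zero is a trap, is simulated below with an euler-maruyama scheme until it hits zero. the coefficient values, the time step and the clipping at zero are illustrative assumptions, and a path started from a small positive mass is only a crude stand-in for an excursion under the measure q.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_island(y0, a=1.0, c=0.5, sigma2=1.0, dt=1e-3, t_max=50.0):
    """Euler-Maruyama for dY = (a*Y - c*Y**2) dt + sqrt(sigma2*Y) dB, stopped at 0."""
    t, y = 0.0, y0
    path = [(t, y)]
    while t < t_max and y > 0.0:
        drift = a * y - c * y ** 2
        noise = np.sqrt(max(sigma2 * y, 0.0) * dt) * rng.standard_normal()
        y = max(y + drift * dt + noise, 0.0)      # zero is a trap; clip the overshoot
        t += dt
        path.append((t, y))
    return np.array(path)

path = simulate_island(y0=0.1)
t_end, y_end = path[-1]
msg = (f"hit zero at time {t_end:.2f}" if y_end == 0.0
       else f"still positive at t_max = {t_end:.1f}")
print(msg)
```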
in the special case , with and , the process is feller s branching diffusion whose law is infinitely divisible .then the excursion measure coincides with the canonical measure .next we introduce the state space of the virgin island model .the construction of the virgin island model is generation by generation .as -th generation , we denote the set of all islands which are non - empty at time zero or which are colonized by individuals immigrating into the system .all islands being colonized by emigrants from the -th generation are denoted as islands of the first generation .the second generation is colonized from the first generation and so on .for this generation - wise construction , we use a method to index islands which keeps track of which island has been colonized from which island .an island is identified with a triple which indicates its mother island , the time of its colonization and the population size on the island as a function of time .let be the set of all possible islands of the -th generation .for each , define which we will refer to as the set of all possible islands of the -th generation .this notation should be read as follows .island has been colonized from island at time and carries total mass at time .denote by the union of all over .the virgin island model will have values in the set of subset of .having introduced the excursion measure , we now construct the virgin island model with constant immigration rate and started in .let be independent solutions of such that almost surely .moreover let be a poisson point process on with intensity measure } } } = { { \ensuremath{\theta}}}\,dt\otimes q(d\psi).\ ] ] the elements of the poisson point process are the islands whose founders immigrated into the system . next we construct all islands which are colonized from a given mother island .let be a set of independent poisson point processes on with intensity measure } } } = \chi_{t - s}\,dt\otimes q(d\psi)\quad \iota\in{\ensuremath{{{\ensuremath{\mathcal i}}}}}\cup\{\emptyset\}.\ ] ] the elements of the poisson point process are the islands which descend from the mother island .all ingredients are assumed to be independent .now the -th generation is the -st generation , , is the ( random ) set of all islands which have been colonized from islands of the -th generation and the virgin island model is the union of all generations we call the virgin island model with immigration rate and initial configuration .we begin with convergence of the -island process . in this convergence, we allow the drift function and the diffusion function to depend on in order to include the case of weak immigration .for example , one could be interested in an -island model with logistic branching and weak immigration at rate on each island . in that case , one would set and for .the equation of the -island process now reads as }}}dt + \sqrt { \sigma_n^2{{{\bigl(x_t^n(i)\bigr)}}}}\,db_t(i ) \end { split}\ ] ] where and where , , are independent standard brownian motions .the idea to include weak immigration into a convergence result is due to dawson and greven ( 2010 ) who independently obtain convergence of an -island model using different methods .define for . for the -process to converge ,we need assumptions on and on the initial distribution .[ a : a1_n_linear ] define .the functions are locally lipschitz continuous on .the sequence converges pointwise to as .in addition , as and for all .the diffusion functions and are linear , that is , and for some constants .furthermore converges to as . 
assumptions [ a : a1 ] and [ a : hutzenthaler2009ejp ] hold for .moreover is uniformly upward lipschitz continuous in zero , that is , for all , and some constant .here is an example . if and , then assumption [ a : a1_n_linear ] is satisfied if and .[ a : initial ] the random variables and are defined on the same probability space for each .there exists a random permutation of for each such that }}}=0.\ ] ] furthermore the total mass of has finite expectation .if is a summable sequence , then assumption [ a : initial ] is satisfied for and , .next we introduce the topology for the weak convergence of the -island process .what will be relevant here is not any specific numbering of the islands but the statistics ( or `` spectrum '' ) of their population sizes , described by the sum of dirac measures at each time point , that is , where is the dirac measure on .the state space of the measure - valued process is the set of finite measures on .we equip the state space with the vague topology on .now we formulate the convergence of the -process defined in .[ thm : convergence ] suppose that and satisfy assumption [ a : a1_n_linear ] and that the initial configurations and satisfy assumption [ a : initial ] .then , for every , we have that in distribution where is the virgin island model with immigration rate and initial configuration . the proof is deferred to section [ sec : convergence_to_vim ] .[ r : convergence ] for readability we rewrite the convergence in terms of test functions .the weak convergence is equivalent to } } } = { { \ensuremath{\mathbbm{e}}}}{{{\biggl [ f{{{\biggl(\bigl(\sum_{(\iota , s,\eta)\in{{\ensuremath{\mathcal v } } } } f{{{\bigl(\eta_{t - s}\bigr)}}}\bigr)_{t\leq t}\biggr)}}}\biggr]}}}.\ ] ] for every bounded continuous function on ,{{\ensuremath{\mathbbm{r}}}}\bigr)}}} ] satisfying the lipschitz condition ,{{\ensuremath{\mathbbm{r}}}}\bigr)}}}\ ] ] for some .in addition let be a continuous function satisfying for all . then following the arguments in the proof of lemma [ l : vanishing_immigration_weak_process ] below, one can show that holds with and replaced by and , respectively .the assumptions of theorem [ thm : convergence ] are satisfied for branching diffusions with local population regulation .a prominent example is the -island model with logistic drift and with for and some constants .more generally , theorem [ thm : convergence ] can be applied if and for where is a concave function with .we believe that theorem [ thm : convergence ] also holds for non - linear infinitesimal variances such as in case of the wright - fisher diffusion .our proof requires linearity only for one argument which is the step from equation to equation . in case of logistic branching ,we obtain a noteworthy duality of the total mass process of the virgin island model with the mean field model defined in . 
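The generation-by-generation construction of the virgin island model just described can be mimicked numerically. The sketch below is my own discretization and not the paper's algorithm: the sigma-finite excursion measure Q is replaced by (1/eps) times the law of the diffusion started at eps, so a mother island with current mass chi_t founds daughter islands at Poisson rate chi_t/eps, each daughter starting from mass eps; the logistic drift and linear variance are again assumptions made for concreteness, and the island cap merely keeps the sketch cheap:

# Illustrative sketch of the generation-by-generation construction (my own
# discretization, not the paper's algorithm).  Q is approximated by
# (1/eps) * Law(diffusion started at eps); a mother island with mass chi_t founds
# daughters at Poisson rate chi_t / eps, each daughter starting at mass eps.
import numpy as np

rng = np.random.default_rng(1)
A, B, C = 1.0, 1.0, 1.0
DT, T, EPS = 0.01, 10.0, 0.05
STEPS = int(T / DT)
MAX_ISLANDS = 4000                           # cap to keep the sketch cheap

def island_path(y0, start_idx):
    """Euler-Maruyama path on the common time grid, absorbed at 0."""
    path = np.zeros(STEPS + 1)
    path[start_idx] = y0
    for k in range(start_idx, STEPS):
        y = path[k]
        if y <= 0.0:
            break
        path[k + 1] = max(y + (A * y - B * y * y) * DT
                          + np.sqrt(C * y * DT) * rng.normal(), 0.0)
    return path

generation = [island_path(1.0, 0)]           # generation 0: a single founder island
all_islands = list(generation)
for g in range(1, 6):                        # a few generations suffice here
    next_gen = []
    for mother in generation:
        for k in range(STEPS):
            if len(all_islands) + len(next_gen) >= MAX_ISLANDS:
                break
            for _ in range(rng.poisson(mother[k] * DT / EPS)):
                next_gen.append(island_path(EPS, k))
    generation = next_gen
    all_islands.extend(next_gen)

total_mass = np.sum(all_islands, axis=0)
print("islands:", len(all_islands), "  total mass at time T:", total_mass[-1])

Most daughter islands die out almost immediately, which reflects the fact that Q puts most of its (infinite) mass on excursions of negligible size.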
by theorem 3 of , systems of interacting feller branching diffusions with logistic drift satisfy a duality relation which for the -process reads as where the notation refers to the initial configuration and refers to .this duality is established in via a generator calculation , in swart ( 2006 ) via dualities between lloyd - sudbury particle models and in by following ancestral lineages of forward and backward processes in a graphical representation .now let in .then the left - hand side converges to the laplace transform of the total mass process of the virgin island model ( without immigration ) and the right - hand side converges to the laplace transform of the mean field model .this proves the following corollary .[ c : duality ] let be the total mass process of the virgin island model without immigration starting on only one island .furthermore let be the solution of , both with coefficients and for where .then where and refer to and , respectively . together with known results on the mean field model, this corollary leads to a computable expression for the extinction probability of the virgin island model .let be as in corollary [ c : duality ] .then converges to a random variable in distribution as .if then . if condition fails to hold , then where refers to .the parameter is the unique solution of and the probability distribution is defined by on where is a normalizing constant .theorem 2 of shows convergence in distribution of to as and if holds .if fails to hold , then corollary [ c : duality ] together with convergence of implies convergence in distribution of to a variable as .the distribution of is necessarily an invariant distribution of the mean field model and is nontrivial .lemma 5.1 of shows that there is exactly one nontrivial invariant distribution for and this distribution is given by .the second main result is a comparison of systems of locally regulated diffusions with the virgin island model . for its formulation ,we introduce three stochastic orders which are inspired by cox et al ( 1996 ) .let and be two stochastic processes with state space .we say that is dominated by with respect to a set of test functions on path space if }}}\leq { { \ensuremath{\mathbbm{e}}}}{{{\bigl [ f({{\ensuremath{\tilde{z}}}})\bigr ] } } } \quad{\ensuremath{\;\;\forall\;}}f\in{{\ensuremath{\mathbbm{f}}}}.\ ] ] the first order is the usual stochastic order in which is dominated by if there is a coupling of and in which is dominated by for all almost surely . assuming path continuity , an equivalent condition is as follows .denote the set of non - decreasing test functions of arguments by for a set .furthermore let be the set of non - decreasing functions which depend on finitely many time - space points if there is no space component , then we simply write . in this notation, is equivalent to , see subsection 4.b.1 in shaked and shanthikumar ( 1994 ) .we will use two more stochastic orders . in the literature , the set of non - decreasing , convex functions is often used . herean adequate set is the collection of non - decreasing functions whose second order partial derivatives are nonnegative . as we do not want to assume smoothness , we slightly weaken the latter assumption .we say for that a function is _-convex _ if note that if is smooth , then this is equivalent to . in addition note that is -convex if and only if is convex in the -th component .moreover we say that is -concave if is -convex . a functionis called directionally convex ( e.g. 
shaked and shanthikumar 1990 ) if it is -convex for all .such functions are also referred to as l - superadditive functions ( e.g. rueschendorf 1983 ) .define the set of increasing , directionally convex functions as and similarly with -convex replaced by -concave. furthermore define and as in with replaced by and , respectively .now we have introduced three stochastic orders , and .note that contains all mixed monomials and that contains all functions with .[ thm : comparison ] assume [ a : a1 ] , [ a : migration ] and [ a : hutzenthaler2009ejp ] .if is concave and if is superadditive , then we have that if is concave and is subadditive , then inequality holds with replaced by . if is subadditive and is additive , then inequality holds with replaced by . the proof is deferred to section [ sec : comparison_with_the_vim ] .comparisons of diffusions at fixed time points are well - known .theorem [ thm : comparison ] provides an inequality for the whole process .the techniques we develope for this in subsection [ ssec : preservation of convexity ] might allow to generalize the comparison results of bergenthum and rueschendorf ( 2007 ) on semimartingales . the assumption of being subadditive is natural in the following sense .let us assume that letting two -island processes with initial masses and , respectively , evolve independently is better in expectation for the total mass than letting one -island process with initial mass evolve .this assumption implies that for all and thus subadditivity of the infinitesimal mean .if is not additive , then we need the stronger assumption of being concave for lemma [ l : preservation ] . from theorem [ thm : comparison ] and a global extinction result for the virgin island model, we obtain a condition for global extinction of systems of locally regulated diffusions . according to theorem 2 of , the total mass of the virgin island model converges in distribution to zero as if and only if condition below is satisfied .together with theorem 2 , this proves the following corollary .[ cor : global_extinction ] assume [ a : a1 ] , [ a : migration ] and [ a : hutzenthaler2009ejp ] .suppose that is subadditive and is additive or that is concave and is either superadditive or subadditive .then implies global extinction of the solution of , that is , as whenever almost surely . in case of logistic branching ( , ) , condition simplifies to condition .for a second example , let be a stepping stone model with selection and mutation , e.g. ] .therefore lvy scharacterization ( e.g. theorem iv.33.1 in ) implies that defines a standard brownian motion .moreover it follows from summing over that solves .pathwise uniqueness of has been established in proposition 2.1 of . for the following lemmas ,let be a solution of and let be the solution of . define the stopping times through for every and every .[ l : first_moment_estimate_x_z ] assume [ a : a1_n ] .then , for every , there exists a constant such that } } } \leq c_t{{{\bigl(2{{\ensuremath{\theta}}}+\sum_{i=1}^n x_i^n\bigr)}}}\ ] ] for every initial configuration and every . fix and . 
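The i-convexity and directional convexity conditions behind these test-function classes can be probed numerically. The snippet below checks, at randomly sampled points of the nonnegative orthant, the discrete mixed second-difference inequality f(x + d e_i + e e_j) - f(x + d e_i) - f(x + e e_j) + f(x) >= 0, which is one common smoothness-free way to phrase ij-convexity; it is a heuristic test that can refute but never certify the property:

# Rough numerical probe (not from the paper) of ij-convexity via discrete mixed
# second differences; only random points are checked, so a "True" answer is only
# suggestive, while a "False" answer is a genuine counterexample.
import itertools
import numpy as np

def looks_directionally_convex(f, dim, trials=2000, scale=1.0, seed=2):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.uniform(0.0, scale, size=dim)
        d, e = rng.uniform(0.0, scale, size=2)
        for i, j in itertools.product(range(dim), repeat=2):
            ei, ej = np.eye(dim)[i], np.eye(dim)[j]
            diff = f(x + d * ei + e * ej) - f(x + d * ei) - f(x + e * ej) + f(x)
            if diff < -1e-9:
                return False
    return True

# mixed monomials such as x1*x2 are directionally convex; -(x1+x2)**2 is not
print(looks_directionally_convex(lambda x: x[0] * x[1], dim=2))          # True
print(looks_directionally_convex(lambda x: -(x[0] + x[1]) ** 2, dim=2))  # False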
by assumption [ a : a1_n ]we have that for all .according to lemma [ l : decomposition ] , defined through is a solution of .sum over , stop at time and take expectations to obtain that } } } \leq\sum_{i=1}^n x^n(i ) + \int_0^t l { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n { { \ensuremath{\tilde{x}}}}_{s\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^n(i)\biggr]}}}+2{{\ensuremath{\theta}}}\,ds\ ] ] for every and .note that the right - hand side is finite .now gronwall s inequality implies that } } } \leq { { { \bigl(\sum_{i=1}^n x^n(i)+2{{\ensuremath{\theta}}}t\bigr ) } } } e^{l t}\ ] ] for all and . letting fatou s lemma yields that } } } & = \sup_{t\leq t}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n \liminf_{k\to\infty } { { \ensuremath{\tilde{x}}}}_{t\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^n(i)\biggr]}}}\\ & \leq \sup_{t\leq t}\liminf_{k\to\infty}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n { { \ensuremath{\tilde{x}}}}_{t\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^n(i)\biggr ] } } } \leq { { { \bigl(\sum_{i=1}^n x^n(i)+2{{\ensuremath{\theta}}}t\bigr ) } } } e^{l t}. \end{split}\ ] ]this proves inequality with .the inequality for the loop - free -island process follows similarly .[ l : essentially_finitely_many_generations ] assume [ a : initial ] and [ a : a1_n ] .then we have that for every .we prove inequality for the solution of .the estimate for the solution of is analogous .recall for all .apply it s formula to , take expectations , estimate for all and take suprema to obtain that for all and all .note that the right - hand side is finite due to lemma [ l : first_moment_estimate_x_z ] and assumption [ a : initial ] .summing over and applying gronwall s inequality implies that for every .letting proves .[ l : second_moment_estimate_x_z ] assume [ a : a1_n ] and [ a : second_moments ] .then we have that }}}<\infty\ ] ] is finite for every .the analogous assertion holds for the loop - free -process .recall from and fix for the moment . according to lemma [l : decomposition ] , is an -island model .recall from assumption [ a : a1_n ] that for all .thus lemma 3.3 of implies that the -process is dominated by the -process .using it s formula and for all , we get that } } } = { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\bigl(\sum_{i=1}^n { { { \ensuremath{\bar{x}}}}_{0}^{n}(i ) } \bigr)^2 \biggr ] } } } } \\ & + \int_0^t 2l_\mu { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\bigl(\sum_{i=1}^n { { { \ensuremath{\bar{x}}}}_{s\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^{n}(i ) } \bigr)^2 \biggr ] } } } + 4{{\ensuremath{\theta}}}{{\ensuremath{\mathbbm{e}}}}{{{\bigl[\sum_{i=1}^n { { \ensuremath{\bar{x}}}}_{s\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^n(i)\bigr ] } } } + { { \ensuremath{\mathbbm{e}}}}{{{\bigl[\sum_{i=1}^n \sigma_n^2{{{\bigl({{\ensuremath{\bar{x}}}}_{s\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^n(i)\bigr)}}}\bigr]}}}\,ds\\ & \leq { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\bigl(\sum_{i=1}^n { { { \ensuremath{\bar{x}}}}_{0}^{n}(i ) } \bigr)^2 \biggr ] } } } + \int_0^t ( 2 l_\mu+l_\sigma ) { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\bigl(\sum_{i=1}^n { { { \ensuremath{\bar{x}}}}_{s\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^{n}(i ) } \bigr)^2 \biggr ] } } } + ( 4{{\ensuremath{\theta}}}+l_\sigma)c_t\,ds \end{split}\ ] ] for every , every and some constant .we used lemma [ l : first_moment_estimate_x_z ] for the last inequality .note that the right - hand side is finite .applying doob s submartingale inequality ( e.g. 
theorem ii.70.2 in ) to the submartingale , using fatou s lemma and applying gronwall s inequality to , we conclude that } } } & \leq 4\sup_{t\leq t } { { \ensuremath{\mathbbm{e}}}}{{{\bigl [ \bigl(\sum_{i=1}^n{{\ensuremath{\bar{x}}}}_t^{n}(i)\bigr)^2\bigr]}}}\\ & = 4\sup_{t\leq t } { { \ensuremath{\mathbbm{e}}}}{{{\bigl [ \bigl(\sum_{i=1}^n { { \ensuremath{{\displaystyle \liminf_{k { \rightarrow}\infty}}}}}{{\ensuremath{\bar{x}}}}_{t\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^{n}(i)\bigr)^2\bigr]}}}\\ & \leq 4\sup_{t\leq t } { { \ensuremath{{\displaystyle \liminf_{k { \rightarrow}\infty}}}}}\ , { { \ensuremath{\mathbbm{e}}}}{{{\bigl [ \bigl(\sum_{i=1}^n { { \ensuremath{\bar{x}}}}_{t\wedge{{\ensuremath{\tilde{\tau}}}}_k^n}^{n}(i)\bigr)^2\bigr]}}}\\ & \leq 4{{{\bigl[{{{\bigl(4{{\ensuremath{\theta}}}+l_\sigma\bigr)}}}c_t t+{{\ensuremath{\mathbbm{e}}}}\bigl(\sum_{i=1}^n x_0^n(i)\bigr)^2\bigr ] } } } e^{(2l_\mu+l_\sigma)t}. \end{split}\ ] ] the right - hand side is bounded uniformly in due to assumption [ a : second_moments ] .the proof in the case of the loop - free -island model is analogous .recall from .next we show that stopping at the time has no impact in the limit .[ l : tau_theta ] assume [ a : a1_n ] and [ a : second_moments ] .then any solution of satisfies that for every .the analogous assertion holds for the loop - free -process . rewriting , the assertion follows from the markov inequality and from the second moment estimate of lemma [ l : second_moment_estimate_x_z ] .next we prove some preliminary results for the solution of .[ l : second_moment_estimate_y ] assume [ a : a1_n ] .let be locally square lebesgue integrable function .then we have that } } } \leq c_t{{{\bigl[x+x^2+\int_s^t\ ! \frac{\zeta_n(r)}{n} + \bigl(\frac{\zeta_n(r)}{n}\bigr)^2\,dr\bigr]}}}\ ] ] for all , , and some constant which does not depend on , or on .the proof is similar to the proof of lemma [ l : second_moment_estimate_x_z ] , so we omit it . [l : y_2_supsum ] assume [ a : a1_n ] .let be locally square lebesgue integrable function .furthermore let , , be independent solutions of for every .then we have that } } } \leq c_t{{{\bigl[\int_s^t\ !\zeta_n(r ) + \frac{\bigl(\zeta_n(r)\bigr)^2}{n}\,dr\bigr]}}}\ ] ] for all , and some constant which does not depend on or .the proof is similar to the proof of lemma [ l : second_moment_estimate_x_z ] , so we omit it . [l : first_moment_estimate ] assume [ a : a1_n ] and fix .let and be two solutions of with respect to the same brownian motion such that and . if \to n{\ensuremath{{\displaystyle \cdot}}}i ] , .letting in and applying the dominated convergence theorem shows that }}}_0^{{\ensuremath{\delta}}}\end{split}\ ] ] which is equal to one .we recall the following lemma from , see lemma 9.8 there .[ l : finite_excursion_area ] assume [ a : a1 ] and [ a : hutzenthaler2009ejp ] . let be the excursion measure defined through . then the last result of this subsection is a variation of the second moment estimate of lemma [ l : second_moment_estimate_x_z ] .define the stopping times through for every and every .[ l : second_moment_estimate_x ] assume [ a : a1_n ] , [ a : hutzenthaler2009ejp ] and [ a : second_moments ] .then we have that }}}<\infty\ ] ] is finite for every and .lemma 3.3 in shows that , on the event , is stochastically bounded above by . 
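The Gronwall-type moment bounds above admit a quick Monte Carlo sanity check in a toy mean-field setting. The sketch below simulates N islands with uniform migration, logistic drift (whose upward Lipschitz constant is taken to be a), linear variance and a small constant total immigration theta, and compares the empirical mean of the total mass with the bound (sum_i x_i + 2*theta*t) e^{L t} appearing in the proof of the first-moment lemma; the model and all parameters are illustrative choices of mine:

# Monte Carlo sanity check (toy setting, not a proof) of the first-moment bound
# E[sum_i X_t(i)] <= (sum_i x_i + 2*theta*t) * exp(L*t), with L = a for the
# logistic drift a*x - b*x**2 assumed here.
import numpy as np

rng = np.random.default_rng(3)
N, DT, T = 50, 0.01, 5.0
A, B, C, THETA = 1.0, 1.0, 1.0, 0.2
steps = int(T / DT)

def total_mass_path():
    x = np.full(N, 0.5)
    totals = np.empty(steps + 1)
    totals[0] = x.sum()
    for k in range(steps):
        migration = x.mean() - x                    # uniform migration at rate 1
        drift = migration + A * x - B * x * x + THETA / N
        x = np.maximum(x + drift * DT + np.sqrt(C * np.maximum(x, 0) * DT)
                       * rng.normal(size=N), 0.0)
        totals[k + 1] = x.sum()
    return totals

runs = np.array([total_mass_path() for _ in range(200)])
t = np.arange(steps + 1) * DT
bound = (N * 0.5 + 2 * THETA * t) * np.exp(A * t)
# the ratio equals 1 at t = 0 and should stay below 1 afterwards
print("max of E[total mass] / bound over the grid:", (runs.mean(axis=0) / bound).max())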
by assumption [ a : a1_n ]we have that for all .together with the second moment estimate of lemma [ l : second_moment_estimate_y ] , this implies that }}}\\ & \leq \sum_{i=1}^n { { \ensuremath{\mathbbm{e}}}}{{{\biggl[{{\ensuremath{\mathbbm{e}}}}^{x_0^n(i)}{{{\biggl[\biggl(\sup_{t\leq t } y_{t,0}^{n , k+2\theta}\biggr)^2\biggr]}}}\biggr]}}}\\ & \leq c_t\sum_{i=1}^n { { { \bigl({{\ensuremath{\mathbbm{e}}}}{{{\bigl[x_0^n(i)\bigr]}}}+{{\ensuremath{\mathbbm{e}}}}{{{\bigl[{{{\bigl(x_0^n(i)\bigr)}}}^2\bigr ] } } } + t\frac{k+2\theta}{n}+t\frac{(k+2\theta)^2}{n^2}\bigr)}}}\\ & \leq c_t{{{\bigl(\sup_{m\in{{\ensuremath{\mathbbm{n}}}}}{{\ensuremath{\mathbbm{e}}}}{{\ensuremath{\bigl|x_0^m\bigr| } } } + \sup_{m\in{{\ensuremath{\mathbbm{n}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\bigl[{{\ensuremath{\bigl|x_0^m\bigr|}}}^2\bigr]}}}+t ( k+2\theta)+t(k+2\theta)^2\bigr ) } } } \end{split}\ ] ] for every and some constant .the right - hand side is finite due to assumptions [ a : hutzenthaler2009ejp ] and [ a : second_moments ] . in this subsection , we prove which is the central step in the proof of theorem [ thm : convergence ] . our proof is based on reversing time in the stationary process . for the time reversal , we consider the following stationary situation .excursions from zero of the process start at times given by the points of an homogeneous poisson point process on with rate .this process of immigrating excursions is invariant for the dynamics of restricted to non - extinction . nowthe time reversal of an excursion is again governed by the excursion measure , see lemma [ l : time_reversal ] . asa consequence reversing time in the process of immigrating excursions does not change the distribution .let us retell the story more formally .consider a poisson point process on with intensity measure .then is the process of immigrating excursions .note that at a fixed time , is a poisson point process on with intensity measure where is the speed measure defined in .here we used that for ] .if the speed measure is replaced by the left - hand side of , then can be extended to allow for extinction .to show this we first formulate the markov property of the excursion measure . definition of as rescaled law of together with the markov property of implies that for all satisfying and every .[ l : time_reversal ] assume [ a : a1 ] and [ a : hutzenthaler2009ejp ] .then for all and all measurable functions .it suffices ( see e.g. theorem 14.12 in ) to establish for where and . if , then both sides of are infinite . for the rest of the proof , we assume , that is , for at least one .we may even assume for at least one .otherwise approximate monotonically from below with test functions which have compact support .in addition , we may without loss of generality assume .otherwise use a time translation .if vanishes on , then is essentially . to see this ,consider }\bigr ) } } } { { \ensuremath{\mathbbm{1}}}}_{y_{t_n}>0}\bigr)}}}m({d}x ) .\end{split}\ ] ] applying with , reversing the calculation in and substituting shows that we prove with replaced by by induction on .the base case follows from a time translation .the induction step follows directly from if . if , then for the second step we used linearity , applied the induction hypothesis and equation and again used linearity . 
adding and proves the induction step in case of .the remaining case follows from a similar calculation as in .this completes the proof of lemma [ l : time_reversal ] .[ l : speed_excursion_measure ] assume [ a : a1 ] and [ a : hutzenthaler2009ejp ] . then }\bigr)}}}m(dx )= \int \int_{-\infty}^t f{{{\bigl({{{(\eta_{t - t - s})}}}_{t\in[0,t]}\bigr)}}}{d}s\ , q({d}\eta)\ ] ] for all measurable functions \bigr)}}}\to[0,\infty) ] which depend on finitely many coordinates and which are globally lipschitz continuous in every coordinate for every .due to the lipschitz continuity and boundedness of , there exists a constant such that ,i\bigr)}}}.\ ] ] note that the set is closed under multiplication and separates points .thus the linear span of is an algebra which separates points .according to theorem 3.4.5 in the linear span of is distribution determining and so is .[ l : vanishing_immigration_general ] assume [ a : a1_n ] and [ a : hutzenthaler2009ejp ] .suppose that \to[0,\infty) ] are square lebesgue integrable and that in as .let satisfy .then for all and every function .let be such that satisfies .moreover let be bounded by .lemma [ l : first_moment_estimate ] implies that where is the finite constant from lemma [ l : first_moment_estimate ] .therefore , it suffices to prove with replaced by .we begin with the case of being a simple function . w.l.o.g .we consider where and as we may let depend trivially on further time points .the proof of is by induction on .the case has been settled in lemma [ l : vanishing_immigration ] .for the induction step we split up the left - hand side of into two terms according to whether the process at time is essentially zero or not . in order to formalize the notion `` essentially zero '' ,let be arbitrary and choose functions \bigr)}}} ] are square integrable and that as in the -norm .let , , be independent solutions of satisfying for every .then we have that where is a poisson point process on \times u ] satisfying the lipschitz condition ,{{\ensuremath{\mathbbm{r}}}}\bigr)}}}\ ] ] for some , some and some .in addition let be a continuous function satisfying for all .then we have that } } } = { { \ensuremath{\mathbbm{e}}}}{{{\biggl[{{\ensuremath{\bar{f}}}}{{{\biggl(\bigl(\int { { \ensuremath{\bar{f}}}}{{{\bigl(\eta_{t - u}\bigr)}}}\pi(du , d\eta)\bigr)_{s\leq t\leq t}\biggr)}}}\biggr]}}}. \end{split}\ ] ] let be a countable dense subset of .tightness of follows from tightness of this type of argument has been established in theorem 2.1 of roelly - coppoletta ( 1986 ) for the weak topology and . following the proof hereof, one can show the analogous argument for the vague topology and .fix and define for all and . note that is globally lipschitz continuous . for and fixed ,global lipschitz continuity of implies that } } } \leq \frac{l_f}{k } \sup_{m\in{{\ensuremath{\mathbbm{n}}}}}m{{\ensuremath{\mathbbm{e}}}}^0{{{\biggl[y_{t , s}^{m,{\ensuremath{\hat{\zeta}}}}(1)\biggr]}}}\ ] ] for some constant and for all .the right - hand side is finite according to lemma [ l : first_moment_estimate ] and converges to zero as .this proves tightness of , , for every . for the second part of the aldous criterion , fix and let , , be stopping times which are uniformly bounded by .in addition define ,n\in{{\ensuremath{\mathbbm{n}}}}.\ ] ] the functions , and are uniformly globally lipschitz continuous on the support of according to assumption [ a : a1_n ] .therefore there exists a constant such that and for all and all . 
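The Poisson superpositions of excursions appearing in the preceding lemmas, with intensity measures of the form zeta(u) du x Q(d eta), can also be mimicked on a finite time window. In the sketch below the immigration intensity is taken constant, zeta = theta, and Q is again approximated by (1/eps) times the law of the diffusion started at eps, so immigrants arrive at Poisson rate theta/eps and each starts an island of mass eps; the printed quantity is the total mass contributed by all immigrated excursions, and every numerical choice is illustrative:

# Illustrative sketch (not from the paper): total mass of excursions immigrating
# at the points of a Poisson process with intensity theta*dt x Q, with Q
# approximated by (1/eps)*Law(diffusion started at eps).
import numpy as np

rng = np.random.default_rng(4)
A, B, C = 1.0, 1.0, 1.0
THETA, EPS, DT, T = 0.5, 0.02, 0.01, 30.0
steps = int(T / DT)

def excursion(start_idx):
    path = np.zeros(steps + 1)
    path[start_idx] = EPS
    for k in range(start_idx, steps):
        y = path[k]
        if y <= 0.0:
            break
        path[k + 1] = max(y + (A * y - B * y * y) * DT
                          + np.sqrt(C * y * DT) * rng.normal(), 0.0)
    return path

total = np.zeros(steps + 1)
for k in range(steps):                       # arrivals per step: Poisson(theta*dt/eps)
    for _ in range(rng.poisson(THETA / EPS * DT)):
        total += excursion(k)

print("immigration-driven total mass at a few times:",
      total[[steps // 4, steps // 2, steps]])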
for fixed and ] be a continuous function which satisfies for every ]. then .thus converges almost surely as for every .the second moment estimate of lemma [ l : y_2_supsum ] together with for all implies that the argument of in is almost surely uniformly bounded . consequently taking minimum with no effect for sufficiently large and with converges almost surely , too .uniform integrability of with follows from the -estimate }}}\\ & \leq 2{{\ensuremath{\bar{f}}}}{{{\bigl({{\ensuremath{\underline{0}}}}\bigr)}}}+ 2l_{{\ensuremath{\bar{f}}}}^2 l_{{\ensuremath{\bar{f}}}}^2{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\biggl(\sum_{j=1}^n \sum_{i=1}^n y_{t_j , s}^{n,{\ensuremath{\hat{\zeta}}}}(i)\biggr)^2\biggr]}}}\\ & \leq 2{{\ensuremath{\bar{f}}}}{{{\bigl({{\ensuremath{\underline{0}}}}\bigr)}}}+ 2l_{{\ensuremath{\bar{f}}}}^2 l_{{\ensuremath{\bar{f}}}}^2 n^2{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sup_{s\leq t\leq t } \big(\sum_{i=1}^n y_{t , s}^{n,{\ensuremath{\hat{\zeta}}}}(i)\big)^2\biggr ] } } } \end{split}\ ] ] and lemma [ l : y_2_supsum ] .thus with converges also in expectation .the lipschitz condition implies that } } } -{{\ensuremath{\mathbbm{e}}}}{{{\biggl[{{\ensuremath{\bar{f}}}}{{{\biggl(\bigl(\sum_{i=1}^n { { { \bigl({{\ensuremath{\bar{f}}}}{\ensuremath{{\displaystyle \cdot}}}h_k\bigr ) } } } { { { \bigl(y_{t , s}^{n,{\ensuremath{\hat{\zeta}}}}(i)\bigr)}}}\bigr)_{s\leq t\leq t}\biggr)}}}\biggr ] } } } \biggr|\\ & \leq\sum_{j=1}^n l_{{\ensuremath{\bar{f}}}}l_{{\ensuremath{\bar{f}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n y_{t_j , s}^{n,{\ensuremath{\hat{\zeta}}}}(i){{{\bigl(1-h_k{{{\bigl(y_{t_j , s}^{n,{\ensuremath{\hat{\zeta}}}}(i)\bigr)}}}\bigr)}}}\biggr]}}}\\ & \leq\sum_{j=1}^n l_{{\ensuremath{\bar{f}}}}l_{{\ensuremath{\bar{f}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n y_{t_j , s}^{n,{\ensuremath{\hat{\zeta}}}}(i ) { { { \bigl({{\ensuremath{\mathbbm{1}}}}_{\sup_{s\leq t\leq t}{{\ensuremath{|y_{t , s}^{n,{\ensuremath{\hat{\zeta}}}}|}}}\geq k } + { { \ensuremath{\mathbbm{1}}}}_{y_{t_j , s}^{n,{\ensuremath{\hat{\zeta}}}}(i)\leq\frac{1}{k } } \bigr ) } } } \biggr]}}}\\ & \leq\frac{n l_{{\ensuremath{\bar{f}}}}l_{{\ensuremath{\bar{f}}}}}{k}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sup_{s\leq t\leq t}{{\ensuremath{\bigl|y_{t , s}^{n,{\ensuremath{\hat{\zeta}}}}\bigr|}}}^2\biggr ] } } } + nl_{{\ensuremath{\bar{f}}}}l_{{\ensuremath{\bar{f}}}}\sup_{s\leq t\leq t}{{\ensuremath{\mathbbm{e}}}}{{{\bigl[\sum_{i=1}^n y_{t , s}^{n,{\ensuremath{\hat{\zeta}}}}(i)\wedge\frac{1}{k}\bigr ] } } } \end{split}\ ] ] for all . 
letting and then , the right - hand side converges to zero .thus } } } } \\ & = { { \ensuremath{{\displaystyle \lim_{k { \rightarrow}\infty}}}}}{{\ensuremath{{\displaystyle \lim_{n { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[{{\ensuremath{\bar{f}}}}{{{\biggl(\bigl(\sum_{i=1}^n { { { \bigl({{\ensuremath{\bar{f}}}}{\ensuremath{{\displaystyle \cdot}}}h_k\bigr ) } } } { { { \bigl(y_{t , s}(i)\bigr)}}}\bigr)_{s\leq t\leq t}\biggr)}}}\biggr]}}}\\ & = { { \ensuremath{{\displaystyle \lim_{k { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[{{\ensuremath{\bar{f}}}}{{{\biggl(\bigl ( \int { { { \bigl({{\ensuremath{\bar{f}}}}{\ensuremath{{\displaystyle \cdot}}}h_k\bigr)}}}{{{\bigl(\eta_{t - u}\bigr)}}}\pi(du , d\eta)\bigr)_{s\leq t\leq t}\biggr)}}}\biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl[{{\ensuremath{\bar{f}}}}{{{\biggl(\bigl(\int { { \ensuremath{\bar{f}}}}{{{\bigl(\eta_{t - u}\bigr)}}}\pi(du , d\eta)\bigr)_{s\leq t\leq t}\biggr)}}}\biggr]}}}. \end{split}\ ] ] the last equality follows from the dominated convergence theorem together with assumption [ a : hutzenthaler2009ejp ] .this proves .recall the loop - free -island process from .the following lemma shows that the loop - free -island process converges to the virgin island model .[ l : convergence_of_the_loop_free_process ] assume [ a : a1_n ] , [ a : hutzenthaler2009ejp ] , [ a : initial ] and [ a : second_moments ] .then we have that for every . fix .let be a countable dense subset of .tightness of the left - hand side of in follows from tightness of this type of argument has been established in theorem 2.1 of roelly - coppoletta ( 1986 ) for the weak topology and . following the proof hereof, one can show the analogous argument for the vague topology and .fix and define for all and . for fixed ,global lipschitz continuity of implies that } } } \leq \frac{l_f}{k } \sup_{m\in{{\ensuremath{\mathbbm{n}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^m\sum_{k=0}^\infty z_{t}^{m , k}(i)\biggr]}}}\ ] ] for some constant and for all .the right - hand side is finite according to lemma [ l : first_moment_estimate_x_z ] .this proves tightness of for every fixed time point . forthe second part of the aldous criterion , let , , be stopping times which are uniformly bounded by .in addition define and for all .assumption [ a : a1_n ] implies that the functions , and are uniformly globally lipschitz continuous on the support of .moreover is bounded by uniformly in .therefore there exists a constant be such that for all and all and such that for all and . for fixed and ] .note that satisfies the lipschitz condition for some constant .let , , and , , be independent solutions of with , and for every and .in addition let , , be independent solutions of with .note that is a solution of with .the first moment estimate of lemma [ l : first_moment_estimate ] implies that } } } - { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i=1}^n f{{{\bigl({{{\bigl(y_{t,0}^{n,\zeta}(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr]}}}\bigr|}}}\\ & \leq l_f c_{t_n}\sum_{j=1}^n { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n{{\ensuremath{\bigl|x_0^n(\pi_i^n)-x_0(i)\bigr|}}}\biggr ] } } } \end{split}\ ] ] for all where is the finite constant of lemma [ l : first_moment_estimate ] . letting right - hand side converges to zero according to assumption [ a : initial ] .the process in turn is close to except for islands with a significant amount of mass at time zero . 
formalizing this we use lemma [ l : first_moment_estimate ] to obtain that } } } - { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i=1}^n f{{{\bigl({{{\bigl({{\ensuremath{\tilde{y}}}}_{t,0}^{n,\zeta}(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr]}}}\bigr|}}}\\ & \leq { { \ensuremath{{\displaystyle \lim_{k { \rightarrow}\infty}}}}}l_f n c_{t_n}\sum_{i = k+1}^\infty { { \ensuremath{\mathbbm{e}}}}{{{\bigl[x_0(i)\bigr ] } } } + { { \ensuremath{{\displaystyle \lim_{k { \rightarrow}\infty}}}}}{{\ensuremath{{\displaystyle \lim_{n { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[1-\exp{{{\biggl(-\sum_{i=1}^k f{{{\bigl({{{\bigl({{\ensuremath{\tilde{y}}}}_{t,0}^{n,\zeta}(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr]}}}. \end{split}\ ] ]the first summand on the right - hand side is zero according to assumption [ a : initial ] .note that and as by assumption [ a : a1_n ] .thus converges in distribution to the zero function as for every fixed .consequently the second summand on the right - hand side is zero .moreover converges in distribution to as for every fixed .these observations imply that }}}\\ & = { { \ensuremath{{\displaystyle \lim_{k { \rightarrow}\infty}}}}}{{\ensuremath{{\displaystyle \lim_{n { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i=1}^k f{{{\bigl({{{\bigl(y_{t,0}^{n,\zeta}(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr ] } } } { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i = k+1}^n f{{{\bigl({{{\bigl(y_{t,0}^{n,\zeta}(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr]}}}\\ & = { { \ensuremath{{\displaystyle \lim_{k { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i=1}^k f{{{\bigl({{{\bigl(y_t(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr ] } } } { { \ensuremath{{\displaystyle \lim_{n { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i=1}^n f{{{\bigl({{{\bigl({{\ensuremath{\tilde{y}}}}_{t,0}^{n,\zeta}(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\sum_{i=1}^\infty f{{{\bigl({{{\bigl(y_t(i)\bigr)}}}_{t\geq0}\bigr)}}}\biggr)}}}\biggr ] } } } { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\biggl(-\!\!\sum_{(s,\eta)\in\pi^{\emptyset } } f{{{\bigl((\eta_{t - s})_{t\geq0}\bigr ) } } } \biggr ) } } } \biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\bigl(-\!\!\sum_{(\iota , s,\eta)\in{{\ensuremath{\mathcal v}}}^{(0 ) } } f{{{\bigl((\eta_{t - s})_{t\geq0}\bigr ) } } } \bigr ) } } } \biggr]}}}. \end{split}\ ] ] the last but one step follows from lemma [ l : vanishing_immigration_weak_process ] with and .this proves in the base case . for the induction step note that a version of conditioned on , , is given by the one - dimensional diffusion with vanishing immigration .thus we may realize by choosing a suitable version of , , and by independently sampling a version of whose driving brownian motion is independent of .tightness together with the induction hypothesis implies that for every .thanks to the skorokhod representation of weak convergence ( e.g. theorem ii.86.1 in ) , we may assume that the convergence in holds almost surely . as a consequencewe obtain that holds almost surely . using arguments from the proof of onecan deduce from this that holds almost surely where the total mass of the -th generation of the virgin island model is defined as for every . 
together with continuity of , implies that that this functional is not bounded is remedied by a truncation argument .now the main step of the proof is lemma [ l : vanishing_immigration_weak_process ] with and for all .lemma [ l : vanishing_immigration_weak_process ] implies that }}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\bigl[\exp{{{\bigl(-\sum_{i=1}^n f{{{\bigl({{{\bigl(y_{t,0}^{n,\zeta } \bigr)}}}_{t\leq t } \bigr ) } } } \bigr ) } } } \big|{{{\bigl(z_\cdot^{m , m}\bigr)}}}_{m\in{{\ensuremath{\mathbbm{n } } } } } \bigr]}}}\\ & { \longrightarrow}{{\ensuremath{\mathbbm{e}}}}{{{\bigl[\exp{{{\bigl(-\int f{{{\bigl((\eta_{t - s})_{t\leq t}\bigr)}}}\pi^{(m+1)}(dx , d\eta)\bigr)}}}\big| { { \ensuremath{\mathcal v}}}^{(m)}\bigr]}}}{\ensuremath{\qquad\text{as } n\to\infty}}\end{split}\ ] ] almost surely . here conditioned on is a poisson point process on with intensity measure } } } = v_s^{(m)}\,ds\otimes q(d\eta)\\ & = \!\!\!\sum_{(\iota , r,\psi)\in{{\ensuremath{\mathcal v}}}^{(m)}}\!\!\!\psi_{s - r}ds\otimes q(d\eta ) = \!\!\!\sum_{(\iota , r,\psi)\in{{\ensuremath{\mathcal v}}}^{(m)}}\!\!\!{{\ensuremath{\mathbbm{e}}}}\pi^{(\iota , r,\psi)}(ds , d\eta ) .\end{split}\ ] ] due to this decomposition of , we may realize conditioned on as the independent superposition of .in other words , is equal in distribution to the - generation of the virgin island model. therefore we get that } } } } \\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl\{{{\ensuremath{{\displaystyle \lim_{n { \rightarrow}\infty}}}}}\exp{{{\bigl(-\sum_{k=0}^{m}\sum_{i=1}^n f{{{\bigl(z_{\cdot}^{n , k}(i)\bigr ) } } } \bigr ) } } } { { \ensuremath{\mathbbm{e}}}}{{{\bigl[\exp{{{\bigl(-\sum_{i=1}^n f{{{\bigl(z_{\cdot}^{n , m+1}(i)\bigr ) } } } \bigr)}}}\big|{{{\bigl(z_\cdot^{m , m}\bigr)}}}_{m\in{{\ensuremath{\mathbbm{n } } } } } \bigr ] } } } \biggr\}}}}\\ & = { { \ensuremath{\mathbbm{e}}}}\biggl\{\exp{{{\biggl(-\sum_{k=0}^m\sum_{(\iota , s,\eta)\in{{\ensuremath{\mathcal v}}}^{(k ) } } \!\!\ ! f{{{\bigl((\eta_{t - s})_{t\leq t}\bigr ) } } } \biggr ) } } } { { \ensuremath{\mathbbm{e}}}}{{{\bigl[\exp{{{\bigl(-\!\!\!\sum_{(\iota , s,\eta)\in{{\ensuremath{\mathcal v}}}^{(m+1)}}\!\!\ ! f{{{\bigl((\eta_{t - s})_{t\leq t}\bigr)}}}\bigr)}}}\big| { { \ensuremath{\mathcal v}}}^{(m ) } \bigr]}}}\biggr\}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\exp{{{\bigl(-\sum_{k=0}^{m+1}\sum_{(\iota , s,\eta)\in{{\ensuremath{\mathcal v}}}^{(k ) } } f{{{\bigl((\eta_{t - s})_{t\leq t}\bigr ) } } } \bigr ) } } } \biggr ] } } } \end{split}\ ] ] which proves and completes the proof of lemma [ l : convergence_of_the_loop_free_process ] . in this subsection , we show that the -process with migration levels and the loop - free -process are identical in the limit .our proof formalizes the following intuition .the individuals of a certain migration level are concentrated on essentially finitely many islands . that these finitely many islands are populated by migrants of a different migration level has a probability of order . as a consequence ,all individuals on one fixed island have the same migration level in the limit .this intuition is subject of lemma [ l : one_generation_per_island ] .first we show that a generation can not be dispersed uniformly over all islands . 
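The "one migration level per island" intuition can be probed with a toy simulation. The sketch below is my reading of the migration-level bookkeeping rather than the paper's construction: level-k mass on an island is fed only by level-(k-1) mass emigrating from the islands, drift and variance are taken linear so that they split additively across levels, and the time-integrated cross-level overlap on individual islands is reported; starting from a single occupied island, this overlap should shrink as the number of islands N grows:

# Toy illustration (my own bookkeeping, not the paper's code) of migration levels:
# level-k mass is fed only by level-(k-1) mass; the cross-level overlap on a
# single island should become negligible as N grows.
import numpy as np

def mean_overlap(N, K=4, a=0.2, c=1.0, dt=0.01, T=5.0, runs=20):
    rng = np.random.default_rng(6)
    acc = 0.0
    for _ in range(runs):
        x = np.zeros((K, N))
        x[0, 0] = 1.0                            # one founder island, level 0
        overlap = 0.0
        for _ in range(int(T / dt)):
            inflow = np.zeros_like(x)
            inflow[1:] = x[:-1].mean(axis=1, keepdims=True)  # uniform migration k-1 -> k
            x = np.maximum(x + (inflow - x + a * x) * dt
                           + np.sqrt(c * np.maximum(x, 0) * dt)
                           * rng.normal(size=x.shape), 0.0)
            overlap += (x.sum(axis=0) ** 2 - (x ** 2).sum(axis=0)).sum() * dt
        acc += overlap / runs
    return acc

for n in (10, 100, 1000):
    print(n, mean_overlap(n))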
to obtain this interpretation from the following lemma ,assume for all and some time .then the cutting operation in has no effect for large enough .however it is clear that the total mass of all individuals with migration level does not tend to zero as .thus can not be true .[ l : concentration ] assume [ a : a1_n ] , [ a : initial ] and [ a : second_moments ]. then any solution of satisfies } } } { \xrightarrow{{{\ensuremath{\delta}}}{\rightarrow}0}}0\ ] ] for all .the assertion is also true if is replaced by .fix and ] if we first let and then .this follows from lemma [ l : one_generation_per_island ] and lemma [ l : concentration ] . after inserting into, we see that for all ] and which are uniformly globally lipschitz continuous and uniformly bounded .existence of such functions follows from the uniform local lipschitz continuity of and .recall the stopping time , , from .the process agrees with the loop - free -process on the event .furthermore the process agrees with an -process with migration levels on .therefore for every by the preceding step .lemma [ l : tau_theta ] handles the event . this completes the proof of lemma [ l : x_close_to_z ] .first we prove theorem [ thm : convergence ] under the additional assumption [ a : second_moments ] .this will be relaxed later .we begin with convergence of finite - dimensional distributions .recall from .let satisfy the lipschitz condition with lipschitz constant and let be bounded by . furthermore let the function have compact support in , let be bounded by and let be globally lipschitz continuous with lipschitz constant .recall the -process with migration levels from .we will exploit below that all individuals on one island have the same migration level in order to show that assuming we now prove convergence of finite - dimensional distributions .we may replace the -process with migration levels in by the loop - free process because of lemma [l : x_close_to_z ] and the lipschitz continuity of and .hence and lemma [ l : x_close_to_z ] imply that the last equality is the convergence of the loop - free -process to the virgin island model and has been established in lemma [ l : convergence_of_the_loop_free_process ] .next we prove . according to lemma [ l : decomposition ]if we ignore the migration levels in the -process with migration levels , then we obtain a version of the -process , that is , for proving we observe that for every sequence and every .thus we get that } } } + c_f\sum_{i=1}^n\frac{1}{{{\ensuremath{\delta}}}^2 } { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{k=0}^\infty x_{t}^{n , k}(i ) \sum_{l\neq k } x_{t}^{n , l}(i)\biggr]}}}\\ & = : c(n,{{\ensuremath{\delta}}},t ) \end{split}\ ] ] for all , and .the second summand on the right - hand side converges to zero as according to lemma [ l : one_generation_per_island ] .the first summand on the right - hand side converges to zero as uniformly in according to lemma [ l : concentration ] . 
usingwe obtain that }}}\\ & \qquad+\sum_{j=1}^n c(n,{{\ensuremath{\delta}}},t_j ) + \sum_{j=1}^n{{\ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n\sum_{k=0}^\infty{{\ensuremath{\mathbbm{1}}}}_{x_{t_j}^{n , k}(i)<{{\ensuremath{\delta } } } } \bigl| f{{{\bigl(x_{t_j}^{n , k}(i)\bigr)}}}\bigr|\biggr]}}}\\ & \leq \frac{l_f}{{{\ensuremath{\delta}}}}\sum_{j=1}^n { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n\sum_{k=0}^\infty x_{t_j}^{n , k}(i ) \sum_{m\neq k}x_{t_j}^{n , m}(i)\biggr]}}}\\ & \qquad+\sum_{j=1}^n c(n,{{\ensuremath{\delta}}},t_j ) + l_f\sum_{j=1}^n { { \ensuremath{\mathbbm{e}}}}{{{\biggl[\sum_{i=1}^n\sum_{k=0}^\infty x_{t_j}^{n , k}(i)\wedge{{\ensuremath{\delta}}}\biggr ] } } } \end{split}\ ] ] for all and . letting first and then , the right - hand side converges to zero according to lemmas [ l : one_generation_per_island ] and [ l : concentration ] and according to the preceding step .inserting this into proves . the next step is to prove tightnessthis is analogous to the tightness proof in lemma [ l : vanishing_immigration_weak_process ] .use the lemmas [ l : tau_theta ] and [ l : second_moment_estimate_x ] instead of lemma [ l : y_2_supsum ] .so we omit this step .it remains to prove theorem [ thm : convergence ] in the case when assumption [ a : second_moments ] fails to hold .fix .let be a bounded continuous function on .it follows from assumption [ a : initial ] that converges in and thus also in distribution to . by the skorokhod representation theoremthere exists a version of such that converges almost surely to as .now the previous step implies that } } } { \xrightarrow{n { \rightarrow}\infty}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[h\biggl(\biggl(\sum_{(\iota , u,\eta)\in{{\ensuremath{\mathcal v}}}}{{\ensuremath{\delta}}}_{\eta_{t - u } } \biggr)_{t\leq t}\biggr)|x_0(\cdot)\biggr]}}}\ ] ] almost surely .taking expectations and applying the dominated convergence theorem results in } } } { \xrightarrow{n { \rightarrow}\infty}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl[h\biggl(\biggl(\sum_{(\iota , u,\eta)\in{{\ensuremath{\mathcal v}}}}{{\ensuremath{\delta}}}_{\eta_{t - u}}\biggr)_{t\leq t}\biggr)\biggr]}}}\ ] ] almost surely .this finishes the proof of theorem [ thm : convergence ] .as in section [ sec : convergence_to_vim ] , we proceed in several steps .first we define the loop - free process .let be the solution of where we agree on for and .we will refer to this process as the loop - free -process .the main two steps in our proof of theorem [ thm : comparison ] are as follows .lemma [ l : z_dominates_x ] below shows that the total mass of the -process is dominated by the total mass of the loop - free -process .lemma [ l : v_dominates_z ] then proves that the total mass of the loop - free -process is dominated by the total mass of the virgin island model .our proof of lemma [ l : v_dominates_z ] exploits the hierarchical structure of the loop - free process .note that conditioned on migration level , the islands with migration level are independent one - dimensional diffusions .we prepare this in subsection [ ssec : decomposition of a one - dimensional diffusion with immigration into subfamilies ] by studying the one - dimensional time - inhomogeneous diffusion where and .the path will later represent the mass immigrating from lower migration levels .the core of the comparison result is the following generator calculation which manifests the intuition that separating mass onto different islands increases the total mass .if is constant , then a formal generator of is where 
, see e.g. section 5.3 in . recall from .[ l : generator_estimate ] assume [ a : a1 ] .suppose that satisfy .assume to be subadditive , that is , for all with .let .if is superadditive , then if is subadditive , then if is additive , then for , the first derivative is nonnegative and the second derivative is nonpositive .thus this is inequality .the proof of inequality is analogous .if is additive , then and no property of is needed in the above calculation .note that the operator on the right - hand side of is a formal generator of the superposition of two independent solutions of .this follows from theorem 4.10.1 in .we will lift inequality between formal generators to an inequality between the associated semigroups . for thiswe use the integration by parts formula . for its formulation ,let and be two generators associated with the semigroups and , respectively .then , for , we have that if for all , see p. 367in liggett ( 1985 ) .the idea of using for a comparison is borrowed from cox et al .( 1996 ) . as the generator inequality holds for functions in , we need to show that the semigroup of preserves . this is subject of the following subsection .we write for and .the -th unit row vector is denoted as for every .recall from .[ l : delete ] for every and , we have that for all , all , all and all .the reverse inequality holds if is replaced by .the proof is by induction on .the base case is trivial .now assume that holds for some .fix , and .applying the induction hypothesis at location to the index tuple , we obtain that the last step is again the induction hypothesis .[ l : double_argument ] let , and .then the two functions and are elements of .this is also true if is replaced by and , respectively .the functions and are non - decreasing and either bounded or nonnegative .it is clear that is again -convex for and that is -convex for .it remains to prove -convexity of for .applying lemma [ l : delete ] at location to the index tuple , we obtain for all that that is , is -convex .[ l : preservation ] assume [ a : a1 ] .let and .then the function is an element of for every .if is concave , then this property still holds if is replaced by and if is replaced by , respectively .fix and .we only prove the case of being concave and as the remaining cases are similar . according to lemma [ l : double_argument ], it suffices to prove that is an element of .let , , be solutions of with respect to the same brownian motion .it is known that holds almost surely for all , see e.g. theorem v.43.1 in for the time - homogeneous case .thus the function is again non - decreasing .moreover inherits -convexity from for every .it remains to show that is -convex for .if , then -convexity of at the point shows that for every and , that is , -convexity of in the case .one can establish convexity of as in lemma 6.1 of ( this lemma 6.1 shows concavity if is -concave and smooth ) .this step uses concavity of .consequently , is -convex .this completes the proof of .lemma [ l : preservation ] extends proposition 16 of cox et al .( 1996 ) .this proposition 16 is used in to establish a comparison result between diffusions with different diffusion functions , see theorem 1 in . using the above lemma [ l : preservation ], this comparison result can be extended to more general test functions .feller s branching diffusion with immigration can be decomposed into independent families which originate either from an individual at time zero or from an immigrant , see e.g. 
theorem 1.3 in li and shiga ( 1995 ) .a diffusion does in general not agree with its family decomposition if individuals interact with each other , e.g. if the branching rate depends on the population size .if the drift function is subadditive and if the branching function is superadditive , however , then we get at least a comparison result .in that situation , the diffusion is dominated by its family decomposition .more precisely , the total mass increases in the order if we let all subfamilies evolve independently , see lemma [ l : family_decomposition ] below .the following lemma is a first step in this direction .[ l : semigroup_estimate ] assume [ a : a1 ] .let and let be locally lebesgue integrable . if is concave and is superadditive , then if is concave and is subadditive , then inequality holds with replaced by .if is subadditive and is additive , then inequality holds with replaced by .let where .we begin with the case of being simple functions . w.l.o.g .we consider and where , and as we may let depend trivially on further time points .we will prove by induction on that for the base case additionally assume .approximate and with functions having the following properties .all derivatives , , , are bounded , is concave and is superadditive .both functions vanish at zero . if , then .moreover and as for all .let be a solution of with and replaced by and , respectively , and let be an independent version hereof starting in .then is twice continuously differentiable for every , see theorem 8.4.3 in gikhman and skorokhod ( 1969 ) . in addition, lemma [ l : preservation ] proves for all ] and the integration by party formula yields that now as , converges weakly to for every , see lemma 19 in cox et al .( 1996 ) for a sketch of the proof . therefore letting inproves for if . the case of general follows by approximating with smooth functions in . for the induction step ,define note that the induction hypothesis implies that and that lemma [ l : preservation ] implies that .therefore , using the markov property and the induction hypothesis , we get that }}}\biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}f_{n+1}{{{\biggl({{{\bigl(y_{t , s}^{\zeta , x}+{{\ensuremath{\tilde{y}}}}_{t , s}^{{\ensuremath{\tilde{\zeta}}},y}\bigr)}}}_{t\geq s}\biggr ) } } } \end{split}\ ] ] for all satisfying .the last step follows from the markov property and from independence of the two processes .this proves . in case of general functions and , approximate and with simple functions and , , respectively .the process converges in the sense of finite - dimensional distributions in , see lemma [ l : second_moment_estimate_y ] , and due to tightness also weakly to the process .this completes the proof as the remaining cases are analogous .[ l : small_initial_mass_weak ] assume [ a : a1 ] and [ a : hutzenthaler2009ejp ] .then we have that for all and all where is a poisson point process on with intensity measure .the proof is analogous to the proof of lemma [ l : vanishing_immigration_weak_process ] .for convergence of finite - dimensional distributions use the convergence instead of lemma [ l : vanishing_immigration_weak_process ] .tightness follows from an estimate as in together with boundedness ( see lemma 9.9 in ) of second moments .finally we prove the main result of this subsection .the following lemma shows that the total mass increases if we let all subfamilies evolve independently . 
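The effect behind this family-decomposition comparison, namely that letting subfamilies evolve independently increases the expected total mass when the drift is concave, can be seen in a direct Monte Carlo experiment. The sketch below compares one population started at mass one with the sum of two independent populations started at mass one half each, for the logistic drift (concave) and linear variance (additive); only a single expectation functional is checked, so this illustrates the direction of the inequality rather than the full stochastic order, and all parameters are illustrative:

# Monte Carlo illustration (toy check, not a proof): with the concave logistic
# drift a*x - b*x**2 and linear variance c*x, two independent subfamilies carry
# more expected total mass than the merged population.
import numpy as np

rng = np.random.default_rng(7)
A, B, C, DT, T = 1.0, 1.0, 1.0, 0.01, 3.0

def mass_at_T(y0):
    y = y0
    for _ in range(int(T / DT)):
        if y <= 0.0:
            return 0.0
        y = max(y + (A * y - B * y * y) * DT + np.sqrt(C * y * DT) * rng.normal(), 0.0)
    return y

runs = 5000
merged = np.mean([mass_at_T(1.0) for _ in range(runs)])
split = np.mean([mass_at_T(0.5) + mass_at_T(0.5) for _ in range(runs)])
print("E[mass | one family of size 1.0]               =", round(merged, 3))
print("E[mass | two independent families of size 0.5] =", round(split, 3))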
in the special case of and being linear , inequalityis actually an equality according to the classical family decomposition of feller s branching diffusion with immigration .[ l : family_decomposition ] assume [ a : a1 ] .let be locally lebesgue integrable and let . if the drift function is concave and the diffusion function is superadditive , then for every where is a poisson point process on with intensity measure and where is an independent poisson point process on with intensity measure .if is concave and is subadditive , then holds with replaced by . if is subadditive and is additive , then holds with replaced by idea is to split the initial mass and the immigrating mass into smaller and smaller pieces .fix .let be concave and let be superadditive . according to lemma [ l : semigroup_estimate ] for every where all processes are independent of each other . letting in, the right - hand side of converges to the right - hand side of , see lemma [ l : vanishing_immigration_weak_process ] and lemma [l : small_initial_mass_weak ] .the remaining cases are analogous .[ l : z_dominates_x ] assume [ a : a1 ] .if is concave and is superadditive , then if is concave and is subadditive , then inequality holds with replaced by . if is subadditive and is additive , then inequality holds with replaced by .assume that is subadditive and that is superadditive .we follow the proof of lemma [ l : semigroup_estimate ] and begin with a generator calculation similar to lemma [ l : generator_estimate ] .let and denote the formal generators of -process and of the loop - free -process , respectively .assume that depends only on finitely many coordinates .associated with this test function is where .note that .the first partial derivatives , , are nonnegative and the second partial derivatives are nonpositive .thus we see that } } } + \frac12\sum_{i\in g}f_{ii}{\ensuremath{{\displaystyle \cdot}}}\sigma^2{{{\bigl(\sum_{k=0}^\infty x_i^{(k)}\bigr)}}}\\ & \leq \sum_{k=0}^\infty{{{\biggl [ \sum_{i\in g}f_i{{{\bigl ( \sum_{j\in g}x_j^{(k)}m(j , i)-x_i^{(k ) } + \mu{{{\bigl(x_i^{(k)}\bigr ) } } } \bigr ) } } } + \frac12\sum_{i\in g}f_{ii}{\ensuremath{{\displaystyle \cdot}}}\sigma^2{{{\bigl ( x_i^{(k ) } \bigr ) } } } \biggr]}}}\\ & = \sum_{k=0}^\infty{{{\biggl [ \sum_{i\in g}f_i{{{\bigl ( \sum_{j\in g}x_j^{(k-1)}m(j , i)-x_i^{(k ) } + \mu{{{\bigl(x_i^{(k)}\bigr ) } } } \bigr ) } } } + \frac12\sum_{i\in g}f_{ii}{\ensuremath{{\displaystyle \cdot}}}\sigma^2{{{\bigl ( x_i^{(k ) } \bigr ) } } } \biggr]}}}\\ & = { { \ensuremath{{{\ensuremath{\mathcal g}}}}}}^z { { \ensuremath{\tilde{f}}}}{{{\bigl({{{\bigl(x_i^{(k)}\bigr)}}}_{i\in g , k\in{{\ensuremath{\mathbbm{n}}}}_0}\bigr ) } } } \end{split}\ ] ] for every .now we wish to apply the integration by parts formula . 
in order to guarantee ,approximate with smooth functions in , approximate and as in the proof of lemma [ l : semigroup_estimate ] and approximate with finite sets .moreover in order to exploit the generator inequality in the integration by parts formula , we note that ( see lemma 6.1 in ) .therefore the integration by parts formula together with inequality implies that for all .in addition note that is stochastically non - decreasing in its initial configuration , see lemma 3.3 in .as stochastic monotonicity is the only input into the proof of lemma [ l : preservation ] , the assertion of lemma [ l : preservation ] holds also for .thus , for all , we have that using this , the assertion follows as in the proof of lemma [ l : semigroup_estimate ] by induction on the number of arguments of .we show in lemma [ l : v_dominates_z ] below that the total mass of the loop - free process is dominated by the total mass of the virgin island model . in the proof of this lemma, we use that the poisson point processes appearing in the definition of the virgin island model preserve convexity in a suitable way .this is subject of the following lemma .[ l : preservation_ppp ] for every vector , , let be a poisson point process on a polish space with intensity measure where are fixed measures on .if , , then the function is an element of for every measurable test function .analogous results hold if is replaced by and , respectively .the function is still non - decreasing in the first variables and -convex for all .furthermore is non - decreasing in the last variables as is stochastically non - decreasing in .fix and .let the poisson point processes , and be independent .fix , and a measurable test function .note that has the same distribution as .therefore , using -convexity of in the point , we obtain that for the last step , note that has the same distribution as and that has the same distribution as .this proves -convexity of for all .similar arguments prove -convexity for and .recall the total mass process of the -th generation of the virgin island model from for every .[ l : v_dominates_z ] assume [ a : a1 ] and [ a : migration ] .if is concave and is superadditive , then for every .if is concave and is subadditive , then inequality holds with replaced by .if is subadditive and is additive , then inequality holds with replaced by .we prove by induction on .the base case follows from .we apply lemma [ l : family_decomposition ] for the induction step .fix where and .let , , be independent poisson point processes on , independent of and with where has intensity measure note that conditioned on , the law of is equal to the law of ( defined in ) where , , are independent of each other . 
thus conditioning on and applying lemma [ l : family_decomposition ] with and , we obtain that } } } } \\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl\{{{\ensuremath{\mathbbm{e}}}}{{{\biggl[f_{n+1}{{{\bigl({{{\bigl({{\ensuremath{\bigl|z^{(k)}\bigr|}}}\bigr)}}}_{k=0,\ldots , k_0 } , \sum_{i\in g } y_{\cdot,0}^{\zeta_i^{(k_0)},0}(i)\bigr ) } } } \bigm|{{{\bigl(z^{(k)}\bigr)}}}_{k=0\ldots k_0 } \biggr ] } } } \biggr\}}}}\\ & \leq { { \ensuremath{\mathbbm{e}}}}{{{\biggl [ f_{n+1}{{{\bigl({{{\bigl({{\ensuremath{\bigl|z^{(k)}\bigr|}}}\bigr)}}}_{k=0,\ldots , k_0 } , \sum_{i\in g } \int_0^\infty \int\eta_{\cdot - u}\pi^{i,\zeta_i^{(k_0)}}(du , d\eta ) \bigr ) } } } \biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl [ f_{n+1}{{{\bigl({{{\bigl({{\ensuremath{\bigl|z^{(k)}\bigr|}}}\bigr)}}}_{k=0,\ldots , k_0 } , \int_0^\infty \int\eta_{\cdot - u}\pi^{\sum_{i\in g } \zeta_i^{(k_0)}}(du , d\eta ) \bigr ) } } } \biggr]}}}\\ & \leq { { \ensuremath{\mathbbm{e}}}}{{{\biggl [ f_{n+1}{{{\bigl({{{\bigl({{\ensuremath{\bigl|z^{(k)}\bigr|}}}\bigr)}}}_{k=0,\ldots , k_0 } , \int_0^\infty \int\eta_{\cdot - u}\pi^{{{\ensuremath{|z^{(k_0)}|}}}}(du , d\eta ) \bigr ) } } } \biggr]}}}. \end{split}\ ] ] the last inequality follows from , where we used from assumption [ a : migration ] , and where we used that is non - decreasing .next we would like to apply the induction hypothesis .however the right - hand side of depends on through a continuum of time points and not only through finitely many time points . to remedy this , we approximate the poisson point process on the right - hand side of by approximating with simple functions . for each , choose a discretization of $ ] of maximal width such that , and .define if and otherwise . for a path ,define note that for every as .thus the intensity measure converges to as .this convergence of the intensity measures implies weak convergence of the poisson point process to the poisson point process . due to lemma[ l : preservation_ppp ] , the function defined through is an element of .now we apply the induction hypothesis and obtain that }}}\\ & = { { \ensuremath{{\displaystyle \lim_{m { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl [ f{{{\bigl({{{\bigl({{\ensuremath{\bigl|z^{(k)}\bigr|}}}\bigr)}}}_{k=0,\ldots , k_0 } , \int_0^\infty \int\eta_{\cdot - u}\pi^{d_m{{\ensuremath{|z^{(k_0)}|}}}}(du , d\eta ) \bigr ) } } } \biggr]}}}\\ & \leq{{\ensuremath{{\displaystyle \lim_{m { \rightarrow}\infty}}}}}{{\ensuremath{\mathbbm{e}}}}{{{\biggl [ f{{{\bigl({{{\bigl({v^{(k)}}\bigr)}}}_{k=0,\ldots , k_0 } , \int_0^\infty \int\eta_{\cdot - u}\pi^{d_m{v^{(k_0)}}}(du , d\eta ) \bigr ) } } } \biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl [ f{{{\bigl({{{\bigl({v^{(k)}}\bigr)}}}_{k=0,\ldots , k_0 } , \int_0^\infty \int\eta_{\cdot - u}\pi^{{v^{(k_0)}}}(du , d\eta ) \bigr ) } } } \biggr]}}}\\ & = { { \ensuremath{\mathbbm{e}}}}{{{\biggl [ f{{{\bigl({{{\bigl({v^{(k)}}\bigr)}}}_{k=0,\ldots , k_0 } , v^{(k_0 + 1 ) } \bigr ) } } } \biggr]}}}. \end{split}\ ] ] putting and together completes the induction step .we prove the case of being concave and being superadditive .the remaining two cases are analogous . according to lemma[ l : z_dominates_x ] we have that for every finite subset . letting , we see that the total mass of the -process is dominated by the total mass of the loop - free -process .now we get from lemma [ l : v_dominates_z ] that for every . 
letting , we obtain that the total mass of the loop - free -process is dominated by the total mass of the virgin island model .therefore , the total mass of the -process is dominated by the total mass of the virgin island model .i thank anton wakolbinger for inspiring discussions and valuable remarks .mark kac . ,volume 1957 of _ with special lectures by g. e. uhlenbeck , a. r. hibbs , and b. van der pol .lectures in applied mathematics .proceedings of the summer seminar , boulder , colo . _ interscience publishers , london - new york , 1959 .henry p. mckean , jr .propagation of chaos for a class of non - linear parabolic equations . in _stochastic differential equations ( lecture series in differential equations , session 7 , catholic univ . , 1967 )_ , pages 4157 .air force office sci .arlington , va . , 1967 .etienne pardoux and anton wakolbinger . from exploration paths to mass excursions variations on a theme of ray and knight . in_ surveys in stochastic processes , proceedings of the 33rd spa conference in berlin , 2009 , j. blath , p. imkeller , s. roelly ( eds . ) , ems 2010 http://www.cmi.univ-mrs.fr/ pardoux / survey070410.pdf _ , 2010 . | we consider systems of interacting diffusions with local population regulation . our main result shows that the total mass process of such a system is bounded above by the total mass process of a tree of excursions with appropriate drift and diffusion coefficients . as a corollary , this entails a sufficient , explicit condition for extinction of the total mass as time tends to infinity . on the way to our comparison result , we establish that systems of interacting diffusions with uniform migration between finitely many islands converge to a tree of excursions as the number of islands tends to infinity . in the special case of logistic branching , this leads to a duality between the tree of excursions and the solution of a mckean - vlasov equation . |
with the advent of the internet , the exponential growth of the world - wide - web and routers confront people with an information overload .we are facing too much data to be able to effectively filter out the pieces of information that are most appropriate for us .a promising way is to provide personal recommendations to filter out the information .recommendation systems use the opinions of users to help them more effectively identify content of interest from a potentially overwhelming set of choices . motivated by the practical significance to the e - commerce and society , various kinds of algorithms have been proposed , such as correlation - based methods , content - based methods , the spectral analysis , principle component analysis , network - based methods , and so on . for a review of current progress , see ref . and the references therein .one of the most successful technologies for recommendation systems , called _ collaborative filtering _ ( cf ) , has been developed and extensively investigated over the past decade .when predicting the potential interests of a given user , such approach first identifies a set of similar users from the past records and then makes a prediction based on the weighted combination of those similar users opinions . despite its wide applications , collaborative filtering suffers from several major limitations including system scalability and accuracy , some physical dynamics , including mass diffusion , heat conduction and trust - based model , have found their applications in personal recommendations .these physical approaches have been demonstrated to be of both high accuracy and low computational complexity . however , the algorithmic accuracy and computational complexity may be very sensitive to the statistics of data sets .for example , the algorithm presented in ref . runs much faster than standard cf if the number of users is much larger than that of objects , while when the number of objects is huge , the advantage of this algorithm vanishes because its complexity is mainly determined by the number of objects ( see ref . for details ) . in order to increase the system scalability and accuracy of standard cf, we introduce a network - based recommendation algorithm with spreading activation , namely sa - cf .in addition , two free parameters , and are presented to increase the accuracy and personality .denoting the object set as and user set as = , a recommendation system can be fully described by an adjacent matrix , where if is collected by , and otherwise . for a given user ,a recommendation algorithm generates a ranking of all the objects he / she has not collected before .based on the user - object matrix , a user similarity network can be constructed , where each node represents a user , and two users are connected if and only if they have collected at least one common object . in the standard cf , the similarity between and can be evaluated directly by a correlation function : where is the degree of user . inspired by the diffusion process presented by zhou _ , we assume a certain amount of resource ( e.g. recommendation power ) is associated with each user , and the weight represents the proportion of the resource , which would like to distribute to . 
following a network - based resource - allocation process where each user distributes his / her initial resource equally to all the objects he / she has collected , and then each object sends back what it has received to all the users who collected it , the weight ( the fraction of the initial resource that eventually gives to ) can be expressed as : where denotes the degree of object . using this spreading process , the user correlation network can be constructed , whose edge weights are obtained by eq . ( [ equation2 ] ) . for the user - object pair , if has not yet collected ( i.e. ) , the predicted score , , is given as from the definition of eq . ( [ equation1 ] ) , one can see that , for a target user , the collection information of all of his neighbors would affect the recommendation results , which is different from the definition of reachability . based on the definitions of and , sa - cf can be given . the framework of the algorithm is organized as follows : ( i ) calculate the user similarity matrix based on the spreading approach ; ( ii ) for each user , obtain the score on every object not yet collected by ; ( iii ) sort the uncollected objects in descending order of , and recommend those in the top .

to test the algorithmic accuracy and personality , we use a benchmark data - set , namely _ movielens _ . the data consist of 1682 movies ( objects ) and 943 users , who vote on movies using discrete ratings 1 - 5 . hence we applied the coarse - graining method previously used in refs . : a movie is set to be collected by a user only if the given rating is larger than 2 . the original data contain ratings , 85.25% of which are , thus the user - object ( user - movie ) bipartite network after the coarse graining contains 85250 edges . to test the recommendation algorithms , the data set is randomly divided into two parts : the training set contains 90% of the data , and the remaining 10% of the data constitutes the probe . the training set is treated as known information , while no information in the probe set is allowed to be used for prediction . a recommendation algorithm should provide each user with an ordered queue of all its uncollected objects . it should be emphasized that the length of the queue should not be fixed artificially , because the number of uncollected movies is different for different users . for an arbitrary user , if the relation - is in the probe set ( according to the training set , is an uncollected object for ) , we measure the position of in the ordered queue . for example , if there are uncollected movies for , and is the 10th from the top , we say the position of is , denoted by . since the probe entries are actually collected by users , a good algorithm is expected to give them high recommendations , thus leading to small . therefore , the mean value of the position ( called _ ranking score _ ) , averaged over all the entries in the probe , can be used to evaluate the algorithmic accuracy : the smaller the ranking score , the higher the algorithmic accuracy , and vice versa . implementing sa - cf and cf , the average values of the ranking score are and . clearly , under the simplest initial configuration , in terms of algorithmic accuracy , the sa - cf algorithm outperforms the standard cf .

[ figure : vs. ; the black solid and red dashed curves represent the performances of sa - cf and cf , respectively ; all data points are averaged over ten independent runs with different data - set divisions . ]
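The three-step SA-CF procedure just described (diffusion-based user similarity, predicted scores for uncollected objects, and the ranking-score accuracy measure) can be sketched compactly. The weight matrix below uses one standard form of the user-to-object-to-user mass diffusion (resource split by user degree on the way out, by object degree on the way back); the exact normalizations of eqs. (2) and (3) are not reproduced here, so treat this as an assumption-labelled illustration rather than the authors' exact formulas, and the tiny adjacency matrix is invented for the example.

```python
import numpy as np

a = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)     # a[l, i] = 1 if object o_l is collected by user u_i

k_user = a.sum(axis=0)                         # user degrees k(u_i)
k_obj = a.sum(axis=1)                          # object degrees k(o_l)

# Step 1: each user u_j spreads 1/k(u_j) of its resource to every collected object.
# Step 2: each object o_l returns 1/k(o_l) of what it holds to every user that collected it.
# w[i, j] = fraction of u_j's initial resource that ends up at u_i.
w = (a / k_obj[:, None]).T @ (a / k_user[None, :])

# Predicted score of user u_i for object o_l: sum over users j of w[i, j] * a[l, j].
scores = a @ w.T                               # scores[l, i]
scores[a == 1] = -np.inf                       # never re-recommend already collected objects

def recommend(i, top_n=2):
    """Objects ranked by predicted score for user i (uncollected only)."""
    return np.argsort(-scores[:, i])[:top_n]

def ranking_score(i, probe_object):
    """Relative position of a probe object in user i's ordered queue of uncollected objects."""
    uncollected = np.flatnonzero(a[:, i] == 0)
    order = uncollected[np.argsort(-scores[uncollected, i])]
    return (np.where(order == probe_object)[0][0] + 1) / len(uncollected)

print(recommend(0), ranking_score(0, probe_object=4))
```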
[ figure : vs. ; the black solid , red dashed and green dotted curves represent the cases with typical lengths and 50 , respectively ; the blue dotted line corresponds to the optimal value ; all data points are averaged over ten independent runs with different data - set divisions . ]

[ figure : vs. ; the black solid , red dashed and green dotted curves represent the cases with typical lengths and 50 , respectively ; the blue dotted line corresponds to the optimal value ; all data points are averaged over ten independent runs with different data - set divisions . ]

[ figure : vs. ; the inset shows the relation for larger ; when approaches , the algorithmic accuracy is the same as that of sa - cf with ; all data points are averaged over ten independent runs with different data - set divisions . ]

in order to further improve the algorithmic accuracy , we propose two modified methods . similar to ref . , taking into account the potential role of object degree may give better performance . accordingly , instead of eq . ( 2 ) , we introduce a more elaborate way to obtain the user - user correlation : where is a tunable parameter . when , this method degenerates to the algorithm mentioned in the last section . the case weakens the contribution of large - degree objects to the user - user correlation , while enhances the contribution of large - degree objects . according to our daily experience , if two users and have simultaneously collected a very popular object ( with very large degree ) , it does not mean that their interests are similar ; on the contrary , if two users both collected an unpopular object ( with very small degree ) , it is very likely that they share some common and particular tastes . therefore , we expect a larger ( i.e. ) to lead to higher accuracy than the routine case . fig . [ fig1.1 ] reports the algorithmic accuracy as a function of . the curve has a clear minimum around , which strongly supports the above statement . compared with the routine case ( ) , the ranking score can be further reduced by 11.2% at the optimal value . this is indeed a great improvement for recommendation algorithms . besides accuracy , the average degree of all recommended movies and the mean value of the hamming distance are taken into account to measure the algorithmic personality . movies with higher degrees are more popular than those with smaller degrees . a personal recommendation should give a small to fit the special tastes of different users . fig . [ fig1.2 ] reports the average degree of all recommended movies as a function of . one can see from fig . [ fig1.2 ] that the average degree is negatively correlated with , thus depressing the recommendation power of high - degree objects gives more opportunity to the unpopular objects . the hamming distance , , is defined as the mean value of among any two recommended lists and , where , is the list length and is the number of overlapping objects in the two users ' recommended lists . fig . [ fig1.3 ] shows the positive correlation between and , in accordance with the simulation results in fig . [ fig1.2 ] , which indicates that depressing the influence of high - degree objects makes the recommendations more personal . the above simulation results indicate that sa - cf outperforms cf from the viewpoints of both accuracy and personality . besides the algorithmic accuracy and personality , the computational complexity should also be taken into account .
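Before turning to the computational cost, the degree-weighted variant of the correlation and the two personality measures used above (the average degree of the recommended objects and the inter-list Hamming distance) can be sketched as follows. The tunable exponent enters here as a power of the object degree inside the diffusion sum, which is an assumed reading of the modified correlation rather than the paper's exact expression; the Hamming distance follows the definition given above with list length L, and the small adjacency matrix is again invented for illustration.

```python
import numpy as np

a = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

k_user, k_obj = a.sum(axis=0), a.sum(axis=1)

def user_weights(alpha):
    """Diffusion weights with object degrees raised to a tunable power alpha
    (alpha = 1 recovers the plain weights used before; assumed form of the modified eq.)."""
    return (a / k_obj[:, None] ** alpha).T @ (a / k_user[None, :])

def top_lists(alpha, L=2):
    w = user_weights(alpha)
    scores = a @ w.T
    scores[a == 1] = -np.inf
    return [np.argsort(-scores[:, i])[:L] for i in range(a.shape[1])]

def mean_recommended_degree(lists):
    return np.mean([k_obj[l].mean() for l in lists])

def mean_hamming(lists, L=2):
    """S = 1 - Q/L averaged over user pairs, with Q the overlap of the two length-L lists."""
    n = len(lists)
    vals = [1 - len(np.intersect1d(lists[i], lists[j])) / L
            for i in range(n) for j in range(i + 1, n)]
    return np.mean(vals)

for alpha in (0.5, 1.0, 1.5):
    lists = top_lists(alpha)
    print(alpha, mean_recommended_degree(lists), mean_hamming(lists))
```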
actually , we argue that a better algorithm should simultaneously require less computation and generate higher accuracy . note that the computational complexity of eq . ( 3 ) is very high if the number of users , , is huge . in fact , the majority of user - user similarities are very small and contribute little to the final recommendation . however , those inconsequential items , corresponding to the less similar users , dominate the computational time of eq . ( 3 ) . therefore , we propose a modified algorithm , the so - called top- sa - cf , which only considers the information of the most similar users of any given user . that is to say , in the top- sa - cf , the sum in eq . ( 3 ) runs over only the most similar users of . in the process of calculating the similarity matrix , we can simultaneously record the most similar users of each user . when , the additional computing time for the top- similar users is remarkably shorter than what we save in the calculation of eq . ( 3 ) . more surprisingly , as shown in fig . [ fig1.4 ] , with properly chosen , this algorithm not only reduces the computation , but also enhances the algorithmic accuracy . this property is of practical significance , especially for huge recommender systems . from figures [ fig1.2 ] and [ fig1.3 ] , one can find that , over the same range of , the anticorrelations between , and are different in different ranges . maybe there is a phase transition in the anticorrelations . because this paper mainly focuses on the accuracy and personality of the recommendation algorithms , this issue will be investigated in the future .

in this paper , the spreading activation approach is presented to compute the user similarity of the collaborative filtering algorithm , named sa - cf . the basic sa - cf has markedly higher accuracy than the standard cf . ignoring the degree - degree correlation in user - object relations , the algorithmic complexity of sa - cf is , where and denote the average degrees of users and objects . correspondingly , the algorithmic complexity of the standard cf is , where the first term accounts for the calculation of the similarity between users , and the second term accounts for the calculation of the predictions . in reality , the number of users , , is much larger than the average object degree , ; therefore , the computational complexity of sa - cf is much less than that of the standard cf . the sa - cf has great potential significance in practice . furthermore , we proposed two modified algorithms based on sa - cf . the first algorithm weakens the contribution of large - degree objects to user - user correlations , and the second one eliminates the influence of less similar users . both modified algorithms can further enhance the accuracy of sa - cf . more significantly , with a proper choice of the parameter , top- sa - cf can simultaneously reduce the computational complexity and improve the algorithmic accuracy . a natural question about the presented algorithms is whether they are robust to other data - sets or to random recommendation . for sa - cf , the answer is yes , because it obtains the user similarity more accurately , while for the two modified algorithms the answer is no .
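The top-k restriction amounts to zeroing all but the k largest entries in each row of the user-similarity matrix before forming the predictions. A minimal self-contained sketch (same toy adjacency matrix as above; the diffusion normalization is the assumed form used in the earlier snippets):

```python
import numpy as np

a = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
k_user, k_obj = a.sum(axis=0), a.sum(axis=1)
w = (a / k_obj[:, None]).T @ (a / k_user[None, :])
np.fill_diagonal(w, 0.0)                       # a user is not its own neighbor

def topk_scores(k):
    """Predicted scores using only the k most similar users of each target user."""
    w_k = np.zeros_like(w)
    for i in range(w.shape[0]):
        keep = np.argsort(-w[i])[:k]           # indices of the k largest similarities
        w_k[i, keep] = w[i, keep]
    s = a @ w_k.T
    s[a == 1] = -np.inf
    return s

print(np.round(topk_scores(k=2), 3))
```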
since both of the two modified algorithms introduced the tunable parameters and , the optimal values of different data - sets are different .the further work would focus on how to find an effective way to obtain the optimal value exactly , then the modified algorithms could be implemented more easily .we are grateful to tao zhou , zoltn kuscsik , yi - cheng zhang and met medo for their greatly useful discussions and suggestions .this work is partially supported by sbf ( switzerland ) for financial support through project c05.0148 ( physics of risk ) , the national basic research program of china ( 973 program no .2006cb705500 ) , the national natural science foundation of china ( grant nos . 60744003 , 10635040 , 10532060 , 10472116 ), gq was supported by the liaoning education department ( grant no .20060140 ) .j. l. herlocker , j. a. konstan , k. terveen , and j. t. riedl , acm trans .* 22 * , 5 ( 2004 ) .j. a. konstan , b. n. miller , d. maltz , j. l. herlocker , l. r. gordon , and j. riedl , commun .acm * 40 * , 77 ( 1997 ) .m. balabanovi and y. shoham , commun .acm * 40 * , 66 ( 1997 ) .m. j. pazzani , artif .intell . rev .* 13 * , 393 ( 1999 ) .d. billsus and m. pazzani , _ proc .intl conf .machine learning _b. sarwar , g. karypis , j. konstan , and j. riedl , _ proc .acm webkdd workshop _ ( 2000 ) .k. goldberg , t. roeder , d. gupta , and c. perkins , inform .ret . * 4 * , 133 ( 2001 ) .zhang , m. blattner , and y .- k .yu , phys .99 * , 154301 ( 2007 ) .c . zhang , m. medo , j. ren , t. zhou , t. li , and f. yang , epl * 80 * , 68003 ( 2007 ) .t. zhou , j. ren , m. medo , and y .- c .zhang , phys .e * 76 * , 046115 ( 2007 ) .t. zhou , l .- l . jiang , r .- q .su , and y .- c .zhang , epl * 81 * , 58004 ( 2008 ) .g. adomavicius and a. tuzhilin , ieee trans .know . & data eng . *17 * , 734 ( 2005 ) .z. huang , h. chen , and d. zeng , acm trans .* 22 * , 116 ( 2004 ) .b. sarwar , g. karypis , j. konstan , and j. reidl , in proceedings of the acm conference on electronic commerce .acm , new york , 158 ( 2000 ) .note that , the ranking score of the standard cf reported here is slightly different from that of the ref . . it is because in this paper , if a movie in the probe set has not yet appeared in the known set , we automatically remove it from the probe ; while the ref . takes into account those movies only appeared in the probe via assigning zero score to them .another alterative way is to automatically move those movies from the probe set to the known set , which guarantees that any target movie in the probe set has been collected by at least one user in the known set .the values of are slightly different for these three implementations , however , the choice of different implementations has no substantial effect on our conclusions . | in this paper , we propose a spreading activation approach for collaborative filtering ( sa - cf ) . by using the opinion spreading process , the similarity between any users can be obtained . the algorithm has remarkably higher accuracy than the standard collaborative filtering ( cf ) using pearson correlation . furthermore , we introduce a free parameter to regulate the contributions of objects to user - user correlations . the numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality . we argue that a better algorithm should simultaneously require less computation and generate higher accuracy . 
accordingly , we further propose an algorithm involving only the top- similar neighbors for each target user , which has both lower computational complexity and higher algorithmic accuracy . |
in this paper we develop and analyze new methodology for inference about the mean of a stochastic process from data that consists of independent realizations of a stochastic process observed at discrete times , where each observation is contaminated by an additive error term .formally , let be a stochastic process with mean function \ ] ] and covariance function for all .we denote the zero mean process by .we observe at times , for , , that are of the form where , with mean , are random independent realizations of the process .we assume that are independent across and with zero mean and variance = \sigma_{\eps}^2 ] .many of the projection coefficients are close to zero for both bases .we consider the truncation level given above with , , and for given by ( [ var ] ) above corresponding to the brownian bridge process with covariance function .+ figures [ sparsity](b ) and [ sparsity](d ) show versus their index , for the fourier and haar basis respectively . the reconstruction is shown in red in figures [ sparsity](a ) and [ sparsity](c ) , and is very close to in both cases .notice that only 11 coefficients are needed for this good reconstruction of via the fourier basis , versus 92 via the haar basis , as we reconstruct a differentiable function with differentiable and non - differentiable basis functions , respectively .following standard terminology in non - parametric estimation , we refer to the fact that can be reconstructed well via a smaller subset of the given collection of the basis functions by saying that has a sparse representation relative to that basis .+ in what follows we show that the thresholded estimates introduced in the section above adapt to the sparsity of . of course, since is unknown , so is its sparsity relative to a given basis .nevertheless , we show that our estimators adapt to this unknown sparsity in terms of their fit , and refer to these results as oracle inequalities .the type of oracle inequalities that we establish below illustrate that the fit of our estimators depends only on the estimation errors induced by the estimates of the non - zero coefficients of a sparse representation of within a given basis . as our example indicates , and as is the case in any non - parametric estimation problem , the overall quality of our estimator will further depend on the choice of the basis used for estimation .we will therefore complement the construction of our estimator with a basis selection step .we begin by stating our results for a given basis .+ for both hard and soft threshold estimators we obtain estimation bounds on the fit at the observation points .we formulate our results in terms of the empirical supremum norm and norm defined below . 
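Before turning to the norms used in the statements below, the sparsity idea just described is easy to reproduce numerically: project a smooth mean function sampled on a grid onto a finite Fourier system and count the coefficients that are not negligible. The mean function, grid size, basis size and the 0.01 cut-off in this sketch are arbitrary choices made for illustration, not the quantities used in the paper's Brownian-bridge example.

```python
import numpy as np

m = 200                                    # number of observation times on [0, 1]
t = np.linspace(0, 1, m)
mu = np.sin(2 * np.pi * t) + 0.5 * t       # a smooth 'true' mean (arbitrary choice)

def fourier_basis(t, K):
    """First K functions of a constant + cosine/sine Fourier system on [0, 1]."""
    cols = [np.ones_like(t)]
    for j in range(1, (K + 1) // 2 + 1):
        cols.append(np.sqrt(2) * np.cos(2 * np.pi * j * t))
        cols.append(np.sqrt(2) * np.sin(2 * np.pi * j * t))
    return np.column_stack(cols)[:, :K]

B = fourier_basis(t, K=40)
coef, *_ = np.linalg.lstsq(B, mu, rcond=None)      # empirical projection coefficients

big = np.abs(coef) > 0.01
print("coefficients above 0.01:", big.sum(), "out of", coef.size)

recon = B[:, big] @ coef[big]                      # reconstruction from the few large ones
print("sup-norm reconstruction error:", np.max(np.abs(recon - mu)))
```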
for any real function ,let all theorems and results of this article require that <\infty ] .the next three theorems are proved for a given basis and the desired probability .all estimates are based on the threshold level given in ( [ trunc ] ) , for a user specified value of .the following result establishes oracle inequalities for the hard - threshold estimators .define which differs by from defined above in ( [ theolevel ] ) , for a quantity that is arbitrarily close to zero ; this is needed for purely technical reasons , and for all practical purposes and can be considered the same .theorems [ thm1 ] and [ thm2 ] yield immediately results on the performance of these estimators relative to the untruncated .in particular , the following inequality holds with probability at least as : for both estimators and .this follows directly from the above results and the triangle inequality .the first term can be viewed as the approximation error or bias term , whereas the second term represents the estimation error or standard deviation term .the bias term is unavoidable , and its size depends on the basis choice .it suggests the need for an adaptive method , that would select the basis that is best suited for the unknown underlying mean function .we discuss this in section 2.4 below .+ theorems [ thm1 ] and [ thm2 ] are novel type of oracle inequalities for thresholded estimators , as they guarantee the in probability " , rather than on average " , performance of the estimator , at any probability level of interest .these properties hold for our estimates , as they are constructed relative to variable threshold levels that depend on . to the best of our knowledge ,such results are new in the functional data context .they are also new in the general non - parametric settings , where a more traditional way to state the oracle properties of the estimators is in terms of the expected mean squared error , see , for instance , donoho and johnstone ( 1995 , 1998 ) , wasserman ( 2006 ) , tsybakov ( 2009 ) and the references therein . for completeness, we also give an assessment of our estimates in terms of the expected mean squared error in theorem [ thm3 ] below , which restates theorem [ thm1 ] in terms of expected values .to avoid technical clutter , we consider the toy estimator in lieu of .recall the notation theorem [ thm3 ] shows that the expected mean squared error of our estimator also adapts to the unknown sparsity of , as indicated by the first term in either inequality ( [ gen ] ) or ( [ normal ] ) .the second term in these inequalities is essentially an average of the quantities that constitute the first term .this is more evident from the closed form expression ( [ normal ] ) , and shows that this second term is negligible relative to the first one , especially for small values of .the results of the previous section make it clear that the basis choice influences both the bias and the variance of our estimates ; the type of basis one uses for the fit can be regarded as the tuning parameter of our estimation procedure .we give below a data adaptive procedure of selection and show in theorem [ ds ] below that the estimator based on the selected basis behaves essentially as if the best basis for approximating the unknown was known in advance .+ we select the basis via a cross - validation ( data - splitting ) technique , by randomly dividing the discretized curves in two equally sized groups . 
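A minimal numerical version of the hard- and soft-thresholded estimators reads as follows. The synthetic curves, the basis and, in particular, the constant threshold level tau are stand-ins chosen for illustration; the paper's threshold level is data-driven and depends on the user-specified probability, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 200
t = np.linspace(0, 1, m)
mu = np.sin(2 * np.pi * t) + 0.5 * t

def fourier_basis(t, K):
    cols = [np.ones_like(t)]
    for j in range(1, (K + 1) // 2 + 1):
        cols.append(np.sqrt(2) * np.cos(2 * np.pi * j * t))
        cols.append(np.sqrt(2) * np.sin(2 * np.pi * j * t))
    return np.column_stack(cols)[:, :K]

B = fourier_basis(t, K=40)

# n noisy curves: a smooth random deviation around mu plus measurement error.
amp = rng.normal(0, 0.3, size=(n, 1))
Y = mu + amp * np.sin(np.pi * t) + rng.normal(0, 0.2, size=(n, m))

beta_hat, *_ = np.linalg.lstsq(B, Y.mean(axis=0), rcond=None)   # untruncated estimate

tau = 0.05   # stand-in constant; the paper's level is estimated from the data

hard = np.where(np.abs(beta_hat) > tau, beta_hat, 0.0)          # keep-or-kill
soft = np.sign(beta_hat) * np.maximum(np.abs(beta_hat) - tau, 0.0)  # shrink toward zero

for name, b in [("untruncated", beta_hat), ("hard", hard), ("soft", soft)]:
    err = np.max(np.abs(B @ b - mu))
    print(f"{name:12s} nonzero={np.count_nonzero(b):3d}  sup error={err:.4f}")
```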
the first sample is used for constructing various estimates , say , , based on various bases , choices of and thresholding methods ( hard and soft ) .the second sample ( hold - out or validation sample ) is used to select the optimal estimate that minimizes the empirical risk over . here is the index set for the curves that are set aside to evaluate the estimators and is its cardinality .theorem [ ds ] requires that the process and the random error have moments strictly larger than 2 , which is still a very mild assumption .+ the last term in the right hand side of ( [ sel ] ) is of order , making the sum of the last two terms of order .this can be regarded as the price to pay for using a data adaptive procedure to select the appropriate basis .the factor 2 multiplying the right hand side of ( [ sel ] ) can be reduced to at the cost of increasing the last two terms on the right by a factor proportional to , for arbitrarily close to zero . to avoid notational clutter we opted for using the constant 2 .therefore , theorem [ ds ] shows that the basis selection process yields an estimate that is essentially as good as the best estimate on the list , in terms of expected squared error . since which is best can not be known in advance , as is unknown , the result of theorem [ ds ] can also be regarded as an oracle inequality . in this sectionwe will construct confidence bands for that are uniform over the parameter space .we begin with the confidence band based on a hard threshold estimator given below . set and notice that it differs by from given in ( [ trunc ] ) above .this is again needed for technical reasons , as in practice can be set to zero .[ thm5 ] ( 1 ) for all , and contains with probability at least , as .+ ( 2 ) moreover , if all non - zero coefficients exceed , the band can be made smaller by a factor 3 : contains with probability at least , as .michal benko , wolfgang hrdle and alois kneip .common functional principal components . _annals of statistics _ 37(1 ) , 1 34 , 2009 .+ david donoho and iain johnstone .adapting to the unknown smoothness via wavelet shrinkage ._ journal of the american statistical association _ 90 , 12001224,1995 .+ david donoho and iain johnstone .minimax estimation via wavelet shrinkage ._ annals of statistics _ 26 , 789921 , 1998 .+ christopher genovese and larry wasserman .adaptive confidence bands . _annals of statististics _36(2 ) , 875905 , 2008 . + daniel gervini .free - knot spline smoothing for functional data ._ journal of the royal statistical society , series b _68(4 ) , 671687 , 2006 . +hans - georg mller . functional modelling and classification of longitudinal data ._ scandinavian journal of statistics _ 32 , 223240 , 2005 .+ hans - georg mller , rituparna sen and ulrich stadtmller .functional data analysis for volatility ._ manuscript _ , 2006 .+ ` r ` development core team ( 2008 ) . `r ` : _ a language and environment for statistical computing . _ ` r ` foundation for statistical computing , vienna , austria .isbn 3 - 900051 - 07 - 0 , url http://www.r-project.org .+ james ramsay and bernard silverman ._ functional data analysis _ , 2 edition .springer , new york , 2005 .+ james ramsay and bernard silverman ._ applied functional data analysis . _springer , new york , 2002 .+ john rice and bernard silverman . 
estimating the mean and covariance structure nonparametrically when the data are curves ._ journal of the royal statistical society , series b _53(1 ) , 233243 , 1991 .+ david ruppert , simon sheather and matthew wand .an effective bandwidth selector for local least squares regression. _ journal of the american statistical association _90(432 ) , 1257 - 1270 , 1995 . + david ruppert , matthew wand and raymond carroll ._ semiparametric regression . _ cambridge university press , cambridge 2003 .+ burkhart seifert , michael brockmann , joachim engel and theo gasser .fast algorithms for nonparametric curve estimation ._ journal of computational and graphical statistics _3(2 ) , 192213 , 1994 . + alexandre b. tsybakov . _introduction to nonparametric estimation . _ springer , new york , 2009 .+ larry wasserman ._ all of nonparametric statistics ._ springer , new york , 2006 .+ marten wegkamp .model selection in nonparametric regression ._ annals of statistics_ 31(1 ) , 252273 , 2003 .+ fang yao .asymptotic distributions of nonparametric regression estimators for longitudinal and functional data ._ journal of multivariate analysis _ 98 , 4056 , 2007 .+ fang yao , hans - georg mller and jane - ling wang .functional data analysis for sparse longitudinal data ._ journal of the american statistical association _100(740 ) , 577590 , 2005 .+ jin - ting zhang and jianwei chen . statistical inferences for functional data ._ annals of statistics_ 35(3 ) , 10521079 , 2007 . | this paper proposes and analyzes fully data driven methods for inference about the mean function of a stochastic process from a sample of independent trajectories of the process , observed at discrete time points and corrupted by additive random error . the proposed method uses thresholded least squares estimators relative to an approximating function basis . the variable threshold levels are estimated from the data and the basis is chosen via cross - validation from a library of bases . the resulting estimates adapt to the unknown sparsity of the mean function relative to the selected approximating basis , both in terms of the mean squared error and supremum norm . these results are based on novel oracle inequalities . in addition , uniform confidence bands for the mean function of the process are constructed . the bands also adapt to the unknown regularity of the mean function , are easy to compute , and do not require explicit estimation of the covariance operator of the process . the simulation study that complements the theoretical results shows that the new method performs very well in practice , and is robust against large variations introduced by the random error terms . + keywords : stochastic processes ; nonparametric mean estimation ; thresholded estimators ; functional data ; oracle inequalities ; adaptive inference ; uniform confidence bands . = 1 |
the statistical significance associated with the detection of a signal source is most often reported in the form of a -value , that is , the probability under the background - only hypothesis of observing a phenomenon as or even more ` signal - like ' than the one observed by the experiment . in many simple situations, a -value can be calculated using asymptotic results such as those given by wilk s theorem , without the need of generating a large number of pseudo - experiments .this is not the case however when the procedure for detecting the source involves a search over some range , for example , when one is trying to observe a hypothetic signal from an astrophysical source that can be located at any direction in the sky .wilk s theorem does not apply in this situation since the signal model contains parameters ( i.e. the signal location ) which are not present under the null hypothesis .estimation of the -value could be then performed by repeated monte carlo simulations of the experiment s outcome under the background - only hypothesis , but this approach could be highly time consuming since for each of those simulations the entire search procedure needs to be applied to the data , and to establish a discovery claim at the level ( -value= ) the simulation needs to be repeated at least times .fortunately , recent advances in the theory of random fields provide analytical tools that can be used to address exactly such problems , in a wide range of experimental settings .such methods could be highly valuable for experiments searching for signals over large parameter spaces , as the reduction in necessary computation time can be dramatic .random field theoretic methods were first applied to the statistical hypothesis testing problem in , for some special case of a one dimensional problem . a practical implementation of this result , aimed at the high - energy physics community ,was made in .similar results for some cases of multi - dimensional problems were applied to statistical tests in the context of brain imaging .more recently , a generalized result dealing with random fields over arbitrary riemannian manifolds was obtained , openning the door for a plethora of new possible applications . herewe discuss the implementation of these results in the context of the search for astrophysical sources , taking icecube as a specific example . in section [ sec1 ]the general framework of an hypothesis test is briefly presented with connection to random fields . 
in section [ sec2 ] the main theoretical result is presented , and an example is treated in detail in section [ sec3 ] . the signal search procedure can be formulated as a hypothesis testing problem in the following way . the null ( background - only ) hypothesis , is tested against a signal hypothesis , where represents the signal strength . suppose that are some nuisance parameters describing other properties of the signal ( such as location ) , which are therefore not present under the null . additional nuisance parameters , denoted by , may be present under both hypotheses . denote by the likelihood function . one may then construct the profile likelihood ratio test statistic and reject the null hypothesis if the test statistic is larger than some critical value . note that when the signal strength is set to zero the likelihood by definition does not depend on , and the test statistic ( [ eq : q ] ) can therefore be written as where is the profile likelihood ratio with the signal nuisance parameters fixed to the point , and we have explicitly denoted by the -dimensional manifold to which the parameters belong . under the conditions of wilks theorem , for any fixed point , follows a distribution with one degree of freedom when the null hypothesis is true . when viewed as a function over the manifold , is therefore a _ random field _ , namely a set of random variables that are continuously mapped to the manifold . to quantify the significance of a given observation in terms of a -value , one is required to calculate the probability of the maximum of the field to be above some level , that is , the excursion probability of the field :

$$\mathbb{P}\Bigl[\,\max_{\theta\in\mathscr{M}} q(\theta) > u\,\Bigr].$$

estimation of excursion probabilities has been extensively studied in the framework of random fields . despite the seemingly difficult nature of the problem , some surprisingly simple closed - form expressions have been derived under general conditions , which allow one to estimate the excursion probability ( [ eq : pval ] ) when the level is large . such ` high ' excursions are of course the main subject of interest , since one is interested in estimating the -value for apparently significant ( signal - like ) fluctuations . we shall briefly describe the main theoretical results in the following section . for comprehensive and precise definitions , the reader is referred to ref . the excursion set of a field above a level , denoted by , is defined as the set of points for which the value of the field is larger than , and we will denote by the _ euler characteristic _ of the excursion set . for a 2-dimensional field , the euler characteristic can be regarded as the number of disconnected components minus the number of ` holes ' , as is illustrated in fig . [ fig : eulerillus ] . a fundamental result of states that the expectation of the euler characteristic is given by the following expression :

$$\mathbb{E}\bigl[\phi(A_u)\bigr] = \sum_{d=0}^{D} \mathscr{N}_d\, \rho_d(u).$$

the coefficients are related to some geometrical properties of the manifold and the covariance structure of the field .
for the purpose of the present analysis howeverthey can be regarded simply as a set of unknown constants .the functions are ` universal ' in the sense that they are determined only by the distribution type of the field , and their analytic expressions are known for a large class of ` gaussian related ' fields , such as with arbitrary degrees of freedom .the zeroth order term of eq .( [ eq : euler ] ) is a special case for which and are generally given by \ ] ] namely , is the euler characteristic of the entire manifold and is the tail probability of the distribution of the field .( note that when the manifold is reduced to a point , this result becomes trivial ) .when the level is high enough , excursions above become rare and the excursion set becomes a few disconnected hyper - ellipses . in that casethe euler characteristic simply counts the number of disconnected components that make up .for even higher levels this number is mostly zero and rarely one , and its expectation therefore converges asymptotically to the excursion probability .we can thus use it as an approximation to the excursion probability for large enough \approx \mathbb{p } [ \displaystyle\max_{\theta \in \mathscr{m } } q(\theta ) > u].\ ] ] the practical importance of eq .( [ eq : euler ] ) now becomes clear , as it allows to estimate the excursion probabilities above high levels .furthermore , the problem is reduced to finding the constants . since eq .( [ eq : euler ] ) holds for any level , this could be achieved simply by calculating the average of at some low levels , which can be done using a small set of monte carlo simulations .we shall now turn to a specific example where this procedure is demonstrated .the icecube experiment is a neutrino telescope located at the south pole and aimed at detecting astrophysical neutrino sources .the detector measures the energy and angular direction of incoming neutrinos , trying to distinguish an astrophysical point - like signal from a large background of atmospheric neutrinos spread across the sky .the nuisance parameters over which the search is performed are therefore the angular coordinates .we follow for the definitions of the signal and background distributions and the likelihood function .the signal is assumed to be spatially gaussian distributed with a width corresponding to the instrumental resolution of , and the background from atmospheric neutrinos is assumed to be uniform in azimuthal angle .we use a background simulation sample of 67000 events , representing roughly a year of data , provided to us by the authors of .we then calculate a profile likelihood ratio as described in the previous section .figure [ fig : map ] shows a `` significance map '' of the sky , namely the values of the test statistic as well as the corresponding excursion set above . to reduce computation time we restrict here the search space to the portion of the sky at declination angle 27 below the zenith , however all the geometrical features of a full sky searchare maintained .note that the most significance point has a value of the test statistic above 16 , which would correspond to a significance exceeding 4 if this point would have been analyzed alone , that is without the `` look elsewhere '' effect . 
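A toy version of such a significance scan, with a one-dimensional search parameter instead of sky coordinates, shows how the random field q(theta) arises in practice. The binned Poisson counting model, the Gaussian signal template and the grid profiling of the signal strength below are illustrative simplifications and not the IceCube likelihood of the analysis referred to above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins = 100
x = np.linspace(0, 10, n_bins)
b = 20.0                                     # flat expected background per bin
counts = rng.poisson(b, size=n_bins)         # one background-only pseudo-experiment

def q_of_theta(theta, width=0.5, mu_grid=np.linspace(0, 30, 301)):
    """Profile likelihood ratio test statistic at a fixed signal location theta."""
    s = np.exp(-0.5 * ((x - theta) / width) ** 2)        # signal template
    lam = b + mu_grid[:, None] * s[None, :]              # expected counts for each mu
    loglik = (counts * np.log(lam) - lam).sum(axis=1)
    return 2 * (loglik.max() - loglik[0])                # sup over mu >= 0 vs mu = 0

thetas = np.linspace(0.5, 9.5, 181)
q = np.array([q_of_theta(th) for th in thetas])          # the random field q(theta)
print("maximum of the field:", round(q.max(), 2), "at theta =", thetas[q.argmax()])
```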
in practice , the test statistic is calculated on a grid of points , or ` pixels ' , which are sufficiently smaller than the detector resolution . the computation of the euler characteristic can then be done in a straightforward way , using euler s formula $\phi = V - E + F$ , where $V$ , $E$ and $F$ are respectively the numbers of _ vertices _ ( pixels ) , _ edges _ and _ faces _ making up the excursion set . an edge is a line connecting two adjacent pixels and a face is the square made by connecting the edges of four adjacent pixels . an illustration is given in fig . [ fig : illus2 ] . ( although it is most convenient to use a simple square grid , other grid types can be used if necessary , in which case the faces would be of other polygonal shapes . )

[ figure : illustration of an excursion set on a pixel grid ; each square represents a pixel ; here the number of vertices is 18 , the number of edges is 23 and the number of faces is 7 , giving $\phi = 2$ . ]

once the euler characteristic is calculated , the coefficients of eq . ( [ eq : euler ] ) can be readily estimated . for a random field with one degree of freedom and for two search dimensions , the explicit form of eq . ( [ eq : euler ] ) is given by

$$\mathbb{E}[\phi(A_u)] = \mathbb{P}[\chi^2 > u] + e^{-u/2}\bigl(\mathscr{N}_1 + \sqrt{u}\,\mathscr{N}_2\bigr).$$

to estimate the unknown coefficients we use a set of 20 background simulations , and calculate the average euler characteristic of the excursion set corresponding to the levels ( the number of required simulations depends on the desired accuracy of the approximation ; for most practical purposes , estimating the -value with a relative uncertainty of about 10% should be satisfactory ) . this gives the estimates $33.5 \pm 2$ and . by solving for the unknown coefficients we obtain and . the prediction of eq . ( [ eq : ex1 ] ) is then compared against a set of approx . 200,000 background simulations , where for each one the maximum of is found by scanning the entire grid . the results are shown in figure [ fig:2 ] . as expected , the approximation becomes better as the -value becomes smaller . the agreement between eq . ( [ eq : ex1 ] ) and the observed -value is maintained up to the smallest -value that the available statistics allows us to estimate .

[ figure : the prediction of eq . ( [ eq : ex1 ] ) ( dashed red ) against the observed -value ( solid blue ) from a set of 200,000 background simulations ; the yellow band represents the statistical uncertainty due to the available number of background simulations . ]

a useful property of eq . ( [ eq : euler ] ) that can be illustrated by this example is the ability to consider only a small ` slice ' of the parameter space from which the expected euler characteristic ( and hence the -value ) of the entire space can be estimated , if a symmetry is present in the problem . this can be done using the ` inclusion - exclusion ' property of the euler characteristic : since the neutrino background distribution is assumed to be uniform in azimuthal angle ( ) , we can divide the sky into identical slices of azimuthal angle , as illustrated in figure [ fig : slice ] . applying ( [ eq : slicing ] ) to this case , the expected euler characteristic is given by

$$\mathbb{E}[\phi] = N\times\bigl(\mathbb{E}[\phi(\mathrm{slice})]-\mathbb{E}[\phi(\mathrm{edge})]\bigr) + \mathbb{E}[\phi(0)],$$

where an ` edge ' is the line common to two adjacent slices , and is the euler characteristic of the point at the origin ( see figure [ fig : slice ] ) .

[ figure : division of the sky into identical slices of azimuthal angle ; in this example and . ]
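The pixel-grid computation of the Euler characteristic and the two-threshold fit of the coefficients can be written in a few lines. The thresholds u1, u2 and the second measured average below are placeholder numbers (the first, 33.5, matches the value quoted in the text); only the structure of the calculation follows the procedure described above.

```python
import numpy as np
from math import erf, sqrt, exp

def euler_characteristic(mask):
    """Euler characteristic of an excursion set on a square grid via V - E + F."""
    v = mask.sum()
    e = (mask[:, 1:] & mask[:, :-1]).sum() + (mask[1:, :] & mask[:-1, :]).sum()
    f = (mask[1:, 1:] & mask[1:, :-1] & mask[:-1, 1:] & mask[:-1, :-1]).sum()
    return int(v) - int(e) + int(f)

def chi2_1_tail(u):
    """P[chi^2_1 > u] = 2 * (1 - Phi(sqrt(u)))."""
    return 1.0 - erf(sqrt(u / 2.0))

# Tiny usage example on a synthetic, crudely smoothed random field:
rng = np.random.default_rng(4)
field = rng.normal(size=(60, 60))
for _ in range(3):
    field = (field + np.roll(field, 1, 0) + np.roll(field, 1, 1)) / 3.0
mask = field > field.mean() + 1.5 * field.std()
print("Euler characteristic of the excursion set:", euler_characteristic(mask))

# Fit the coefficients N1, N2 from average EC values measured at two low thresholds:
#   E[phi(A_u)] = P[chi^2_1 > u] + exp(-u/2) * (N1 + sqrt(u) * N2)
u1, u2 = 1.0, 4.0                 # assumed threshold choices
ec1, ec2 = 33.5, 2.9              # first value from the text; second is a placeholder
A = np.array([[exp(-u1 / 2), sqrt(u1) * exp(-u1 / 2)],
              [exp(-u2 / 2), sqrt(u2) * exp(-u2 / 2)]])
rhs = np.array([ec1 - chi2_1_tail(u1), ec2 - chi2_1_tail(u2)])
n1, n2 = np.linalg.solve(A, rhs)

u_obs = 20.0                      # value of the test statistic at the most significant point
p_value = chi2_1_tail(u_obs) + exp(-u_obs / 2) * (n1 + sqrt(u_obs) * n2)
print("N1, N2 =", round(n1, 2), round(n2, 2), "   p-value estimate:", p_value)
```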
following this procedurewe obtain for this example with slices from 40 background simulations , and .using ( [ eq : euler_slice ] ) this leads to the full sky coefficients and , a result which is consistent with the full sky simulation procedure .this demonstrates that the -value can be accurately estimated by only simulating a small portion of the search space .the euler characteristic formula , a fundamental result from the theory of random fields , provides a practical mean of estimating a -value while taking into account the `` look elsewhere effect ''. this result might be particularly useful for experiments that involve a search for signal over a large parameter space , such as high energy neutrino telescopes .while the example considered here deals with a search in a 2-dimensional space , the formalism is general and could be in principle applied to any number of search dimensions .for example , if one is trying to detect a ` burst ' event then time would constitute an additional search dimension .in such case the method of slicing could be useful as well , as one will not have to simulate the entire operating period of the detector but only a small slice of time ( provided that the background does not vary in time ) .thus , the computational burden of having to perform a very large number of monte carlo simulations in order to to estimate a -value , could be greatly reduced .we thank jim braun and teresa montaruli for their help in providing us the background simulation data of icecube which was used to perform this analysis .one of us ( e. g. ) is obliged to the minerva gesellschaft for supporting this work .davies , _ hypothesis testing when a nuisance parameter is present only under the alternative ._ , biometrika * 74 * ( 1987 ) , 33 - 43 .e. gross and o. vitells , _ trial factors for the look elsewhere effect in high energy physics _ , eur . phys .j. c , * 70 * ( 2010 ) , 525 - 530 . r.j .adler and a.m. hasofer , _ level crossings for random fields_,ann .probab . * 4 * , number 1 ( 1976 ) , 1 - 12 .adler , _ the geometry of random fields _ , new york ( 1981 ) , wiley , isbn : 0471278440 .worsley , s. marrett , p. neelin , a.c .vandal , k.j .friston and a.c .evans , _ a unified statistical approach for determining significant signals in location and scale space images of cerebral activation _, human brain mapping * 4 * ( 1996 ) 58 - 73 .adler and j.e .taylor , _ random fields and geometry _ , springer monographs in mathematics ( 2007 ) .isbn : 978 - 0 - 387 - 48112 - 8 .j. ahrens et al . and the icecube collaboration , astropart . phys .* 20 * ( 2004 ) , 507 .j. braun , j. dumma , f. de palmaa , c. finleya , a. karlea and t. montaruli , _methods for point source analysis in high energy neutrino telescopes _ , astropart .* 29 * ( 2008 ) 299 - 305 [ arxiv:0801.1604 ] . | in experiments that are aimed at detecting astrophysical sources such as neutrino telescopes , one usually performs a search over a continuous parameter space ( e.g. the angular coordinates of the sky , and possibly time ) , looking for the most significant deviation from the background hypothesis . such a procedure inherently involves a `` look elsewhere effect '' , namely , the possibility for a signal - like fluctuation to appear anywhere within the search range . correctly estimating the -value of a given observation thus requires repeated simulations of the entire search , a procedure that may be prohibitively expansive in terms of cpu resources . 
recent results from the theory of random fields provide powerful tools which may be used to alleviate this difficulty , in a wide range of applications . we review those results and discuss their implementation , with a detailed example applied to neutrino point - source analysis in the icecube experiment . keywords : look - elsewhere effect , statistical significance , neutrino telescope , random fields
in recent years , the study of meteors received significant input primarily due to the campaigns to observe the leonid meteor storms ( jenniskens et al . 2000 ) .the new studies introduced novel observational methods as well as new analysis techniques . among those , the analysis of the light curves of visible meteors remains one of the more widespread techniques .the accepted picture regarding the light production by a meteor is that collisions between the ablated meteor atoms and atmospheric molecules and ions are responsible for this phenomenon . for small meteoroids ,the light production is proportional to the loss of kinetic energy by the body .as the meteoroid velocity remains almost constant throughout the luminous phase , it follows that the light production tracks the instantaneous mass loss by ablation .a general review of meteor light curve analysis was presented by hawkes et al .they discussed six improvements to traditional meteor light curve analysis in order to provide higher resolution and better information : use of generation iii image intensifier technology , digital recording techniques , digital image processing algorithms which separate the even and odd video fields , whole - image background subtraction , analysis of pixel - by - pixel high resolution meteor light curves , and utilization of coincidence and correlation techniques .however , they did not introduce novel analysis methods of the higher quality data .the classical light curve ( lc ) produced by a solid , compact , and non - fragmenting meteoroid should be smooth and exhibit its maximum luminosity near the end of the trail ( cook 1954 ) .this is the combination of the exponential increase in air density as the meteoroid penetrates into deeper atmospheric layers , and of the reduction in the surface area presented by the meteoroid to the airflow as the ablation proceeds .however , recent measurements indicate that this picture of single - body ablation may not be correct . in many cases ,faint meteors were shown to produce lcs . a basic question concerning the behavior of the light produced by ablating meteors is therefore whether this process involves at all stages a single object , or whether during the production of light the meteoroid disintegrates in a rather large number of grains ( the `` dustball '' model : hawkes & jones 1975 ) .this model assumes that the meteoroid is composed of numerous small grains with a high melting point temperature , held together by a low melting point glue .fisher et al .( 2000 ) followed hawkes & jones ( 1975 ) and suggested that most meteoroids are collections of hundreds to thousands of fundamental minute grains , at least some of which are released prior to the onset of intensive ablation .one would expect these grains , unless extremely uniform in physical properties , to become aerodynamically separated during atmospheric entry , and therefore to produce a `` wake '' , which is defined as some instantaneous meteor light production from an extended spatial region .fisher et al . presented theoretical results for wake production as a function of grain mass distribution , height of separation , zenith angle and velocity .koten & borovicka ( 2001 ) analyzed 234 meteor lcs , among which there were 110 leonids from the 1998 and 1999 showers .one of their goals was the identification of relations between the lc shapes and other parameters . 
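The classical single-body picture described above (light output proportional to the kinetic-energy loss rate of a non-fragmenting body decelerating through an exponential atmosphere) is straightforward to reproduce numerically. The sketch below is a textbook-style toy integration, not a model from any of the cited papers; the entry speed, scale height, lumped ablation constant, initial mass and luminous efficiency are all arbitrary assumed values.

```python
import numpy as np

v = 71e3               # m/s, Leonid-like entry speed (assumed)
zenith = np.deg2rad(40.0)
H = 7000.0             # atmospheric scale height, m (assumed)
rho0 = 1.2             # sea-level air density, kg/m^3
sigma = 1e-10          # lumped ablation constant in the toy mass-loss law (assumed)
m0 = 1e-6              # initial mass, kg (assumed)
tau_lum = 0.01         # luminous efficiency (assumed)

dt = 1e-3
h, m = 130e3, m0
heights, lum = [], []
while m > 1e-12 and h > 60e3:
    rho = rho0 * np.exp(-h / H)
    # Toy single-body mass-loss law: dm/dt ~ -rho * v^3 * m^(2/3).
    dm = -0.5 * sigma * rho * v**3 * max(m, 0.0) ** (2.0 / 3.0) * dt
    lum.append(-tau_lum * 0.5 * v**2 * dm / dt)   # luminosity ~ kinetic-energy loss rate
    heights.append(h)
    m += dm
    h -= v * np.cos(zenith) * dt

i_max = int(np.argmax(lum))
print("peak luminosity near %.1f km; ablation essentially complete by %.1f km"
      % (heights[i_max] / 1e3, heights[-1] / 1e3))
```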
among these , they included the leading and the trailing slopes of the lcs , defined as linear relations between the beginning of the lc and its maximum , and between the maximum and the terminal point of the lc .bellot rubio et al ( 2002 ) studied photographic light curves of relatively bright meteors with magnitudes in the range from + 2.5 to -5 , collected by jacchia et al .( 1967 ) with super schmidt observations , in order to derive the average density of meteors and to test whether the single - body theory of meteor evolution fits the observations better than that of continuous disintegration .velocities , decelerations and magnitudes were fitted simultaneously to synthetic light curves , and the ablation coefficient , the shape - density coefficient and the pre - atmospheric mass of each individual meteoroid were determined .bellot rubbio et al . could not confirm the large meteor density values determined from the quasi - continuous fragmentation models , essentially supporting the single - body ablation model .babadzhanov ( 2002 ) reached opposite conclusions , supporting the continuous fragmentation model , from an analysis of 111 photographic light curves of meteors .similar conclusions , that is a preference for continuous disintegration during re - entry of meteors , were reached by jiang & hu ( 2001 ) from an analysis of high spatial resolution meteor light curves obtained during the 1998 and 1999 leonid showers .light curves of leonid meteors collected in recent years were analyzed by murray et al .the meteors they concentrated on were fainter than those discussed by bellot rubbio et al .( 2002 ) , of 6 - 8 mag , and the observations were performed with intensified ccd video cameras with fields of view from 16 to 40 .murray et al .concentrated on the modified _f _ parameters and used these to distinguish among differences in overall light curve shapes .the _ f _ parameter , to be explained below , ranged between 0.49 and 0.66 for individual annual showers ( where _ _ f__=0.5 is defined as a symmetrical curve , __ f__.5 an early skewed one , and _ _f__.5 a late skewed curve ) .the findings indicate morphological differences between leonid meteoroids observed in different years , thus originating from different ejection epochs .murray et al . also noted the presence of very distinctive ( but not quantified ) features among yearly events , with the 1998 leonid light curves characterized by early skewed shapes while those from 1999 showed unusual `` flat topped '' curves .a similar conclusion was reached by koten & borovicka ( 2001 ) . 
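For a digitized light curve, the F parameter and the leading and trailing slopes can be computed directly from the sampled brightness. The sketch below uses a simple time-based convention for F (the fraction of the luminous duration elapsed at peak brightness, with the beginning and end taken where the intensity exceeds a small fraction of the peak); this is an assumed simplification of the classical definition, which is often expressed through heights at fixed magnitude offsets below the maximum, and the 5% level is an arbitrary choice.

```python
import numpy as np

def lightcurve_descriptors(t, intensity, level=0.05):
    """Simple descriptors of a meteor light curve sampled at times t (seconds)."""
    i_peak = np.argmax(intensity)
    above = np.flatnonzero(intensity >= level * intensity[i_peak])
    t_beg, t_end, t_max = t[above[0]], t[above[-1]], t[i_peak]
    duration = t_end - t_beg
    F = (t_max - t_beg) / duration                         # 0.5 = symmetric, < 0.5 = early skew
    mag = -2.5 * np.log10(intensity / intensity[i_peak])   # magnitudes relative to the peak
    lead = mag[above[0]] / (t_max - t_beg) if t_max > t_beg else np.inf
    trail = mag[above[-1]] / (t_end - t_max) if t_end > t_max else np.inf
    amplitude = mag[above].max()                           # brightness amplitude in magnitudes
    return dict(F=F, duration=duration, amplitude=amplitude,
                lead_slope=lead, trail_slope=trail)

# Usage on a synthetic, early-skewed light curve sampled every 20 ms:
t = np.linspace(0.0, 0.8, 41)
intensity = np.exp(-0.5 * ((t - 0.25) / 0.12) ** 2) + 0.02
print(lightcurve_descriptors(t, intensity))
```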
in this , and in all the previously mentioned publications , the lcs were discussed almost exclusively in reference to the single quantitative parameter , the skewness of the lc ( fleming et al .1993 ) .the ungluing of a complex of micro - meteoroids , with the subsequent formation of a classical trail , the epitome of the fragmentation model , was shown to fit the vhf radar observations of geminid meteors ( campbell - brown & jones 2003 ) .the interesting finding of this study is the altitude at which the radius of the trail is zero , and which can be interpreted as the region where the ungluing process begins : approximately 240-km for the geminids .this fits the region where the high altitude radar echoes connected with meteor activity were detected by the israeli l - band radar ( brosch et al .2001 ) .in addition , the light produced by many meteors was noted to be far from `` well - behaved '' and steady .smith ( 1954 ) computed a simple model to account for the sudden brightening of meteor light curves .these flares cause the brightening of the meteor by 1 - 2 mag and , as explained by smith , are the result of the release of a few thousand particles from the original single - body meteor .one explanation for this phenomenon , put forward by kramer & gorbanev ( 1990 ) , is that this flaring is produced by the shedding and spraying of a liquid film formed on the leading surface of the ablating meteor .they described a phenomenon by which the light intensity from the meteor is , then shows a sudden drop ( depression ) , after which the meteor flares in brightness . therefore , meteor lcs may show both sudden brightening episodes , as well as sudden dimmings .kramer & gorbanev remarked that meteors showing a depression in their lc have the brightness maximum earlier , on average , than meteors with no depression . in terms of the skewness parameter , such meteors would then be classified as `` early skewers '' .it seems that in order to discuss statistical properties of meteors it is necessary to use descriptors that would reduce the amount of data characterizing a single lc while providing quantitative measures .the descriptors could be the symmetry parameter described above and used extensively , or the leading and trailing slopes as described by murray et al .( 1999 ) and by koten & borovicka ( 2001 ). these could , in principle , refine the studies where tens to hundreds of lcs are collected and simultaneously analyzed , by reducing each lc to a small set of consistent numbers .the present paper presents an exploration of measurable parameters in the context of the examination of meteor lcs collected in israel during the 2002 leonid shower . 
a first attempt was already made in our contribution describing lcs of leonids 2001 , geminids 2001 , and perseids 2002 ( brosch et al .2002 ) , where the pointedness parameter_ p _ was introduced ( see below ) .the 2002 leonid shower was analyzed by arlt et al .( 2002 ) with the following preliminary characteristics .the activity was due to two dust trails , one of cometary dust ejected seven revolutions ago in 1767 that produced a peak zhr of 2510 and was seen in asia and europe , and the other by dust ejected four revolutions ago in 1866 that produced a zhr of 2940 and was seen in the americas .the first peak took place on november 19 at 04:10 ut with a full - width at half - maximum ( fwhm ) of 39 minutes .the second was in the same day at 10:47 ut with a fwhm of 25 minutes .our observations described below covered the period of the rise toward the twin peaks and a portion of the decay from the peak activity , but neither of the peaks themselves . herewe expand the discussion to include more parameters and to explore the internal correlations they might show for the case of the meteors observed in november 2002 during the leonid shower by using classical statistical techniques .this approach is novel in the study of meteor lcs and may prove useful in uncovering hidden connections among the measured parameters , as many lcs are collected and uniformly analyzed in this manner .we describe our observations in section 2 , the data reduction in section 3 , the analysis and results in section 4 , and conclude in section 5 .starting in november 1998 , the wise observatory ( wo ) is active in the observation of meteors . from 2001 onward ,these observations consist of intensified video ( iccd ) measurements , sometimes accompanied by l - band phased - array radar observations .we do not discuss here the radar observations ( e.g. 
, brosch et al .2001 ) , but concentrate exclusively on the analysis of the light curves ( lcs ) derived from the intensified video observations .the observations reported and analyzed here were collected during five nights , from november 15 - 16 to november 19 - 20 2002 , using mobile meteor detection systems .each system is based on an itt night vision 18-mm , iii - generation image intensifier ( it ) with a gaas cathode , supplied by collins electro - optics .the it is optically - coupled to an astrovid 2000 ccd video camera operating in the pal tv standard ( 50 interlaced half - frames per second , or a 20 msec exposure for each half - frame ) .the it is illuminated by a 50-mm f/0.95 navitar lens and provides a final imaged field of 6 degrees .the astrovid 2000 sends the video stream to a digital hi8 video recorder equipped with a date / time stamper and , in parallel , to a matrox meteor ii frame grabber card mounted in the docking bay of a compaq armada 850 computer .the log of observations is given in table 1 and shows the number of meteors recorded by the observing stations during each individual night as well as the total number of detected meteors . table 1 : meteor observations , by night and by camera . note that beech & murray ( 2003 ) also calculated the effect of the zenith angle on the light curve .their figure 7 displays this effect for the =1.73 model with the following results : z=60 : _ f _ =0.48 ; z=45 : _ f _ =0.45 , _ p _ =0.87 ; and z=0 : _ f _ =0.45 , _ p _ =0.70 .as mentioned above , their lcs are cut off at high altitude presumably by the evolution beginning at 120 km ; the effect primarily causes high values of _ p _ , and the true _ p _ values can not be easily recovered .we presented an objective exploration of some measurable parameters of meteor light curves based on a uniform collection of lcs from the 2002 leonid shower .the lcs belong mostly to leonid and sporadic meteors and this allowed an inter - comparison of the properties of these two groups .the analysis confirmed the reduced importance of the symmetry parameter used in many previous lc analyses , and showed that , at least for the meteor lcs studied here , the most important parameters are the duration , the skewness , and the brightness amplitude .
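the correlation and principal component analysis referred to in this discussion ( and summarized further below ) can be sketched as follows . the parameter table is filled with random placeholder values , because the measured descriptors themselves are not reproduced in this excerpt ; in practice each row would hold the duration , amplitude , skewness , pointedness and a flare measure of one light curve , e.g. as returned by the lc_descriptors sketch above .

```python
import numpy as np

# placeholder table: one row per meteor, one column per descriptor
# (duration, amplitude, skewness F, pointedness P, flare index)
rng = np.random.default_rng(0)
params = rng.normal(size=(113, 5))

# standardize each column, then diagonalize the correlation matrix
z = (params - params.mean(axis=0)) / params.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)          # eigh returns ascending order
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()
print("fraction of variance per principal component:", np.round(explained, 2))
print("loadings of the first component:", np.round(eigvec[:, 0], 2))
```

with real data the first few components and their loadings would indicate which combinations of descriptors ( duration , amplitude , skewness ) dominate the variance , as discussed below .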
comparisons of the distributions of measured parameters between leonid and sporadic meteors indicated that the leo lcs are more uniform than the spo ones .the statistical analysis showed that short - duration meteors have stronger sudden brightening and/or dimming episodes ( flares ) than long - duration ones .the long duration meteors have shallower general brightening and dimming stages .light curves with early skews , that reach their maximal brightness before the middle of their luminous phase , tend to have lcs with steep slopes .a pca showed that , at least for the meteors observed in 2002 , three principal components suffice to describe most of the variation observed among the lcs .the components are related ( in order of importance ) to the duration of a trail , its brightness amplitude , and its amount of skewness .the `` flaring '' activity presented by a meteor train seems to be related both to the duration of the trail and to its brightness amplitude .an attempt to compare the meteor lcs characterized here with theoretical models was not very successful because of a lack of published model light curves and of characterizing parameters , such as those defined and tested here .however , the use of both _ f _ and _ p _ in comparing the observations with the models of murray et al .( 2000 ) indicated similar population indices as estimated for the 2002 leonids by arlt et al ( 2002 ) .a comparison with the more recent models of beech & murray ( 2003 ) yielded a population index of , rather different from that measured for the 2002 leonids , but probably affected by the lack of data describing the high altitude lc behaviour .we acknowledge support from the israel science foundation to study meteors in israel .discussions with mr .eran ofek and mr .ilan manulis , and the continuing help of the wise observatory staff in securing meteor observations , are appreciated .this paper has greatly benefited from the remarks of an anonymous referee .brosch , n. , helled , ravit , polishook , d. , schijvarg , s. & manulis , i. 2002 , in `` asteroids , comets , meteors - acm 2002 '' .( barbara warmbein , ed . ) esa sp-500 .noordwijk : esa publications division , p. 209fleming , b. d. e. , hawkes , l. r. & jones , j. 1993 , in `` meteoroids and their parent bodies '' ( j. stohl and i.p .williams , eds . ) , bratislava : astronomical institute , slovak academy of sciences , 1993 , p.261 murray , i.s . , beech , m. , taylor , m.j . , jenniskens , p. & hawkes , r.l .2000 , in `` leonids storm research '' ( p. jenniskens , f. rietmeijer , n. brosch , and p. fonda , eds . ) , dordrecht : kluwer academic publishers , p. 351 | we investigate a uniform sample of 113 light curves of meteors collected at the wise observatory in november 2002 during a campaign to observe the leonid meteor shower . we use previously defined descriptors , such as the classical skewness parameter and a recently - defined pointedness variable , along with a number of other measurable or derived quantities , in order to explore the parameter space in search of meaningful light curve descriptors . in comparison with previous publications , we make extensive use of statistical techniques to reveal links among the various parameters and to understand their relative importance . in particular , we show that meteors with long - duration trails rise slowly to their maximal brightness and also decay slowly from the peak , while showing milder flaring than other meteors . 
early skewed meteors , with their peak brightness in the first half of the light curve , show a fast rise to the peak . we show that the duration of the luminous phase of the meteor is the most important variable differentiating among the 2002 meteor trails . the skewness parameter , which is widely used in meteor light curve analyses , appears only as the second or third in order of importance in explaining the variance among the observed light curves , with the most important parameter being related to the duration of the meteor light - producing phase . we suggest that the pointedness parameter could possibly be useful in describing differences among meteor showers , perhaps by being related to the different compositions of meteoroids , and also in comparing observations to model light curves . we compare the derived characteristics of the 2002 meteors with model predictions and conclude that more work is required to define a consistent set of measurable and derived light curve parameters that would characterize the light production from meteors , and suggest that meteor observers should consider publishing more characterizing parameters from the light curves they collect . theorists describing the light production from meteors should present their results in a form better compatible with observations . meteors , light curves , statistical analysis |
reaction - diffusion ( rd ) systems are inherent in many branches of physics , chemistry , biology , ecology etc .a review of the theory and applications of reaction - diffusion systems can be found in books and numerous articles ( see , for example ) .the popularity of rd systems is driven by the underlying richness of the nonlinear phenomena , which include stationary and spatio - temporal dissipative pattern formation , oscillations , different types of chemical waves , excitability , bistability etc .the mechanisms of formation of such nonlinear phenomena and the conditions of their emergence have been extensively studied during the last couple of decades .although the mathematical theory of such phenomena has not been developed yet due to the essential nonlinearity of these systems , from the viewpoint of applied and experimental mathematics the range of possible phenomena in rd systems is more or less understood . in recent years , there has been a great deal of interest in fractional reaction - diffusion ( frd ) systems which , on the one hand , exhibit self - organization phenomena and , on the other hand , introduce a new parameter , the fractional derivative index , which gives a greater degree of freedom for the diversity of self - organization phenomena . at the same time , the analysis of such frd systems is much more complicated from the analytical and numerical point of view . in this article , we consider two coupled reaction - diffusion systems : the first one is the classical system where ; , two variables , , are the nonlinear sources of the system modeling their production rates , characteristic times and lengths of the system , is an external parameter ; the other model is the fractional rd system with the same parameters and with fractional derivatives on the left - hand side of equations ( [ 3]),([4 ] ) instead of standard time derivatives ; these are the caputo fractional derivatives in time of the order and are represented as . the article is devoted to the second problem , and the first one is needed for comparing the obtained results with the classical ones . equations ( [ 3]),([4 ] ) at correspond to the standard rd system described by equations ( [ 1]),([2 ] ) . at describe anomalous sub - diffusion and at - anomalous superdiffusion . in this paper , we always assume that the following conditions are fulfilled on the boundaries 0; : \(i ) neumann : \(ii ) periodic : where . the stability of the steady - state constant solutions of the system ( [ 3]),([4 ] ) , which correspond to homogeneous equilibrium states , can be analyzed by linearization of the system near this solution . in this case the system ( [ 3])([4 ] ) can be transformed to the linear system where ( all derivatives are taken at the homogeneous equilibrium states , condition ( [ e ] ) ) . by substituting the solution into the frd system ( [ 3]),([4 ] ) we can get the system of linear ordinary differential equations ( [ lin ] ) with the matrix determined by the operator , the stability conditions of which are given by the eigenvalues of this matrix .let us analyze the stability of the solution ( [ e ] ) of the linear system with integer derivatives and find the conditions of this instability ( see , for example ) .we repeat this process in order to compare the results obtained with the results of the fractional rd system considered in this article .
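the defining formula of the caputo derivative did not survive in the equation above ; as a purely illustrative sketch , one common way to evaluate it numerically on a uniform grid is the so - called l1 approximation , valid for orders between 0 and 1 . this is an assumption about the discretization made for the example only , not necessarily the representation used by the authors .

```python
import numpy as np
from math import gamma

def caputo_l1(u, dt, alpha):
    """L1 approximation of the caputo derivative of order 0 < alpha < 1.

    u  : samples u(t_0), ..., u(t_n) on a uniform grid with step dt
    returns the approximate caputo derivative at the last grid point t_n.
    """
    n = len(u) - 1
    if n < 1:
        raise ValueError("need at least two samples")
    k = np.arange(n)
    b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # L1 weights
    increments = np.diff(u)[::-1]                          # u_{n-k} - u_{n-k-1}
    return dt ** (-alpha) / gamma(2.0 - alpha) * np.sum(b * increments)

# sanity check against the known result for u(t) = t, whose caputo
# derivative of order alpha is t**(1 - alpha) / gamma(2 - alpha)
t = np.linspace(0.0, 1.0, 2001)
alpha = 0.8
print(caputo_l1(t, t[1] - t[0], alpha), 1.0 / gamma(2.0 - alpha))
```

for alpha = 1 the weights above reduce to a single unit weight and the expression becomes the ordinary backward difference , consistent with the statement that equations ( [ 3]),([4 ] ) reduce to the standard rd system in that limit .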
in this case , by searching for the solution of the linear system in the form , we get a homogeneous system of linear algebraic equations for constants .the solvability of this system leads to the characteristic equation where is the identity matrix . as a result , the characteristic equation takes the form of a simple quadratic equation . the linear boundary value problem for the rd system ( [ 1]),([2 ] ) is unstable with respect to inhomogeneous wave vectors if ( ) ( turing bifurcation ) and with respect to homogeneous ( ) fluctuations ( hopf bifurcation ) if ( ) . for analyzing the equations ( [ 3]),([4 ] ) , let us also consider the linear system obtained near the homogeneous state ( [ e ] ) . as a result , a simple linear transformation can convert this linear system to a diagonal form where is a diagonal matrix of : the eigenvalues are determined by the same characteristic equation ( [ aa ] ) with matrix ( [ a ] ) , and is the matrix of eigenvectors of matrix . the systems have rich dynamics , including steady state dissipative structures , homogeneous and nonhomogeneous oscillations , and spatiotemporal patterns . in this paper , we focus mainly on the study of general properties of the solutions depending on the value of . as discussed in section 2 , there are two different regions in parameter , where the system can be stable or unstable . in the case of the steady state solutions in the form of nonhomogeneous dissipative structures are inherent to the unstable region . figures [ rys6](a)-(c ) show the steady state dissipative structure formation and figures [ rys6 ] ( d)-(f ) present the spatio - temporal evolution of dissipative structures , which eventually leads to homogeneous oscillations . in figures [ rys6](a)-(d ) , the value increases from to , and on this whole interval the structures are in a steady state .this is due to the fact that in this case the oscillatory perturbations are damped , and small oscillations appear only during the transition period . with increasing , the steady state structures change to spatio - temporal behavior ( figure [ rys6](e)-(f ) ) .the emergence of homogeneous oscillations , which destroy pattern formation ( figure [ rys6](e),(f ) ) , has a deep physical meaning .the matter is that the stationary dissipative structures consist of smooth and sharp regions of variable , and the smooth shape of .the linear system analysis shows that the homogeneous distribution of the variables is unstable with respect to oscillatory perturbations inside a wide interval of , which is much wider than the interval ( ) . at the same time , smooth distributions at the maximum and minimum values of are correspondingly . in the first approximation , these smooth regions of the dissipative structures resemble homogeneous ones and are located inside the instability regions . as a result , the unstable fluctuations lead to homogeneous oscillations , and the dissipative structures destroy themselves .we can conclude that oscillatory modes in such types of fodes have a much wider attraction region than the corresponding region of the dissipative structures . for a wide range of the parameters , the numerical solutions of the brusselator problem show similar behavior ( figure [ rys7](a)-(d ) ) .the stationary solutions emerge practically in the same way . at small , we see aperiodic formation of the structures , and , approaching , damped oscillations of the dissipative structures arise . at , non - stationary structures arise ( figure [ rys7](e ) ) .
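a compact way to express the role of the fractional index in this linear analysis is the standard matignon - type condition for systems of this form : a mode with eigenvalue lambda of the matrix is asymptotically stable when |arg ( lambda )| exceeds alpha * pi / 2 , so the marginal value of the index is ( 2 / pi ) times the smallest |arg ( lambda )| . the sketch below evaluates this for a placeholder 2x2 matrix ; the numerical entries are illustrative and are not the actual linearization of the cubic - nonlinearity model .

```python
import numpy as np

def marginal_alpha(matrix):
    """marginal fractional index for the linear system d^alpha x/dt^alpha = A x.

    a mode with eigenvalue lam is taken to be stable when
    |arg(lam)| > alpha*pi/2 (matignon-type condition), so instability
    first appears at alpha = (2/pi) * min |arg(lam)|.  a real positive
    eigenvalue is unstable for every alpha > 0.
    """
    lam = np.linalg.eigvals(np.asarray(matrix, dtype=float))
    args = np.abs(np.angle(lam))
    if np.any(args == 0.0):
        return 0.0
    return float(2.0 / np.pi * np.min(args))

# placeholder matrix with a stable complex-conjugate pair of eigenvalues:
# the integer-order system is a stable focus, but oscillatory instability
# appears once the fractional index exceeds the printed marginal value (>1)
A = np.array([[0.5, -1.0],
              [1.0, -1.0]])
print("eigenvalues:", np.linalg.eigvals(A))
print("marginal fractional index:", marginal_alpha(A))
```

this matches the qualitative picture described above : below the marginal value the oscillatory perturbations are damped , and above it oscillatory modes grow .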
in this case , the dissipative structures look quite similar to those we obtained for the regular system .the increase of leads to a larger amplitude of the pulsations .all these patterns are robust with respect to small initial perturbations .the further increase of leads to spatio - temporal chaos ( figure [ rys7](f ) ) . in contrast to the previous case , such nonhomogeneous behavior is stable and does not lead to homogeneous oscillations .the matter is that in the brusselator model , the dissipative structures are much greater in amplitude and do not have a smooth distribution at the top .it should be noted that the pulsation phenomenon of the dissipative structures is closely related to the oscillatory solutions of the ode ( figures [ rys4 ] , [ rys5 ] ) .moreover , the fractional derivative of the first variable has the greatest impact on the emergence of the oscillations .this can be seen by performing a simulation where the time derivative of the first variable is fractional and that of the second one is of integer order .it should be emphasized that the distribution of , within the solution , only shows a small deviation from the stationary value ( that is why this variable is not represented in the figures ) . in this article we developed a linear theory of instability of the reaction - diffusion system with fractional derivatives .the newly introduced parameter , the marginal value , plays the role of a bifurcation parameter . if the fractional derivative index is smaller than , the system of fodes is stable and has damped oscillatory solutions . at , the fodes become unstable , and we obtain oscillatory or even more complex , quasi - chaotic solutions . in addition , the stable and unstable domains of the system were investigated . by computer simulation of the fractional reaction - diffusion systems we provided evidence that pattern formation in the fractional case , at less than a certain value , is practically the same as in the regular case . at , the kinetics of formation becomes oscillatory . at , the oscillatory mode arises and can lead to homogeneous or nonhomogeneous oscillations . in the latter case , depending on the parameters of the medium , we can see a rich variety of spatiotemporal behavior .m. o. vlad and j. ross ._ systematic derivation of reaction - diffusion equations with distributed delays and relations to fractional reaction - diffusion equations and hyperbolic transport equations : application to the theory of neolithic transition_. phys .e 66 , 061908 ( 2002 ) .v.v.gafiychuk , b.y.datsko , yu.yu.izmajlova .analysis of the dissipative structures in reaction - diffusion systems with fractional derivatives .metody ta phys .- mech .v.49 , # 4 , 2006 , pp .109 - 116 ( in ukrainian ) k.d.oldham and j.spanier _ the fractional calculus : theory and applications of differentiation and integration to arbitrary order , vol.111 of mathematics in science and engineering _ academic press , new york , 1974 .v.v.gafiychuk , b.s.kerner , v.v.osipov , t.m.scherbatchenko .formation of pulsating thermal - diffusion autosolitons and turbulence in a nonequilibrium electron - hole plasma .phys . sem .v.25 , # 11 , ( 1991 ) ( translation of fiz.tekh .poluprovodn ( ussr ) .v.25 , no.11 , ( 1991 ) , pp .1696 - 1702 ) .v.v.gafiychuk , a.v.demchuk .analysis of the dissipative structures in gierer - meinhardt model .matematicheskie metody i physico - mechanicheskie polia .v.40 , # 2 , 1997 , pp.48 - 53 ( in ukrainian , english translation in journal of mathematical sciences , v.88 , # 4 , 1998 ) .
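as a rough illustration of the kind of computer experiment summarized in these conclusions , the sketch below integrates a zero - dimensional ( well - mixed ) fractional brusselator with an explicit l1 scheme and compares the behaviour on either side of the marginal index . the kinetic parameters , time step and the explicit treatment of the reaction term are illustrative assumptions , the scheme as written is restricted to indices between 0 and 1 , and it is not the numerical method used by the authors .

```python
import numpy as np
from math import gamma

# illustrative brusselator kinetics, with b slightly above the hopf
# threshold b = 1 + a**2 of the integer-order system
a, b = 1.0, 2.1

def kinetics(u, v):
    return np.array([a - (b + 1.0) * u + u * u * v,
                     b * u - u * u * v])

def solve_fractional_ode(alpha, dt=0.01, nsteps=6000):
    """explicit L1 stepping of d^alpha x/dt^alpha = f(x), for 0 < alpha <= 1."""
    x = np.zeros((nsteps + 1, 2))
    incr = np.zeros((nsteps, 2))                 # stored increments x^m - x^{m-1}
    x[0] = [a + 0.01, b / a]                     # small kick off the equilibrium (a, b/a)
    c = gamma(2.0 - alpha) * dt ** alpha
    k = np.arange(1, nsteps + 1)
    w = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # L1 weights b_1, b_2, ...
    for n in range(1, nsteps + 1):
        # memory term: sum_{m=1}^{n-1} b_{n-m} * (x^m - x^{m-1})
        mem = w[n - 2 :: -1] @ incr[: n - 1] if n > 1 else 0.0
        x[n] = x[n - 1] - mem + c * kinetics(*x[n - 1])
        incr[n - 1] = x[n] - x[n - 1]
    return x

# marginal index from the jacobian at the equilibrium (a, b/a)
jac = np.array([[b - 1.0, a * a], [-b, -a * a]])
alpha_c = 2.0 / np.pi * np.abs(np.angle(np.linalg.eigvals(jac)[0]))
print("marginal index:", alpha_c)

for alpha in (0.8 * alpha_c, min(1.0, 1.05 * alpha_c)):
    u_tail = solve_fractional_ode(alpha)[-2000:, 0]
    print(f"alpha = {alpha:.3f}: late-time u amplitude = {u_tail.max() - u_tail.min():.3f}")
```

the printed late - time amplitudes should differ markedly : below the marginal index the perturbation is damped , while above it the oscillations grow , mirroring the role of the fractional index as a bifurcation parameter stated above .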
| this paper is concerned with the analysis of coupled fractional reaction - diffusion equations . it provides an analytical comparison of the fractional and regular reaction - diffusion systems . as examples , the reaction - diffusion model with cubic nonlinearity and the brusselator model are considered . a detailed linear stability analysis of the system with cubic nonlinearity is provided . it is shown that by combining the fractional derivative index with the ratio of characteristic times , it is possible to find the marginal value of the index where the oscillatory instability arises . computer simulation and analytical methods are used to analyze possible solutions for a linearized system . a computer simulation of the corresponding nonlinear fractional ordinary differential equations is presented . it is shown that an increase of the fractional derivative index leads to periodic solutions , which become stochastic as the index approaches the value of 2 . it is established by computer simulation that there exists a set of stable spatio - temporal structures of the one - dimensional system under the neumann and periodic boundary conditions . the characteristic features of these solutions consist in the transformation of the steady state dissipative structures to homogeneous oscillations or spatio - temporal structures at a certain value of the fractional index . reaction - diffusion system , fractional differential equations , oscillations , dissipative structures , pattern formation , spatio - temporal structures 37n30 , 65p40 , 37n25 , 35k50 , 35k45 , 34a34 , 34c28 , 65p30 |